Saturday, 13 May 2017

CFP: IEEE TNNLS Special Issue on Discriminative Learning for Model Optimization and Statistical Inference

Model optimization and statistical inference have played a central role in various applications of computational intelligence, data analytics, and computer vision. Traditional model-centric learning approaches require properly crafted optimization and inference algorithms, as well as carefully tuned parameters. Recently, discriminative learning techniques have demonstrated their power for process-centric learning. The resulting solutions are closely related to a variety of statistical and optimization models such as sparse representation, structured regression, and conditional random fields, and are empowered by effective computational techniques such as bi-level optimization and partial differential equations (PDEs). Moreover, many deep learning models have been shown to be closely tied to discriminative learning models. For example, a problem-specific deep architecture can be formed by unfolding the model inference as an iterative process, whose parameters can be jointly learned from training data with a discriminative loss. Such a viewpoint motivates the incorporation of domain expertise and problem structures into the design of deep architectures, and helps the interpretation and performance improvement of deep models.
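To make the unfolding idea concrete, the sketch below unrolls the classical ISTA iteration for sparse coding into a fixed number of "layers", in the spirit of LISTA. This is an illustrative sketch only: the function names are hypothetical, and the per-layer weights and thresholds are fixed to their classical ISTA values, whereas in a discriminatively trained network they would be learned from data.

```python
import numpy as np

def soft_threshold(x, theta):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def unfolded_ista(D, y, n_layers=5, theta=0.1):
    """Unfold ISTA for the sparse-coding problem
        min_x 0.5 * ||y - D x||^2 + theta * ||x||_1
    into `n_layers` fixed iterations. Each iteration plays the role of
    one network layer; in a LISTA-style network W_e, W_s, and the
    threshold would be learned per layer with a discriminative loss."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    W_e = D.T / L                              # "encoder" weight
    W_s = np.eye(D.shape[1]) - (D.T @ D) / L   # "recurrent" weight
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):                  # one iteration == one layer
        x = soft_threshold(W_e @ y + W_s @ x, theta / L)
    return x
```

With enough layers this reproduces ISTA; the point of the discriminative view is that far fewer layers suffice once the weights are trained end-to-end.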

This special issue aims to promote first-class research along this direction, and offers a timely collection of work to benefit researchers and practitioners. We welcome high-quality original submissions addressing both novel theoretical and modeling advances and real-world applications of discriminative learning for model optimization and statistical inference.

Topics of interest include, but are not limited to:
  • Task-driven learning for model optimization and/or statistical inference.
  • Novel architectures and algorithms for bi-level optimization and/or PDEs.
  • Problem-specific deep architectures for solving model optimization and statistical inference.
  • Integration of optimization, statistical learning, and inference models with deep learning models.
  • Sparse representation motivated deep architectures.
  • Structured regression motivated deep architectures.
  • Conditional random field motivated recurrent neural networks.
  • Novel interpretative frameworks for the working mechanisms of representative deep learning models.
  • Theoretical analysis of deep learning models and algorithms: convergence, optimality, generalization, stability, and sensitivity analysis.
  • Applications based on the above described models and algorithms: (1) image enhancement, restoration and synthesis; (2) optical flow, stereo matching, camera localization, and normal estimation; (3) visual recognition, detection, and segmentation, and scene understanding; (4) pattern classification, clustering and dimensionality reduction; (5) medical image analysis and other novel application domains.

Important dates:
  15 July 2017 – Deadline for manuscript submission
  30 September 2017 – Reviewers' comments to authors
  15 November 2017 – Deadline for submitting revised manuscripts
  30 December 2017 – Final decision of acceptance to authors
  30 April 2018 – Tentative publication date

Guest editors:
  Wangmeng Zuo, Harbin Institute of Technology, China.
  Zhangyang (Atlas) Wang, Texas A&M University, USA.
  Xi Peng, Institute for Infocomm, A*STAR, Singapore.
  Ling Shao, University of East Anglia, UK.
  Danil Prokhorov, Toyota Research Institute North America, USA.
  Horst Bischof, Graz University of Technology, Austria.

Read the Information for Authors on the TNNLS webpage.
Submit your manuscript through the TNNLS webpage, following the standard submission procedure.
Please clearly indicate on the first page of the manuscript and in the cover letter that the manuscript is submitted to this special issue. Send an email with the subject "TNNLS special issue submission" to the leading guest editor, Prof. Wangmeng Zuo, to notify him of your submission.
Early submissions are welcome. We will start the review process as soon as we receive your contributions. 
