CADL2020: Workshop on Computational Aspects of Deep Learning at ICPR 2020 | Milan, Italy, January 10-15, 2021
Conference website | https://ailb-web.ing.unimore.it/cadl2020/ |
Abstract registration deadline | October 10, 2020 |
Submission deadline | October 10, 2020 |
The ICPR workshop on “Computational Aspects of Deep Learning” invites research works focused on the design of optimized deep neural network architectures and on the optimization of existing ones, including their deployment on highly scalable systems. This encompasses training on large-scale or high-dimensional datasets, the design of novel architectures and operators that improve the efficacy or efficiency of feature extraction and classification, hyperparameter optimization to enhance model performance, and solutions for training on multi-node systems such as HPC clusters.
The workshop targets any research field related to pattern recognition, from computer vision to natural language processing and multimedia, in which data- and computationally-intensive architectures are needed to solve key research issues. The workshop also welcomes constructive criticism of current data-intensive trends in machine learning and encourages new perspectives and solutions on the matter. Submissions should address computationally intensive scenarios from the point of view of architectural design, data preparation and processing, operator design, training strategies, and distributed and large-scale training. Quantitative comparisons of existing solutions and datasets are also welcome to raise awareness of the topic.
Submission Guidelines
We invite submission of full and short papers describing work in the domains suggested above or in closely related areas.
Reviewing of full and short submissions will be single-blind. Accepted submissions will be presented either as orals or as posters at the workshop, and will be published by Springer in the ICPR 2020 Workshops volume.
- Full papers: 12-15 pages
- Short papers: 6-8 pages
Submissions must be formatted according to the Springer template.
List of Topics
Topics of interest include the following:
- Design of innovative architectures and operators for data-intensive scenarios
- Video understanding and spatio-temporal feature extraction
- Distributed reinforcement learning algorithms
- Applications of large-scale pre-training techniques
- Distributed training approaches and architectures
- HPC and massively parallel architectures in Deep Learning
- Frameworks and optimization algorithms for training Deep Networks
- Model pruning and gradient compression techniques to reduce computational complexity
- Design, implementation and use of hardware accelerators
Contact
All questions about submissions should be emailed to lorenzo.baraldi@unimore.it