DLR 2019: Deep Learning Research with Engineering Applications

Website: http://dlr.teamsb.net/
Submission link: https://easychair.org/conferences/?conf=dlr2019
Abstract registration deadline: August 31, 2018
Submission deadline: November 15, 2018
Deep learning (also known as deep structured learning or hierarchical learning) is part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Learning can be supervised, partially supervised or unsupervised. Some representations are loosely inspired by information processing and communication patterns in biological nervous systems, such as neural coding, which attempts to define a relationship between various stimuli and the associated neuronal responses in the brain. Research aims to create efficient systems that learn these representations from large-scale, unlabeled data sets. Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics and drug design, where they have produced results comparable to, and in some cases superior to, those of human experts.
Deep Learning is a relatively new area of Machine Learning (ML) research, introduced with the objective of moving ML closer to one of its original goals, namely Artificial Intelligence. Deep Learning was developed as an ML approach for dealing with complex input-output mappings. While traditional ML methods successfully solve problems where the final value is a simple function of the input data, Deep Learning techniques are able to capture composite relations, for example between air pressure recordings and English words, between millions of pixels and a textual description, or between brand-related news and future stock prices, as well as in many other real-world problems. Deep learning is a class of machine learning algorithms that uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output from the previous layer as input. Learning may proceed in a supervised (e.g., classification) or unsupervised (e.g., pattern analysis) manner. These algorithms learn multiple levels of representation that correspond to different levels of abstraction, typically by resorting to some form of gradient descent for training via backpropagation. Layers that have been used in deep learning include the hidden layers of an artificial neural network and sets of propositional formulas. They may also include latent variables organized layer-wise in deep generative models, such as the nodes in Deep Belief Networks and Deep Boltzmann Machines. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision, automatic speech recognition (ASR) and human action recognition.
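The cascade described above, where each layer applies a nonlinear transform to the previous layer's output and all weights are trained by gradient descent via backpropagation, can be sketched in a few lines of NumPy. This is an illustrative toy example, not part of the CFP: a two-layer network learning the XOR mapping, with all layer sizes and hyperparameters chosen as assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a classic mapping that no single linear layer can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of nonlinear processing units: hidden feature extraction, then output.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 2.0  # illustrative learning rate
for _ in range(10000):
    # Forward pass: each successive layer consumes the previous layer's output.
    h = sigmoid(X @ W1 + b1)   # hidden representation
    p = sigmoid(h @ W2 + b2)   # prediction

    # Backward pass (backpropagation): push the error gradient layer by layer.
    dp = (p - y) / len(X)                 # gradient of cross-entropy w.r.t. output logits
    dW2 = h.T @ dp; db2 = dp.sum(axis=0)
    dh = dp @ W2.T * h * (1 - h)          # chain rule through the hidden nonlinearity
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())
```

The hidden layer here plays the role of learned feature extraction: the network discovers an intermediate representation in which the XOR classes become separable, rather than relying on a task-specific, hand-crafted feature.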
Submission Guidelines
All papers must be original and not simultaneously submitted to another journal or conference. Submissions should follow these guidelines:
- Prospective authors are invited to submit a 3-4 page abstract of the paper, along with the paper title and author details. The abstract should highlight the novelty and contribution of the proposed article.
- Authors should submit the abstract via the EasyChair submission link given at http://dlr.teamsb.net/#submission
List of Topics
- Deep neural architectures
- Deep neural applications
- Deep neural networks in action
- Deep neural networks and artificial intelligence
Publication
DLR 2019 proceedings will be published by Cambridge Scholars Publishing.
Contact
All questions about submissions should be emailed to dr.siddhartha.bhattacharyya@gmail.com