EMC2: The Sixth Workshop on Energy Efficient Machine Learning and Cognitive Computing
San Jose, CA, United States, December 5, 2020
Conference website: https://www.emc2-ai.org/
Submission link: https://easychair.org/conferences/?conf=emc23
Abstract registration deadline: September 8, 2020
Submission deadline: October 15, 2020
IMPORTANT DATES:
Paper submission: October 15, 2020 (11:59 pm PST)
Acceptance and rebuttals: November 9, 2020
Camera ready: November 23, 2020
Workshop: December 5, 2020
CALL FOR PAPERS:
The Sixth Workshop on Energy Efficient Machine Learning and Cognitive Computing (EMC2) will be held virtually on December 5, 2020.
Artificial intelligence continues to proliferate in everyday life, aided by advances in algorithms, vast amounts of data, and enormous compute power. With the growing prominence of AI comes a growing awareness of the energy cost of developing and deploying it. Training the most successful AI models has become exceedingly power-hungry, often dwarfing the energy needs of entire households for years. At the edge, AI applications are ubiquitous in cell phones, appliances, smart sensors, vehicles, and even wildlife monitors, where efficiency is paramount for practical reasons. Naturally, these applications have diverse requirements for performance, energy, reliability, accuracy, and security that demand a holistic approach to designing the hardware, software, and intelligent algorithms to achieve the best outcome.
The goal of this workshop is to provide a forum for researchers exploring novel ideas in the field of energy-efficient machine learning and artificial intelligence. We invite full-length papers describing original, cutting-edge, and even work-in-progress research on efficient machine learning. Suggested topics for papers include, but are not limited to:
- Neural network architectures for resource-constrained applications
- Efficient hardware designs to implement neural networks, including sparsity, locality, and systolic designs
- Power- and performance-efficient memory architectures suited for neural networks
- Network reduction techniques: approximation, quantization, reduced precision, pruning, distillation, and reconfiguration
- Exploring the interplay of precision, performance, power, and energy through benchmarks, workloads, and characterization
- Performance potential, limit studies, bottleneck analysis, profiling, and synthesis of workloads
- Simulation and emulation techniques, frameworks, tools, and platforms for machine learning
- Optimizations to improve the performance of training techniques, including on-device and large-scale learning
- Load balancing, efficient task distribution, and communication-computation overlapping for optimal performance
- Unique verification, validation, determinism, robustness, bias, safety, and privacy challenges in efficient AI systems
All submissions are reviewed by our diverse, multi-national panel of reviewers following a single-blind review process. Accepted papers will be published as part of the Workshop proceedings shortly after the meeting. Please refer to our submission page for more details and formatting guidelines: https://www.emc2-ai.org/submission
Organizing Committee
Raj Parihar, Microsoft
Michael Goldfarb, Qualcomm
Satyam Srivastava, Intel
Tao Sheng, Amazon
Sikandar Mashayak, Apple
Bita Darvish Rouhani, Microsoft
Kushal Datta, Nvidia
Program Committee
Raj Parihar, Microsoft
Michael Goldfarb, Qualcomm
Satyam Srivastava, Intel
Tao Sheng, Amazon
Sikandar Mashayak, Apple
Bita Darvish Rouhani, Microsoft
Kushal Datta, Nvidia
Jeaff Wang, Amazon
Mahdi N. Bojnordi, University of Utah
Krishna Nagar, Intel
Debu Pal, Cadence
Sushant Kondguli, Samsung
Ananya Pareek, Apple
Venue
The workshop will be held virtually.
Contact
All questions about submissions should be emailed to the submission chairs, Satyam Srivastava (satyam.srivastava@intel.com) or Tao Sheng (tsheng@amazon.com).