
Escape the local minima by error reflection

EasyChair Preprint no. 576

4 pages
Date: October 13, 2018

Abstract

One of the most important open questions in deep learning is how to make a neural network escape from local minima or saddle points. People tend to believe that a local minimum is already good enough. In this paper, we provide a theoretical analysis of the situation in which a neural network is trapped in a local minimum, and we challenge the view that "a local minimum is good enough". Furthermore, based on a property we investigate in this paper, information forgetting in local minima, we propose a possible method to address this problem: error reflection. Our experiments provide strong evidence for this method: it lowers the loss by 99% on our designed function approximation tasks, which supports our theory in the two-dimensional case. Our results on an image recognition task also show its effectiveness, reaching 98.81% accuracy on the Fashion-MNIST dataset with a parameter size of 3000.
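The abstract does not spell out how error reflection operates (the details are in the full PDF). As a rough, generic illustration of the problem it targets, the sketch below trains a tiny network on a one-dimensional function approximation task and, whenever the loss stalls, kicks the weights with small Gaussian noise. This is a stand-in heuristic, not the paper's error reflection method; every name and constant here is hypothetical.

    # Generic sketch: monitor the loss during training and perturb
    # the weights when it stalls (a flat region or local minimum).
    # NOT the paper's "error reflection" method; illustration only.
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny 1-hidden-layer network for a 1-D function approximation task.
    x = np.linspace(-3, 3, 128).reshape(-1, 1)
    y = np.sin(2 * x)

    W1 = rng.normal(0, 0.5, (1, 16)); b1 = np.zeros(16)
    W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
    lr, prev_loss = 0.05, np.inf

    for step in range(5000):
        # Forward pass with tanh activation.
        h = np.tanh(x @ W1 + b1)
        pred = h @ W2 + b2
        loss = np.mean((pred - y) ** 2)

        # Backward pass (mean-squared-error gradients).
        g_pred = 2 * (pred - y) / len(x)
        gW2 = h.T @ g_pred; gb2 = g_pred.sum(0)
        g_h = g_pred @ W2.T * (1 - h ** 2)
        gW1 = x.T @ g_h; gb1 = g_h.sum(0)

        # Every 200 steps, check progress; if the loss barely moved,
        # assume we are stuck and add small Gaussian noise to escape.
        if step % 200 == 0:
            if prev_loss - loss < 1e-6:
                W1 += rng.normal(0, 0.1, W1.shape)
                W2 += rng.normal(0, 0.1, W2.shape)
            prev_loss = loss

        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print(f"final loss: {loss:.6f}")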

Keyphrases: deep learning theory, local minima, neural networks

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:576,
  author = {Liyao Gao},
  title = {Escape the local minima by error reflection},
  howpublished = {EasyChair Preprint no. 576},
  doi = {10.29007/6wsv},
  year = {EasyChair, 2018}}