
The Hierarchical Learning Algorithm for Deep Neural Networks

EasyChair Preprint no. 451

12 pages · Date: August 24, 2018

Abstract

In this article, the emphasis is put on a modern artificial neural network (ANN) structure, which in the literature is known as a deep neural network. Such a network includes more than one hidden layer and comprises many standard modules with the ReLU nonlinear activation function. The learning algorithm includes two standard steps, forward and backward, and its effectiveness depends on the way the learning error is transported back through all the layers to the first layer. Taking into account the dimensionalities of all the matrices and the nonlinear characteristic of the ReLU activation function, the problem is very challenging. In practical tasks, the internal-layer weight matrices of a neural network with ReLU activation functions contain many zero-valued weight coefficients. This phenomenon has a negative impact on the convergence of the learning algorithm. When analyzing and describing an ANN structure, the first parameter one usually encounters is the number of layers "L". By applying a hierarchical structure to the learning algorithm, the ANN is divided into sub-networks. Every sub-network is responsible for finding the optimal values of its weight coefficients using a local target function to minimize the learning error. The second, coordination level of the learning algorithm is responsible for coordinating the local solutions and finding the minimum of the global target function. In each iteration, the coordinator sends coordination parameters to the first-level sub-networks. Using the input and teaching vectors, the local procedures run and find their weight coefficients. In the same step, the feedback error is calculated and sent to the coordinator. The process is repeated until the minimum of all the target functions is achieved.
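The two-level scheme described in the abstract can be illustrated with a toy numeric sketch: the network is split into two sub-networks joined by an interface signal V, each sub-network takes a step on its own local target function, and a coordinator adjusts V using the feedback error. All names, sizes, and step sizes below are illustrative assumptions, not the authors' implementation; the output block is kept linear so the sketch stays short.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: 4-dimensional inputs, 1-dimensional teaching vector.
X = rng.normal(size=(64, 4))
Y = np.maximum(X @ rng.normal(size=(4, 1)), 0.0)

def relu(z):
    return np.maximum(z, 0.0)

W1 = rng.normal(scale=0.5, size=(4, 8))   # sub-network 1: x -> v
W2 = rng.normal(scale=0.5, size=(8, 1))   # sub-network 2: v -> y_hat

# Coordination parameters: target values for the interface signal V.
V = relu(X @ W1)
lr = 0.02

def global_loss():
    return float(np.mean((relu(X @ W1) @ W2 - Y) ** 2))

loss_before = global_loss()
for _ in range(100):
    # First level, sub-network 1: local target ||relu(X W1) - V||^2.
    H1 = X @ W1
    W1 -= lr * X.T @ ((relu(H1) - V) * (H1 > 0)) / len(X)
    # First level, sub-network 2: local target ||V W2 - Y||^2.
    E2 = V @ W2 - Y                 # feedback error sent to the coordinator
    W2 -= lr * V.T @ E2 / len(X)
    # Second level: the coordinator moves the interface targets V
    # down the feedback-error gradient before the next iteration.
    V = relu(X @ W1) - lr * (E2 @ W2.T)
loss_after = global_loss()
```

In the paper the local minimizations and the coordination rule follow from the hierarchical decomposition of the global target function; here plain gradient steps stand in for the local procedures.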

Keyphrases: coordination procedure, decomposition and coordination, deep neural network, hierarchical structure

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:451,
  author = {Stanisław Płaczek and Aleksander Płaczek},
  title = {The Hierarchical Learning Algorithm for Deep Neural Networks},
  howpublished = {EasyChair Preprint no. 451},
  doi = {10.29007/63s1},
  year = {EasyChair, 2018}}