
Historical Gradient Boosting Machine

13 pages. Published: September 17, 2018

Abstract

We introduce the Historical Gradient Boosting Machine, with the objective of improving the convergence speed of gradient boosting. Our approach is analyzed from the perspective of numerical optimization in function space and takes into account gradients from previous steps, which traditional methods rarely exploit. To better exploit the guidance offered by historical gradient information, we incorporate both the accumulated previous gradients and the current gradient into the computation of the descent direction in function space. By fitting this descent direction, the weak learner benefits from historical gradients, which mitigate the greediness of the steepest-descent direction. Experimental results show that our approach improves the convergence speed of gradient boosting without a significant decrease in accuracy.
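
As a rough illustration of the idea, the Python sketch below fits each weak learner to an exponentially weighted blend of the current and accumulated past pseudo-gradients under squared-error loss. The function names fit_hgbm and predict_hgbm, the decay parameter beta, and the exact blending rule are assumptions made for illustration; they are not the weighting scheme published in the paper.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_hgbm(X, y, n_rounds=100, lr=0.1, beta=0.9, max_depth=3):
    # Squared-error gradient boosting in which each weak learner is fit to an
    # exponentially weighted blend of the current and accumulated past
    # pseudo-gradients (beta controls how much history is retained).
    # This is a hypothetical sketch of the idea, not the paper's algorithm.
    y = np.asarray(y, dtype=float)
    base = y.mean()                       # initial constant model F_0
    F = np.full_like(y, base)             # current ensemble prediction
    hist = np.zeros_like(y)               # accumulated historical gradient
    trees = []
    for _ in range(n_rounds):
        grad = y - F                      # negative gradient of 1/2 (y - F)^2
        hist = beta * hist + (1.0 - beta) * grad
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, hist)                 # weak learner fits the blended direction
        F += lr * tree.predict(X)
        trees.append(tree)
    return base, trees

def predict_hgbm(base, trees, X, lr=0.1):
    # Standard additive prediction: constant base plus shrunken tree outputs.
    return base + lr * sum(tree.predict(X) for tree in trees)

With beta = 0 the blended direction reduces to the current negative gradient, recovering ordinary gradient boosting; larger beta gives past gradients more influence over the direction the next tree fits.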

Keyphrases: Decision Tree, Gradient Boosting Machine, Historical Gradient

In: Daniel Lee, Alexander Steen and Toby Walsh (editors). GCAI-2018. 4th Global Conference on Artificial Intelligence, EPiC Series in Computing, vol 55, pages 68--80

BibTeX entry
@inproceedings{GCAI-2018:Historical_Gradient_Boosting_Machine,
  author    = {Zeyu Feng and Chang Xu and Dacheng Tao},
  title     = {Historical Gradient Boosting Machine},
  booktitle = {GCAI-2018. 4th Global Conference on Artificial Intelligence},
  editor    = {Daniel Lee and Alexander Steen and Toby Walsh},
  series    = {EPiC Series in Computing},
  volume    = {55},
  pages     = {68--80},
  year      = {2018},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {https://easychair.org/publications/paper/pCtK},
  doi       = {10.29007/2sdc}}