
Uncertainty Estimates in Deep Generative Models using Gaussian Processes

EasyChair Preprint no. 4448

12 pages · Date: October 20, 2020


We propose a new framework for estimating the uncertainty of deep generative models. In real-world applications, uncertainty estimates allow us to evaluate the reliability of the outputs of machine learning systems. Gaussian processes are a well-known machine learning method that provides such uncertainty estimates, and they have been shown to be equivalent to deep neural networks with infinitely wide layers. This equivalence suggests that Gaussian process regression can be used to perform Bayesian prediction with deep neural networks. However, existing Bayesian treatments of neural networks via Gaussian processes have so far been applied only to supervised learning; we are not aware of any work combining neural networks and Gaussian processes for unsupervised learning. We extend the Bayesian Gaussian process latent variable model, an unsupervised learning method based on Gaussian processes, and propose a Bayesian deep generative model by approximating the expectations of complex kernels. Through a series of experiments, we validate that our method provides uncertainty estimates via the relationship between predictive variance and output quality.
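The abstract's starting point, Gaussian process regression with a kernel corresponding to an infinitely wide neural network, can be sketched as follows. This is a minimal illustration, not the authors' method: it uses the order-1 arc-cosine kernel (the covariance of an infinitely wide single-hidden-layer ReLU network) and standard GP posterior formulas; the function names and the noise level are illustrative choices.

```python
import numpy as np

def arccos_kernel(X, Y):
    # Order-1 arc-cosine kernel: the covariance function of an infinitely
    # wide single-hidden-layer ReLU network (an illustrative NN-equivalent
    # kernel; the paper's kernels are an assumption not reproduced here).
    nx = np.linalg.norm(X, axis=1)[:, None]
    ny = np.linalg.norm(Y, axis=1)[None, :]
    cos_t = np.clip(X @ Y.T / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def gp_predict(X_train, y_train, X_test, noise=1e-2):
    # Standard GP regression: posterior mean and per-point variance.
    # The variance is the uncertainty estimate discussed in the abstract.
    K = arccos_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = arccos_kernel(X_test, X_train)
    K_ss = arccos_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s @ alpha                       # posterior mean
    v = np.linalg.solve(L, K_s.T)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)  # posterior variance
    return mean, var
```

The predictive variance grows for test inputs far from the training data, which is the behavior the paper exploits to relate variance to output quality.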

Keyphrases: Bayesian learning, deep learning, Gaussian process, Gaussian process latent variable model, neural network

BibTeX entry
BibTeX does not have the right entry type for preprints. This is a workaround for producing the correct reference (the @booklet entry type and key follow EasyChair's convention):

@booklet{EasyChair:4448,
  author = {Kai Katsumata and Ryoga Kobayashi},
  title = {Uncertainty Estimates in Deep Generative Models using Gaussian Processes},
  howpublished = {EasyChair Preprint no. 4448},
  year = {EasyChair, 2020}}