Abstract
We develop a probabilistic framework for deep learning based on the Deep Rendering Mixture Model (DRMM), a new generative probabilistic model that explicitly captures variations in data due to latent task nuisance variables. We demonstrate that max-sum inference in the DRMM yields an algorithm that exactly reproduces the operations in deep convolutional neural networks (DCNs), providing a first-principles derivation. Our framework provides new insights into the successes and shortcomings of DCNs, as well as a principled route to their improvement. DRMM training via the Expectation-Maximization (EM) algorithm is a powerful alternative to DCN back-propagation, and initial training results are promising. Classification based on the DRMM and other variants outperforms DCNs on supervised digit classification, training 2-3× faster while achieving similar accuracy. Moreover, the DRMM is applicable to semi-supervised and unsupervised learning tasks, achieving results that are state-of-the-art in several categories on the MNIST benchmark and comparable to the state of the art on the CIFAR10 benchmark.
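The claimed correspondence between max-sum inference and DCN operations can be illustrated concretely. The NumPy sketch below is our own illustration, not code from the paper; the names `dcn_layer_scores` and `templates` are hypothetical. It shows how, assuming a translational nuisance variable and a latent on/off rendering switch, max-marginalizing the latent variables reduces to the convolution, ReLU, and max-pooling operations of a single DCN layer.

```python
import numpy as np

rng = np.random.default_rng(0)

def dcn_layer_scores(x, templates):
    """Score each class template against signal x via max-sum inference.

    x         : (n,) input signal
    templates : (C, k) one rendering template per class, with k <= n
    returns   : (C,) one score per class
    """
    n = x.shape[0]
    C, k = templates.shape
    scores = np.empty(C)
    for c in range(C):
        # Convolution step: inner product of the class template with x at
        # every shift, i.e., one term per translational nuisance value.
        acts = np.array([templates[c] @ x[t:t + k] for t in range(n - k + 1)])
        # ReLU: max-marginalize the latent on/off (rendering) switch.
        acts = np.maximum(acts, 0.0)
        # Max-pooling: max-marginalize the latent translation.
        scores[c] = acts.max()
    return scores

x = rng.standard_normal(32)
templates = rng.standard_normal((10, 5))
print("predicted class:", int(np.argmax(dcn_layer_scores(x, templates))))
```

In this toy setting the winning class is simply the one whose template best matches the input at its best shift, which is exactly what a convolution/ReLU/max-pool stack computes.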
| Original language | English (US) |
| --- | --- |
| Title of host publication | Advances in Neural Information Processing Systems |
| Pages | 2558-2566 |
| Number of pages | 9 |
| State | Published - 2016 |
| Event | 30th Annual Conference on Neural Information Processing Systems: NIPS 2016 - Barcelona, Spain. Duration: Dec 5, 2016 → Dec 10, 2016 |
Conference
| Conference | 30th Annual Conference on Neural Information Processing Systems |
| --- | --- |
| Abbreviated title | NIPS |
| Country/Territory | Spain |
| City | Barcelona |
| Period | 12/5/16 → 12/10/16 |
ASJC Scopus subject areas
- Computer Networks and Communications
- Information Systems
- Signal Processing