Stacked What-Where Auto-encoders

Junbo Zhao; Michael Mathieu; Ross Goroshin; Yann LeCun

Abstract

We present a novel architecture, the "stacked what-where auto-encoders" (SWWAE), which integrates discriminative and generative pathways and provides a unified approach to supervised, semi-supervised and unsupervised learning without relying on sampling during training. An instantiation of SWWAE uses a convolutional net (Convnet) (LeCun et al., 1998) to encode the input and a deconvolutional net (Deconvnet) (Zeiler et al., 2010) to produce the reconstruction. The objective function includes reconstruction terms that encourage the hidden states of the Deconvnet to be similar to those of the Convnet. Each pooling layer produces two sets of variables: the "what", which is fed to the next layer, and the complementary "where", which is fed to the corresponding layer in the generative decoder.
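
As a rough sketch (not the authors' released code), the "what"/"where" split at a pooling layer can be illustrated in PyTorch: max pooling returns both the pooled values ("what") and their argmax locations ("where"), and the decoder routes the values back through those locations before deconvolving. The module name `WhatWhereStage`, the layer sizes, and the unweighted sum of reconstruction terms below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WhatWhereStage(nn.Module):
    """One encoder/decoder stage of a what-where auto-encoder (sketch)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.enc_conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.dec_conv = nn.ConvTranspose2d(out_ch, in_ch, kernel_size=3, padding=1)

    def encode(self, x):
        h = F.relu(self.enc_conv(x))
        # "what": pooled activations; "where": argmax index in each pool window
        what, where = F.max_pool2d(h, kernel_size=2, return_indices=True)
        return what, where, h

    def decode(self, what, where):
        # Place each "what" value back at its "where" location, then deconvolve
        h_hat = F.max_unpool2d(what, where, kernel_size=2)
        return self.dec_conv(h_hat), h_hat

stage = WhatWhereStage(1, 16)
x = torch.randn(4, 1, 28, 28)            # batch of 28x28 grayscale images
what, where, h = stage.encode(x)
x_hat, h_hat = stage.decode(what, where)

# Input-level reconstruction plus an intermediate term that pulls the
# decoder's hidden state toward the encoder's; a supervised classification
# loss on "what" would be added when labels are available (weights omitted).
loss = F.mse_loss(x_hat, x) + F.mse_loss(h_hat, h)
loss.backward()
```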

Code Repositories

zhangqinghao0811/unpool (TensorFlow)
isaacgerg/keras_odds_and_ends (TensorFlow)

Benchmarks

Benchmark | Methodology | Metrics
Image Classification on CIFAR-10 | SWWAE | Percentage correct: 92.2
Image Classification on CIFAR-100 | SWWAE | Percentage correct: 69.1
Image Classification on MNIST | Zhao et al. (2015) (auto-encoder) | Percentage error: 4.76
Image Classification on STL-10 | SWWAE | Percentage correct: 74.3
Semi-Supervised Image Classification on STL-10 | SWWAE | Accuracy: 74.30
