Squeezed Very Deep Convolutional Neural Networks for Text Classification
Andréa B. Duque; Luã Lázaro J. Santos; David Macêdo; Cleber Zanchettin

Abstract
Most research on convolutional neural networks has focused on increasing network depth to improve accuracy, resulting in a massive number of parameters that restricts the trained network to platforms with memory and processing constraints. We propose to modify the structure of the Very Deep Convolutional Neural Networks (VDCNN) model to fit mobile platform constraints while preserving performance. In this paper, we evaluate the impact of Temporal Depthwise Separable Convolutions and Global Average Pooling on the number of network parameters, storage size, and latency. The squeezed model (SVDCNN) is between 10x and 20x smaller, depending on the network depth, with a maximum size of 6MB. Regarding accuracy, the network loses between 0.4% and 1.3% and obtains lower latencies compared to the baseline model.
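The two modifications named in the abstract can be illustrated with a short PyTorch sketch. The class name, channel sizes, and sequence length below are hypothetical and only show the general pattern, not the actual SVDCNN implementation: a temporal (1D) depthwise separable convolution splits a standard convolution into a per-channel depthwise step and a 1x1 pointwise step, and global average pooling collapses the temporal dimension before classification, which is what reduces the parameter count.

```python
import torch
import torch.nn as nn

class TemporalDepthwiseSeparableConv(nn.Module):
    """Illustrative 1D depthwise separable convolution block (not the paper's code)."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # Depthwise step: one filter per input channel (groups=in_channels).
        self.depthwise = nn.Conv1d(
            in_channels, in_channels, kernel_size,
            padding=kernel_size // 2, groups=in_channels, bias=False)
        # Pointwise step: 1x1 convolution that mixes channels.
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example shapes (assumed for illustration): batch of 8, 64 channels, length 1024.
x = torch.randn(8, 64, 1024)
block = TemporalDepthwiseSeparableConv(64, 128)
features = block(x)            # -> (8, 128, 1024)
pooled = features.mean(dim=2)  # global average pooling over time -> (8, 128)
```

A standard Conv1d with the same shapes would learn in_channels x out_channels x kernel_size weights, whereas the separable block learns in_channels x kernel_size plus in_channels x out_channels, which is the source of the parameter savings the abstract reports.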
Benchmarks
| Benchmark | Model | Error (%) |
|---|---|---|
| sentiment-analysis-on-yelp-binary | SVDCNN | 4.74 |
| sentiment-analysis-on-yelp-fine-grained | SVDCNN | 46.80 |
| text-classification-on-ag-news | SVDCNN | 9.45 |