Image Classification on MNIST
Evaluation Metric
Percentage error
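Percentage error is the fraction of misclassified test examples expressed as a percentage, i.e. 100 minus the top-1 accuracy. A minimal sketch of the computation (the function name `percentage_error` is illustrative, not from any benchmark codebase):

```python
# Minimal sketch: percentage error is the share of misclassified
# test-set examples, expressed as a percentage (100 - accuracy).
def percentage_error(y_true, y_pred):
    """Return the classification error rate in percent."""
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return 100.0 * wrong / len(y_true)

# Example: 2 mistakes out of 8 predictions -> 25.0% error.
print(percentage_error([0, 1, 2, 3, 4, 5, 6, 7],
                       [0, 1, 2, 3, 4, 5, 7, 6]))
```

On the full MNIST test set the denominator would be 10,000 images, so an error of 0.23 corresponds to 23 misclassified digits.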
Benchmark Results
Performance of each model on this benchmark.
Comparison Table
Model | Percentage error (%) |
---|---|
multi-column-deep-neural-networks-for-image | 0.23 |
vision-models-are-more-robust-and-fair-when | 0.58 |
on-second-order-behaviour-in-augmented-neural | 0.37 |
enhanced-image-classification-with-a-fast | 0.4 |
pcanet-a-simple-deep-learning-baseline-for | 0.6 |
learning-in-wilson-cowan-model-for | - |
batch-normalized-maxout-network-in-network | 0.24 |
cnn-filter-db-an-empirical-investigation-of | - |
deep-fried-convnets | 0.7 |
exact-how-to-train-your-accuracy | 0.33 |
lets-keep-it-simple-using-simple | 0.25 |
network-in-network | 0.5 |
the-tsetlin-machine-a-game-theoretic-bandit | 1.8 |
spinalnet-deep-neural-network-with-gradual-1 | 0.28 |
spike-time-displacement-based-error | - |
explaining-and-harnessing-adversarial | 0.8 |
personalized-federated-learning-with-hidden | - |
rmdl-random-multimodel-deep-learning-for | 0.18 |
evaluating-the-performance-of-taaf-for-image | 0.48 |
competitive-multi-scale-convolution | 0.3 |
fkan-fractional-kolmogorov-arnold-networks | - |
dynamic-routing-between-capsules | 0.25 |
robust-training-in-high-dimensions-via-block | - |
trainable-activations-for-image | 3.0 |
sparse-activity-and-sparse-connectivity-in | 0.8 |
a-block-based-convolutional-neural-network | - |
performance-of-gaussian-mixture-model | - |
accelerating-spiking-neural-network-training | - |
an-evolutionary-approach-to-dynamic | - |
deep-convolutional-neural-networks-as-generic | 0.5 |
trainable-activations-for-image | 3.6 |
sparse-networks-from-scratch-faster-training | 1.26 |
regularization-of-neural-networks-using | 0.21 |
textcaps-handwritten-character-recognition | 0.29 |
a-branching-and-merging-convolutional-network | 0.13 |
binaryconnect-training-deep-neural-networks | 1.0 |
convolutional-sequence-to-sequence-learning | 1.41 |
apac-augmented-pattern-classification-with | 0.23 |
on-the-importance-of-normalisation-layers-in | 0.4 |
on-the-ideal-number-of-groups-for-isometric | 1.67 |
a-single-graph-convolution-is-all-you-need | 1.96 |
convolutional-clustering-for-unsupervised | 1.4 |
unsupervised-feature-learning-with-c-svddnet | 0.4 |
trainable-activations-for-image | 2.8 |
ensemble-learning-in-cnn-augmented-with-fully | 0.16 |
tensorizing-neural-networks | 1.8 |
neupde-neural-network-based-ordinary-and | 0.51 |
training-very-deep-networks | 0.5 |
training-neural-networks-with-local-error | 0.26 |
renet-a-recurrent-neural-network-based | 0.5 |
improved-training-speed-accuracy-and-data | 0.53 |
wavemix-resource-efficient-token-mixing-for | 0.29 |
generalizing-pooling-functions-in | 0.3 |
parametric-matrix-models | 2.62 |
learning-local-discrete-features-in | 0.20 |
xnodr-and-xnidr-two-accurate-and-fast-fully | - |
parametric-matrix-models | 1.01 |
the-weighted-tsetlin-machine-compressed | 1.5 |
fractional-max-pooling | 0.3 |
stacked-what-where-auto-encoders | 4.76 |
deeply-supervised-nets | 0.4 |
hybrid-orthogonal-projection-and-estimation | 0.4 |
the-convolutional-tsetlin-machine | 0.6 |
a-novel-lightweight-convolutional-neural | 0.29 |
projectionnet-learning-efficient-on-device | 5.0 |
stochastic-optimization-of-plain | 0.17 |
all-you-need-is-a-good-init | 0.4 |
diffprune-neural-network-pruning-with | 0.6 |
maxout-networks | 0.5 |
exploring-effects-of-hyperdimensional-vectors | - |
augmented-neural-odes | 0.37 |
the-backpropagation-algorithm-implemented-on | - |
improving-k-means-clustering-performance-with | - |
efficient-capsnet-capsule-network-with-self | 0.16 |
rkan-rational-kolmogorov-arnold-networks | - |
Model 77 | 0.5 |
augmented-neural-odes | 1.8 |
convolutional-kernel-networks | 0.4 |