HyperAI超神经
Image Generation On Imagenet 64X64
Evaluation Metric
Bits per dim
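Bits per dim (bpd) is the model's negative log-likelihood expressed in bits and averaged over the data dimensions, so likelihoods are comparable across image resolutions; lower is better. A minimal sketch of the conversion (the function name and the example NLL value are illustrative assumptions, not taken from any leaderboard entry):

```python
import math

def bits_per_dim(nll_nats: float, num_dims: int) -> float:
    """Convert a per-image negative log-likelihood (in nats) to bits per dimension."""
    return nll_nats / (num_dims * math.log(2))

# ImageNet 64x64 RGB images have 64 * 64 * 3 = 12288 dimensions.
D = 64 * 64 * 3
nll = 29000.0  # hypothetical per-image NLL in nats
print(f"{bits_per_dim(nll, D):.3f}")
```

Dividing by `num_dims * ln(2)` converts nats to bits and normalizes by image size in one step.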
Evaluation Results
Performance of each model on this benchmark
| Model Name | Bits per dim | Paper Title |
|---|---|---|
| DenseFlow-74-10 | 3.35 (different downsampling) | Densely connected normalizing flows |
| 2-rectified flow++ (NFE=1) | - | Improving the Training of Rectified Flows |
| Performer (6 layers) | 3.719 | Rethinking Attention with Performers |
| GDD-I | - | Diffusion Models Are Innate One-Step Generators |
| Sparse Transformer 59M (strided) | 3.44 | Generating Long Sequences with Sparse Transformers |
| CTM (NFE 1) | - | Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion |
| CD (Diffusion + Distillation, NFE=2) | - | Consistency Models |
| CT (Direct Generation, NFE=1) | - | Consistency Models |
| MaCow (Unf) | 3.75 | MaCow: Masked Convolutional Generative Flow |
| TCM | - | Truncated Consistency Models |
| Very Deep VAE | 3.52 | Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images |
| CDM | - | Cascaded Diffusion Models for High Fidelity Image Generation |
| Combiner-Axial | 3.42 | Combiner: Full Attention Transformer with Sparse Computation Cost |
| GLIDE + CLS-FREE | - | Composing Ensembles of Pre-trained Models via Iterative Consensus |
| SiD | - | Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation |
| RIN | - | Scalable Adaptive Computation for Iterative Generation |
| Logsparse (6 layers) | 4.351 | Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting |
| Efficient-VDVAE | 3.30 (different downsampling) | Efficient-VDVAE: Less is more |
| MRCNF | 3.44 | Multi-Resolution Continuous Normalizing Flows |
| Gated PixelCNN (van den Oord et al., 2016c) | 3.57 | Conditional Image Generation with PixelCNN Decoders |