Qi Wang, Haopeng Li, Yuan Yuan
Abstract
Video frame interpolation achieves temporal super-resolution by generating smooth transitions between frames. Although deep neural networks have achieved great success, the synthesized images still suffer from poor visual quality and unsatisfactory artifacts. In this paper, we propose a novel network structure that leverages residue refinement and adaptive weighting to synthesize in-between frames. The residue refinement technique is applied to both optical flow estimation and image generation for higher accuracy and better visual appearance, while the adaptive weight map combines the forward- and backward-warped frames to reduce artifacts. Moreover, all sub-modules in our method are implemented as U-Nets with reduced depth, so efficiency is guaranteed. Experiments on public datasets demonstrate the effectiveness and superiority of our method over state-of-the-art approaches.
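The adaptive-weight fusion described in the abstract can be sketched as follows: a per-pixel weight map blends the forward- and backward-warped frames. This is a minimal illustrative sketch, not the paper's implementation; the function name, array shapes, and uniform toy weights are assumptions.

```python
import numpy as np

def blend_warped(warp_fwd, warp_bwd, weight_map):
    """Fuse forward- and backward-warped frames with a per-pixel
    adaptive weight map in [0, 1] (hypothetical interface).

    In RRIN-style pipelines such a map would be predicted by a
    sub-network; here it is supplied directly for illustration.
    """
    w = np.clip(weight_map, 0.0, 1.0)  # keep weights in a valid range
    return w * warp_fwd + (1.0 - w) * warp_bwd

# Toy example: 2x2 single-channel "frames" warped to the target time t.
f = np.full((2, 2), 1.0)   # frame 0 warped forward to t
b = np.full((2, 2), 3.0)   # frame 1 warped backward to t
w = np.full((2, 2), 0.5)   # uniform weights -> simple average
out = blend_warped(f, b, w)  # every pixel becomes 0.5*1.0 + 0.5*3.0 = 2.0
```

With non-uniform weights, occluded regions can lean on whichever warped frame is more reliable at each pixel, which is the artifact-reduction role the abstract attributes to the weight map.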
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| video-frame-interpolation-on-msu-video-frame | RRIN | LPIPS: 0.072, MS-SSIM: 0.902, PSNR: 25.76, SSIM: 0.893, VMAF: 59.82 |
| video-frame-interpolation-on-ucf101-1 | RRIN | PSNR: 34.93, SSIM: 0.9496 |
| video-frame-interpolation-on-vimeo90k | RRIN | PSNR: 35.22, SSIM: 0.9643 |