Video Stitching for Linear Camera Arrays

Wei-Sheng Lai* (UC Merced), Orazio Gallo (NVIDIA Research), Jinwei Gu (SenseTime Research), Deqing Sun (NVIDIA Research), Ming-Hsuan Yang (UC Merced, Google), Jan Kautz (NVIDIA Research)
*This work was done while Wei-Sheng Lai was an intern at NVIDIA.

Despite the long history of image and video stitching research, existing academic and commercial solutions still produce strong artifacts. In this work, we propose a wide-baseline video stitching algorithm for linear camera arrays that is temporally stable and tolerant to strong parallax. Our key insight is that stitching can be cast as a problem of learning a smooth spatial interpolation between the input videos. To solve this problem, inspired by pushbroom cameras, we introduce a fast pushbroom interpolation layer and propose a novel pushbroom stitching network, which learns a dense flow field to smoothly align the multiple input videos for spatial interpolation. Our approach outperforms the state-of-the-art by a significant margin, as we show with a user study, and has immediate applications in many areas such as virtual reality, immersive telepresence, autonomous driving, and video surveillance.
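To build intuition for the pushbroom interpolation idea, the toy sketch below blends two aligned views column by column, with a weight that slides linearly from the left view to the right view across the image width. This is only an illustration of the column-wise interpolation concept: the function name and the linear blending weights are assumptions for this sketch, and the actual method learns a dense flow field to align the input videos before interpolation.

```python
import numpy as np

def pushbroom_blend(left, right):
    """Toy column-wise blend (illustration only, not the paper's learned layer).

    left, right: aligned views of shape (H, W, C).
    Each output column x mixes the two inputs with weight x / (W - 1),
    so the output transitions smoothly from the left view to the right.
    """
    H, W, C = left.shape
    # Per-column blending weight, broadcast over rows and channels.
    alpha = np.linspace(0.0, 1.0, W).reshape(1, W, 1)
    return (1.0 - alpha) * left + alpha * right
```

In the full pipeline described above, the inputs would first be warped by the learned flow field so that corresponding content lines up before this spatial interpolation is applied.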


Wei-Sheng Lai, Orazio Gallo, Jinwei Gu, Deqing Sun, Ming-Hsuan Yang, and Jan Kautz, "Video Stitching for Linear Camera Arrays", in British Machine Vision Conference (BMVC), 2019

@inproceedings{lai2019video,
    author    = {Lai, Wei-Sheng and Gallo, Orazio and Gu, Jinwei and Sun, Deqing and Yang, Ming-Hsuan and Kautz, Jan},
    title     = {Video Stitching for Linear Camera Arrays},
    booktitle = {BMVC},
    year      = {2019}
}