Video compressive sensing for spatial multiplexing cameras using motion-flow models

Aswin C. Sankaranarayanan, Lina Xu, Christoph Studer, Yun Li, Kevin F. Kelly, Richard G. Baraniuk

Research output: Contribution to journal › Article › peer-review

30 Scopus citations

Abstract

Spatial multiplexing cameras (SMCs) acquire a (typically static) scene through a series of coded projections using a spatial light modulator (e.g., a digital micromirror device) and a few optical sensors. This approach finds use in imaging applications where full-frame sensors are either too expensive (e.g., for short-wave infrared wavelengths) or unavailable. Existing SMC systems reconstruct static scenes using techniques from compressive sensing (CS). For videos, however, existing acquisition and recovery methods deliver poor quality. In this paper, we propose the CS multiscale video (CS-MUVI) sensing and recovery framework for high-quality video acquisition and recovery using SMCs. Our framework features novel sensing matrices that enable the efficient computation of a low-resolution video preview while still permitting high-resolution video recovery via convex optimization. To further improve the quality of the reconstructed videos, we extract optical-flow estimates from the low-resolution previews and impose them as constraints in the recovery procedure. We demonstrate the efficacy of our CS-MUVI framework on a range of synthetic and real SMC video data, and we show that high-quality videos can be recovered at roughly 60× compression.
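The dual-scale sensing-matrix idea behind the low-resolution preview can be illustrated with a small numerical sketch. This is a simplified stand-in, not the paper's actual matrix construction (which uses permuted Hadamard patterns and handles time-varying scenes): here each high-resolution DMD pattern is simply a low-resolution Hadamard row upsampled to full resolution, so one batch of measurements determines the low-resolution preview exactly via an inverse Hadamard transform.

```python
import numpy as np

def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

hi, lo = 16, 4                # high-res scene is 16x16; preview is 4x4
block = hi // lo              # each low-res pixel covers a block x block area
L = lo * lo                   # number of measurements = preview pixels

rng = np.random.default_rng(0)
scene = rng.random((hi, hi))  # stand-in for the (static) scene

# Each DMD pattern is a low-res Hadamard row upsampled to full resolution.
H = hadamard(L)                                   # L x L, rows = patterns
patterns_hi = np.kron(H.reshape(L, lo, lo),       # upsample each pattern
                      np.ones((block, block)))    # -> shape (L, hi, hi)

# Simulated photodetector measurements: one inner product per pattern.
y = patterns_hi.reshape(L, -1) @ scene.ravel()

# Low-res preview via the inverse Hadamard transform (H @ H.T = L * I).
preview = (H.T @ y / L).reshape(lo, lo)

# Sanity check: the preview equals block-summed downsampling of the scene.
downsampled = scene.reshape(lo, block, lo, block).sum(axis=(1, 3))
```

In the CS-MUVI framework, such previews are computed cheaply during acquisition and used to estimate optical flow, which is then imposed as a constraint in the full-resolution convex recovery.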

Original language: English (US)
Pages (from-to): 1489-1518
Number of pages: 30
Journal: SIAM Journal on Imaging Sciences
Volume: 8
Issue number: 3
DOIs
State: Published - Jul 23, 2015

Keywords

  • Measurement matrix design
  • Optical flow
  • Spatial multiplexing cameras
  • Video compressive sensing

ASJC Scopus subject areas

  • Mathematics (all)
  • Applied Mathematics
