TY - JOUR
T1 - Learned D-AMP: Principled neural network based compressive image recovery
T2 - 31st Annual Conference on Neural Information Processing Systems, NIPS 2017
AU - Metzler, Christopher A.
AU - Mousavi, Ali
AU - Baraniuk, Richard G.
N1 - Funding Information:
This work was supported in part by DARPA REVEAL grant HR0011-16-C-0028, DARPA OMNISCIENT grant G001534-7500, ONR grant N00014-15-1-2735, ARO grant W911NF-15-1-0316, ONR grant N00014-17-1-2551, and NSF grant CCF-1527501. In addition, C. Metzler was supported in part by the NSF GRFP.
Publisher Copyright:
© 2017 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2017
Y1 - 2017
N2 - Compressive image recovery is a challenging problem that requires fast and accurate algorithms. Recently, neural networks have been applied to this problem with promising results. By exploiting massively parallel GPU processing architectures and oodles of training data, they can run orders of magnitude faster than existing techniques. However, these methods are largely unprincipled black boxes that are difficult to train and oftentimes specific to a single measurement matrix. It was recently demonstrated that iterative sparse-signal-recovery algorithms can be "unrolled" to form interpretable deep networks. Taking inspiration from this work, we develop a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing (D-AMP) algorithm. We call this new network Learned D-AMP (LDAMP). The LDAMP network is easy to train, can be applied to a variety of different measurement matrices, and comes with a state-evolution heuristic that accurately predicts its performance. Most importantly, it outperforms the state-of-the-art BM3D-AMP and NLR-CS algorithms in terms of both accuracy and run time. At high resolutions, and when used with sensing matrices that have fast implementations, LDAMP runs over 50× faster than BM3D-AMP and hundreds of times faster than NLR-CS.
UR - http://www.scopus.com/inward/record.url?scp=85047007715&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85047007715&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85047007715
VL - 2017-December
SP - 1773
EP - 1784
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
SN - 1049-5258
Y2 - 4 December 2017 through 9 December 2017
ER -