TY - GEN
T1 - Least favorable compressed sensing problems for first-order methods
AU - Maleki, Arian
AU - Baraniuk, Richard
PY - 2011
Y1 - 2011
AB - Compressed sensing (CS) exploits the compressibility of natural signals to reduce the number of samples required for accurate reconstruction. The price of sub-Nyquist sampling is computationally expensive reconstruction, typically via large-scale ℓ1 optimization. Therefore, first-order optimization methods that exploit only the gradient of the reconstruction cost function have been developed; notable examples include iterative soft thresholding (IST), the fast iterative soft-thresholding algorithm (FISTA), and approximate message passing (AMP). The performance of these algorithms has been studied mainly within the standard framework of convex optimization, referred to here as the deterministic framework. In this paper, we first show that the deterministic approach yields overly pessimistic conclusions that are not indicative of algorithm performance in practice. As an alternative to the deterministic framework, we then study the theoretical aspects of the statistical convergence rate, a topic that has remained unexplored in the sparse recovery literature. Our theoretical and empirical studies reveal several hallmark properties of the statistical convergence of first-order methods, including universality over the matrix ensemble and the least favorable coefficient distribution.
UR - http://www.scopus.com/inward/record.url?scp=80054819874&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=80054819874&partnerID=8YFLogxK
U2 - 10.1109/ISIT.2011.6033758
DO - 10.1109/ISIT.2011.6033758
M3 - Conference contribution
AN - SCOPUS:80054819874
SN - 9781457705953
T3 - IEEE International Symposium on Information Theory - Proceedings
SP - 134
EP - 138
BT - 2011 IEEE International Symposium on Information Theory Proceedings, ISIT 2011
T2 - 2011 IEEE International Symposium on Information Theory Proceedings, ISIT 2011
Y2 - 31 July 2011 through 5 August 2011
ER -