TY - JOUR
T1 - NuMax
T2 - A Convex Approach for Learning Near-Isometric Linear Embeddings
AU - Hegde, Chinmay
AU - Sankaranarayanan, Aswin C.
AU - Yin, Wotao
AU - Baraniuk, Richard G.
N1 - Publisher Copyright:
© 1991-2012 IEEE.
PY - 2015/11/15
Y1 - 2015/11/15
N2 - We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ ℝN, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, in order to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM) that we call Nuclear norm minimization with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of signal processing applications via a range of experiments on large-scale synthetic and real datasets.
AB - We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ ℝN, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We formulate an affine rank minimization problem to construct a matrix that preserves the norms of all the vectors in S(X) up to a distortion parameter δ. While affine rank minimization is NP-hard, we show that this problem can be relaxed to a convex formulation that can be solved using a tractable semidefinite program (SDP). In order to enable scalability of our proposed SDP to very large-scale problems, we adopt a two-stage approach. First, in order to reduce compute time, we develop a novel algorithm based on the Alternating Direction Method of Multipliers (ADMM) that we call Nuclear norm minimization with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of signal processing applications via a range of experiments on large-scale synthetic and real datasets.
KW - Approximate nearest neighbors
KW - classification
KW - compressive sensing
KW - dimensionality reduction
UR - http://www.scopus.com/inward/record.url?scp=84959326453&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84959326453&partnerID=8YFLogxK
U2 - 10.1109/TSP.2015.2452228
DO - 10.1109/TSP.2015.2452228
M3 - Article
AN - SCOPUS:84959326453
VL - 63
SP - 6109
EP - 6121
JO - IEEE Transactions on Signal Processing
JF - IEEE Transactions on Signal Processing
SN - 1053-587X
IS - 22
M1 - 7145475
ER -