TY - GEN
T1 - Sub-linear memory sketches for near neighbor search on streaming data
AU - Coleman, Benjamin
AU - Baraniuk, Richard G.
AU - Shrivastava, Anshumali
N1 - Funding Information:
This work was supported by National Science Foundation IIS-1652131, BIGDATA-1838177, RI-1718478, AFOSR-YIP FA9550-18-1-0152, Amazon Research Award, and the ONR BRC grant on Randomized Numerical Linear Algebra.
Publisher Copyright:
© 37th International Conference on Machine Learning, ICML 2020.
PY - 2020
Y1 - 2020
N2 - We present the first sublinear memory sketch that can be queried to find the nearest neighbors in a dataset. Our online sketching algorithm compresses an N-element dataset to a sketch of size O(N^b log^3 N) in O(N^(b+1) log^3 N) time, where b < 1. This sketch can correctly report the nearest neighbors of any query that satisfies a stability condition parameterized by b. We achieve sublinear memory performance on stable queries by combining recent advances in locality-sensitive hash (LSH)-based estimators, online kernel density estimation, and compressed sensing. Our theoretical results shed new light on the memory-accuracy tradeoff for nearest neighbor search, and our sketch, which consists entirely of short integer arrays, has a variety of attractive features in practice. We evaluate the memory-recall tradeoff of our method on a friend recommendation task in the Google Plus social media network. We obtain orders of magnitude better compression than the random-projection-based alternative while retaining the ability to report the nearest neighbors of practical queries.
AB - We present the first sublinear memory sketch that can be queried to find the nearest neighbors in a dataset. Our online sketching algorithm compresses an N-element dataset to a sketch of size O(N^b log^3 N) in O(N^(b+1) log^3 N) time, where b < 1. This sketch can correctly report the nearest neighbors of any query that satisfies a stability condition parameterized by b. We achieve sublinear memory performance on stable queries by combining recent advances in locality-sensitive hash (LSH)-based estimators, online kernel density estimation, and compressed sensing. Our theoretical results shed new light on the memory-accuracy tradeoff for nearest neighbor search, and our sketch, which consists entirely of short integer arrays, has a variety of attractive features in practice. We evaluate the memory-recall tradeoff of our method on a friend recommendation task in the Google Plus social media network. We obtain orders of magnitude better compression than the random-projection-based alternative while retaining the ability to report the nearest neighbors of practical queries.
UR - http://www.scopus.com/inward/record.url?scp=85095065983&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85095065983&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85095065983
T3 - 37th International Conference on Machine Learning, ICML 2020
SP - 2067
EP - 2077
BT - 37th International Conference on Machine Learning, ICML 2020
A2 - Daumé III, Hal
A2 - Singh, Aarti
PB - International Machine Learning Society (IMLS)
T2 - 37th International Conference on Machine Learning, ICML 2020
Y2 - 13 July 2020 through 18 July 2020
ER -