[NeurIPS 2019 Paper Highlight] Meena Jagadeesan @ Harvard University - Crossminds
Aug 14, 2020 | 60 views
Tags: Reinforcement Learning, LINE, LARS, Machine Learning, REINFORCE
Details
This episode is an interview with Meena Jagadeesan from Harvard University, discussing highlights from her paper "Understanding Sparse JL for Feature Hashing," accepted as an oral presentation at the NeurIPS 2019 conference. Meena Jagadeesan is a senior at Harvard pursuing an A.B./S.M. (Bachelor's and Master's degrees) in computer science. She is broadly interested in theoretical computer science and has received a CRA Outstanding Undergraduate Researcher Award, a Siebel Scholarship, and a Barry Goldwater Scholarship for her research.

Paper Abstract: Feature hashing and other random projection schemes are commonly used to reduce the dimensionality of feature vectors. The goal is to efficiently project a high-dimensional feature vector living in R^n into a much lower-dimensional space R^m while approximately preserving its Euclidean norm. These schemes can be constructed using sparse random projections, for example a sparse Johnson-Lindenstrauss (JL) transform. A line of work introduced by Weinberger et al. (ICML '09) analyzes the accuracy of sparse JL with sparsity 1 on feature vectors with small l_infinity-to-l_2 norm ratio. Recently, Freksen, Kamma, and Larsen (NeurIPS '18) closed this line of work by proving a tight tradeoff between the l_infinity-to-l_2 norm ratio and accuracy for sparse JL with sparsity 1. In this paper, we demonstrate the benefits of using sparsity s greater than 1 in sparse JL on feature vectors. Our main result is a tight tradeoff between the l_infinity-to-l_2 norm ratio and accuracy for a general sparsity s, significantly generalizing the result of Freksen et al. Our result theoretically demonstrates that sparse JL with s > 1 can have significantly better norm-preservation properties on feature vectors than sparse JL with s = 1; we also demonstrate this finding empirically.
Slides: https://nips.cc/media/Slides/nips/2019/westballc(12-15-50)-12-15-50-15777-understanding_s.pdf
Paper: http://papers.nips.cc/paper/9656-understanding-sparse-jl-for-feature-hashing.pdf
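To make the construction in the abstract concrete, here is a minimal sketch (not from the paper; function and parameter names are illustrative) of a sparse JL transform with sparsity s: each column of the m x n projection matrix has exactly s nonzero entries of value +/- 1/sqrt(s) placed in s distinct random rows. Setting s = 1 recovers the feature-hashing case the earlier line of work analyzed.

```python
import numpy as np

def sparse_jl_matrix(n, m, s, rng):
    """Hypothetical helper: build a sparse JL matrix A in R^{m x n}.
    Each column gets exactly s nonzero entries, each +/- 1/sqrt(s),
    in s distinct rows chosen uniformly at random."""
    A = np.zeros((m, n))
    for j in range(n):
        rows = rng.choice(m, size=s, replace=False)   # s distinct rows
        signs = rng.choice([-1.0, 1.0], size=s)       # random signs
        A[rows, j] = signs / np.sqrt(s)
    return A

rng = np.random.default_rng(0)
n, m, s = 10_000, 256, 4
x = rng.standard_normal(n)   # a vector with small l_infinity-to-l_2 ratio
A = sparse_jl_matrix(n, m, s, rng)

# The Euclidean norm should be approximately preserved after projection.
ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
print(f"norm ratio after projection: {ratio:.3f}")
```

On vectors like this Gaussian example, whose l_infinity-to-l_2 ratio is small, the printed ratio concentrates near 1; the paper's result quantifies how the achievable accuracy depends on that ratio and on the sparsity s.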