I am currently a researcher at Google DeepMind, where I work on language models (e.g. Gemini) and, more broadly, on topics in machine learning and artificial intelligence.
I recently completed my PhD in computer science at UC Berkeley, advised by Michael Jordan. Before that, I received an M.Phil. from the University of Cambridge, where I was supervised by Zoubin Ghahramani, and a B.A. in Physics from Harvard University.
email: “firstname”_“lastname”@berkeley.edu
Michelangelo: Long Context Evaluations Beyond Haystacks via Latent Structure Queries
K. Vodrahalli, S. Ontanon, N. Tripuraneni, et al.
Preprint. arXiv
Choosing a Proxy Metric from Past Experiments
N. Tripuraneni, L. Richardson, A. D’Amour, J. Soriano, S. Yadlowsky
KDD 2024. arXiv
Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models
S. Yadlowsky, L. Doshi, N. Tripuraneni
Preprint. arXiv
Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
Gemini Team, Google.
Preprint. arXiv
Gemini: A Family of Highly Capable Multimodal Models
Gemini Team, Google.
Preprint. arXiv
Optimal Mean Estimation without a Variance
Y. Cherapanamjeri, N. Tripuraneni, P. Bartlett, M. I. Jordan
Conference on Learning Theory (COLT) 2022. arXiv
Overparameterization Improves Robustness to Covariate Shift in High Dimensions (extended arXiv version: Covariate Shift in High-Dimensional Random Feature Regression)
N. Tripuraneni, B. Adlam, J. Pennington
Conference on Neural Information Processing Systems (NeurIPS) 2021. arXiv
On the Theory of Transfer Learning: The Importance of Task Diversity
N. Tripuraneni, M. I. Jordan, C. Jin
Conference on Neural Information Processing Systems (NeurIPS) 2020. arXiv
Stochastic Cubic Regularization for Fast Nonconvex Optimization
N. Tripuraneni, M. Stern, C. Jin, J. Regier, M. I. Jordan
Conference on Neural Information Processing Systems (NeurIPS) 2018. arXiv
Magnetic Hamiltonian Monte Carlo
N. Tripuraneni, M. Rowland, Z. Ghahramani, R. Turner
International Conference on Machine Learning (ICML) 2017. arXiv