Pratik Rathore
I am a fourth-year PhD student in the Department of Electrical Engineering at Stanford University, advised by Madeleine Udell. I work on optimization for machine learning.
Before Stanford, I graduated from the University of Maryland with a double degree in Electrical Engineering and Mathematics.
I most recently interned at Gridmatic, where I worked on faster optimization
for battery scheduling and price impact models for energy trading.
Email / CV / Google Scholar / LinkedIn / GitHub
Research
I'm interested in using randomization and preconditioning to design fast, scalable optimization algorithms for machine learning. Recently, I've been thinking about optimization challenges in scientific machine learning and using randomized low-rank approximations to develop new algorithms for convex, finite-sum optimization and deep learning.
* denotes equal contribution.
Have ASkotch: A Neat Solution for Large-scale Kernel Ridge Regression
Pratik Rathore,
Zachary Frangella,
Jiaming Yang,
Michał Dereziński,
Madeleine Udell
submitted, 2024
[arXiv]
[abstract]
We develop ASkotch, a scalable, accelerated, iterative method for full
kernel ridge regression (KRR) that provably attains linear convergence.
ASkotch outperforms state-of-the-art KRR solvers on a testbed of 23 large-scale
regression and classification tasks drawn from a wide range of application domains,
opening up the possibility of new applications of full KRR across many disciplines.
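For context, the sketch below is the textbook full-KRR baseline that ASkotch is designed to scale past, not the ASkotch algorithm itself: a direct Cholesky solve of the regularized kernel system with an RBF kernel, whose O(n^2) memory and O(n^3) time costs are what make large-scale full KRR hard. The kernel choice, bandwidth, and regularization value are illustrative placeholders.

# Baseline full kernel ridge regression: solve (K + n * reg * I) alpha = y directly.
# This is NOT the ASkotch algorithm; it is the dense solve whose cost motivates it.
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def rbf_kernel(X, Z, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Z."""
    return np.exp(-cdist(X, Z, metric="sqeuclidean") / (2.0 * bandwidth ** 2))

def krr_fit(X, y, reg=1e-3, bandwidth=1.0):
    """Full KRR coefficients via a Cholesky solve: O(n^2) memory, O(n^3) time."""
    n = X.shape[0]
    K = rbf_kernel(X, X, bandwidth)
    return cho_solve(cho_factor(K + n * reg * np.eye(n)), y)

def krr_predict(X_train, X_test, alpha, bandwidth=1.0):
    """Predict with f(x) = sum_i alpha_i * k(x, x_i)."""
    return rbf_kernel(X_test, X_train, bandwidth) @ alpha

# Tiny synthetic example; with n in the millions the direct solve above becomes
# infeasible, which is the regime ASkotch targets.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 5))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
y_hat = krr_predict(X, X, krr_fit(X, y))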
Challenges in Training PINNs: A Loss Landscape Perspective
Pratik Rathore,
Weimu Lei,
Zachary Frangella,
Lu Lu,
Madeleine Udell
ICML, 2024, Oral (top 1.5% of submissions)
[arXiv]
[code]
[abstract]
We study challenges in training physics-informed neural networks (PINNs), linking training difficulties to ill-conditioning of the loss,
and show that combining Adam with L-BFGS, together with a new optimizer, NysNewton-CG, improves PINN performance.
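As a rough illustration of the combined training schedule (not the paper's exact setup or benchmarks), the sketch below trains a small PINN on the 1D Poisson problem u''(x) = -sin(x), u(0) = u(pi) = 0, with Adam first and then PyTorch's L-BFGS. The architecture, collocation points, and iteration counts are arbitrary placeholders.

# Minimal PINN for u'' = -sin(x) on [0, pi] with zero boundary conditions,
# trained with Adam first and then refined with L-BFGS.
import math
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

x = torch.linspace(0.0, math.pi, 256).reshape(-1, 1).requires_grad_(True)
x_bc = torch.tensor([[0.0], [math.pi]])  # boundary points

def pinn_loss():
    """PDE residual loss for u'' + sin(x) = 0 plus the boundary loss at x = 0, pi."""
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return ((d2u + torch.sin(x)) ** 2).mean() + (net(x_bc) ** 2).mean()

# Phase 1: Adam handles the early, noisy phase of training.
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    adam.zero_grad()
    pinn_loss().backward()
    adam.step()

# Phase 2: L-BFGS refines the solution on the ill-conditioned loss landscape.
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=500, line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = pinn_loss()
    loss.backward()
    return loss

lbfgs.step(closure)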
PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
Zachary Frangella*,
Pratik Rathore*,
Shipu Zhao,
Madeleine Udell
JMLR, 2024
[arXiv]
[code]
[abstract]
We propose PROMISE, a family of preconditioned stochastic optimization methods that use scalable, randomized curvature estimates to solve
large-scale, ill-conditioned convex optimization problems in machine learning.
PROMISE methods, with default hyperparameters, outperform popular tuned stochastic optimizers on ridge and logistic regression.
Furthermore, we introduce quadratic regularity, which determines the speed of linear convergence for PROMISE methods
and allows us to obtain improved rates for ridge regression.
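The sketch below conveys the general flavor of such methods under simplifying assumptions, rather than reproducing any PROMISE algorithm from the paper: plain stochastic gradient steps on l2-regularized logistic regression, preconditioned by a rank-r randomized Nyström approximation of a subsampled Hessian. The data generation, batch sizes, rank, step size, damping rho, and update frequency are all arbitrary illustrative choices.

# Simplified preconditioned SGD with a randomized Nystrom curvature estimate,
# applied to l2-regularized logistic regression on ill-conditioned synthetic data.
# Illustrative only; not one of the PROMISE algorithms.
import numpy as np

rng = np.random.default_rng(0)
n, d, rank, reg = 2000, 100, 10, 1e-3

# Decaying column scales make the problem ill-conditioned, so a low-rank
# preconditioner captures most of the curvature.
A = rng.standard_normal((n, d)) * np.arange(1, d + 1) ** -1.0
w_true = rng.standard_normal(d)
b = (rng.random(n) < 1.0 / (1.0 + np.exp(-A @ w_true))).astype(float)  # labels in {0, 1}

def grad(w, idx):
    """Minibatch gradient of the l2-regularized logistic loss."""
    p = 1.0 / (1.0 + np.exp(-(A[idx] @ w)))
    return A[idx].T @ (p - b[idx]) / len(idx) + reg * w

def subsampled_hessian(w, idx):
    """Subsampled Hessian, formed explicitly here only because d is small."""
    p = 1.0 / (1.0 + np.exp(-(A[idx] @ w)))
    D = p * (1.0 - p)
    return A[idx].T @ (D[:, None] * A[idx]) / len(idx) + reg * np.eye(d)

def nystrom_approx(H, rank):
    """Randomized Nystrom approximation H ~= U diag(lam) U^T."""
    Omega, _ = np.linalg.qr(rng.standard_normal((d, rank)))
    Y = H @ Omega
    nu = np.sqrt(d) * np.finfo(float).eps * np.linalg.norm(Y)  # shift for stability
    Y_nu = Y + nu * Omega
    L = np.linalg.cholesky(Omega.T @ Y_nu)
    B = np.linalg.solve(L, Y_nu.T).T                           # B = Y_nu @ L^{-T}
    U, S, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(S ** 2 - nu, 0.0)

def precond_apply(U, lam, rho, g):
    """Apply the inverse preconditioner (U diag(lam) U^T + rho * I)^{-1} to g."""
    Ug = U.T @ g
    return U @ (Ug / (lam + rho)) + (g - U @ Ug) / rho

w, rho, lr = np.zeros(d), 1e-2, 0.1
U, lam = nystrom_approx(subsampled_hessian(w, rng.choice(n, 256, replace=False)), rank)
for t in range(1, 501):
    w -= lr * precond_apply(U, lam, rho, grad(w, rng.choice(n, 64, replace=False)))
    if t % 100 == 0:  # occasionally refresh the curvature estimate
        U, lam = nystrom_approx(subsampled_hessian(w, rng.choice(n, 256, replace=False)), rank)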
Teaching
CA, Optimization (CME 307), Fall 2024
CA, Optimization (CME 307), Winter 2024
CA, Convex Optimization II (EE 364B), Spring 2023
TA, Intermediate Programming Concepts for Engineers (ENEE 150), Spring 2021