Pratik Rathore
I am a fourth-year PhD student in the Electrical Engineering department at Stanford University, advised by Madeleine Udell. I am interested in optimization for machine learning.
Before Stanford, I graduated from the University of Maryland with a double degree in Electrical Engineering and Mathematics.
I most recently interned at Gridmatic, where I worked on faster optimization
for battery scheduling and price impact models for energy trading.
Email / CV / Google Scholar / LinkedIn / Github
Research
I'm interested in using randomization and preconditioning to design fast, scalable optimization algorithms for machine learning. Recently, I've been thinking about challenges in training physics-informed neural networks and using randomized low-rank approximations to develop new algorithms for convex, finite-sum optimization and deep learning.
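To give a concrete sense of the low-rank building block mentioned above, here is a minimal NumPy sketch of a randomized Nyström approximation of a positive semidefinite matrix. This is an illustration only, not code from any of the papers below; the function name, the orthonormalized Gaussian test matrix, and the fixed stability shift are my own choices.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def randomized_nystrom(A, rank, shift=1e-8):
    """Return (U, lam) such that A is approximately U @ np.diag(lam) @ U.T, for PSD A."""
    n = A.shape[0]
    Omega = np.random.randn(n, rank)        # Gaussian test matrix
    Omega, _ = np.linalg.qr(Omega)          # orthonormalize for numerical stability
    Y = A @ Omega                           # sketch of A
    Y_nu = Y + shift * Omega                # small shift keeps the Cholesky factor well defined
    C = cholesky(Omega.T @ Y_nu)            # upper-triangular factor of Omega^T (A + shift*I) Omega
    B = solve_triangular(C, Y_nu.T, trans='T').T   # B = Y_nu @ inv(C)
    U, S, _ = np.linalg.svd(B, full_matrices=False)
    lam = np.maximum(S**2 - shift, 0.0)     # subtract the shift back out of the eigenvalues
    return U, lam
```

A cheap factorization like this is the kind of object one can use to build preconditioners for large kernel or Hessian matrices without ever forming a full decomposition.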
* denotes equal contribution.
Have ASkotch: Fast Methods for Large-scale, Memory-constrained Kernel Ridge Regression
Pratik Rathore,
Zachary Frangella,
Madeleine Udell
preprint, 2024
[arXiv]
[abstract]
We develop ASkotch, a fast, memory-efficient algorithm for
large-scale kernel ridge regression (KRR), based on randomized preconditioning, acceleration, and coordinate descent.
Our experiments show that ASkotch outperforms state-of-the-art KRR methods on a variety of tasks,
such as molecular orbital energy prediction, classification in particle physics, and transportation data analysis.
Challenges in Training PINNs: A Loss Landscape Perspective
Pratik Rathore,
Weimu Lei,
Zachary Frangella,
Lu Lu,
Madeleine Udell
ICML, 2024, Oral (top 1.5% of submissions)
[arXiv]
[code]
[abstract]
We study challenges in training physics-informed neural networks (PINNs). We link training difficulties to ill-conditioning of the loss,
and show that a combined Adam and L-BFGS approach, along with a new optimizer, NysNewton-CG, enhances PINN performance.
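To make the Adam and L-BFGS combination concrete, here is a minimal PyTorch sketch of the two-phase training schedule. It is an illustration, not the paper's released code; `model`, `pinn_loss`, and the step counts are hypothetical placeholders.

```python
import torch

def train_adam_then_lbfgs(model, pinn_loss, adam_steps=10000, lbfgs_steps=1000):
    # Phase 1: first-order warm-up with Adam on the physics-informed loss.
    adam = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(adam_steps):
        adam.zero_grad()
        loss = pinn_loss(model)
        loss.backward()
        adam.step()

    # Phase 2: quasi-Newton refinement with L-BFGS, which needs a closure.
    lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=lbfgs_steps,
                              history_size=50, line_search_fn="strong_wolfe")

    def closure():
        lbfgs.zero_grad()
        loss = pinn_loss(model)
        loss.backward()
        return loss

    lbfgs.step(closure)
    return model
```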
PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
Zachary Frangella*,
Pratik Rathore*,
Shipu Zhao,
Madeleine Udell
JMLR, 2024
[arXiv]
[code]
[abstract]
We propose PROMISE, a family of preconditioned stochastic optimization methods that use scalable, randomized curvature estimates to solve
large-scale, ill-conditioned convex optimization problems in machine learning.
PROMISE methods, with default hyperparameters, outperform popular tuned stochastic optimizers on ridge and logistic regression.
Furthermore, we introduce quadratic regularity, which determines the speed of linear convergence for PROMISE methods
and allows us to obtain improved rates for ridge regression.
Teaching
CA, Optimization (CME 307), Fall 2024
CA, Optimization (CME 307), Winter 2024
CA, Convex Optimization II (EE 364B), Spring 2023
TA, Intermediate Programming Concepts for Engineers (ENEE 150), Spring 2021