Pratik Rathore

I am a third-year PhD student in the Electrical Engineering department at Stanford University, advised by Madeleine Udell. I work on optimization for machine learning.

Before Stanford, I graduated from the University of Maryland with a double degree in Electrical Engineering and Mathematics. As an undergraduate, I completed internships at STR, where I conducted research on radar image processing, and Lockheed Martin, where I reviewed and tested computational models for satellites. I also conducted research in number theory, for which I received the Dan Shanks Award from the University of Maryland Math Department.

Email  /  CV  /  Google Scholar  /  LinkedIn  /  GitHub

Research

I'm interested in using randomization and preconditioning to design fast, scalable optimization algorithms for machine learning. Recently, I've been thinking about challenges in training physics-informed neural networks and about using randomized low-rank approximations to develop new algorithms for convex, finite-sum optimization and deep learning.

* denotes equal contribution.

Challenges in Training PINNs: A Loss Landscape Perspective
Pratik Rathore, Weimu Lei, Zachary Frangella, Lu Lu, Madeleine Udell
in submission
arXiv

We study challenges in training physics-informed neural networks (PINNs). We link these training difficulties to ill-conditioning of the loss landscape, and show that a combined Adam and L-BFGS approach, together with a new optimizer, NysNewton-CG, improves PINN performance.
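
As a rough illustration of this two-phase schedule, here is a minimal PyTorch sketch (not the paper's code) that trains with Adam and then switches to full-batch L-BFGS. The network, loss, and collocation points below are hypothetical placeholders rather than a real PINN setup.

```python
import torch

# Placeholder network and collocation points (assumptions for this sketch);
# a real PINN residual loss would use autograd derivatives of model(x).
model = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
x = torch.rand(1024, 2)

def pinn_loss(model, x):
    # Stand-in for a physics-informed residual plus boundary loss.
    return (model(x) ** 2).mean()

# Phase 1: Adam for the initial iterations.
adam = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(1000):
    adam.zero_grad()
    loss = pinn_loss(model, x)
    loss.backward()
    adam.step()

# Phase 2: full-batch L-BFGS, which requires a closure that re-evaluates the loss.
lbfgs = torch.optim.LBFGS(model.parameters(), max_iter=500, line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = pinn_loss(model, x)
    loss.backward()
    return loss

lbfgs.step(closure)
```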

PROMISE: Preconditioned Stochastic Optimization Methods by Incorporating Scalable Curvature Estimates
Zachary Frangella*, Pratik Rathore*, Shipu Zhao, Madeleine Udell
in submission
arXiv / code

We propose PROMISE, a family of preconditioned stochastic optimization methods that use scalable, randomized curvature estimates to solve large-scale, ill-conditioned convex optimization problems in machine learning. With default hyperparameters, PROMISE methods outperform popular tuned stochastic optimizers on ridge and logistic regression. We also introduce quadratic regularity, a quantity that governs the linear convergence rate of PROMISE methods and yields improved rates for ridge regression.

SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates
Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell
in submission
arXiv / code

We use techniques from randomized numerical linear algebra to develop a fast, scalable, preconditioned stochastic gradient method for convex machine learning problems.
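
For intuition, here is a minimal NumPy sketch of the general idea on a toy least-squares problem (an illustration, not the paper's implementation): a randomized Nyström approximation of the Hessian supplies a low-rank preconditioner for minibatch SGD. The problem sizes, regularization heuristic, and step-size rule are assumptions made for this sketch.

```python
import numpy as np

# Preconditioned minibatch SGD for least squares, using a randomized Nystrom
# approximation of the Hessian H = X.T @ X / n as a low-rank preconditioner.
rng = np.random.default_rng(0)
n, d, rank = 2000, 100, 10
X = rng.standard_normal((n, d)) * rng.exponential(1.0, d)  # ill-conditioned columns
y = X @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

# Randomized Nystrom approximation of H from a single sketch H @ Omega.
Omega = np.linalg.qr(rng.standard_normal((d, rank)))[0]    # thin orthogonal test matrix
Y = X.T @ (X @ Omega) / n                                  # equals H @ Omega, via matvecs
core = np.linalg.pinv(Omega.T @ Y)                         # rank x rank, PSD
U, s, _ = np.linalg.svd(Y @ np.linalg.cholesky(core + 1e-10 * np.eye(rank)),
                        full_matrices=False)
lam = s ** 2                                               # approximate top eigenvalues of H
rho = lam[-1]                                              # regularization (illustrative choice)

def precondition(g):
    """Apply (H_hat + rho * I)^{-1} to g using the rank-`rank` eigendecomposition."""
    Ug = U.T @ g
    return U @ (Ug / (lam + rho)) + (g - U @ Ug) / rho

# Step size from the largest preconditioned eigenvalue; the toy problem is small
# enough to form H and compute it directly (an illustrative choice, not the paper's rule).
H = X.T @ X / n
lr = 1.0 / np.linalg.eigvals(np.column_stack([precondition(h) for h in H])).real.max()

w = np.zeros(d)
for _ in range(300):
    idx = rng.choice(n, size=64, replace=False)            # minibatch
    g = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)        # stochastic gradient
    w -= lr * precondition(g)
```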

There are no Cube-free Descartes Numbers with Exactly Seven Distinct Prime Factors
Pratik Rathore
preprint
arXiv

We prove new results regarding the prime factorizations of Descartes numbers, a family of odd spoof perfect numbers.

Teaching
Stanford: Course Assistant (CA), Optimization (CME 307), Winter 2024
Stanford: Course Assistant (CA), Convex Optimization II (EE 364B), Spring 2023
UMD: Teaching Assistant (TA), Intermediate Programming Concepts for Engineers (ENEE 150), Spring 2021

Template borrowed from Jon Barron.