Justin Whitehouse
Email: jwhiteho (at) stanford (dot) edu
I am a SAIL postdoctoral fellow at Stanford University,
where I am fortunate to work with Vasilis Syrgkanis and Ramesh Johari.
My research is broadly focused on problems at the intersection of causal inference, machine learning, and optimal decision making.
I am particularly interested in studying how classical estimation strategies from causal inference (doubly-robust/double ML methods) can be applied to
modern ML tasks such as model calibration, policy learning/evaluation, and more. I am also interested in developing anytime-valid
statistical methods, which provide non-asymptotic confidence intervals that remain valid under data-dependent stopping conditions. A more detailed outline of some of my interests is provided below.
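To make the doubly-robust idea concrete, here is a toy sketch of the AIPW (augmented inverse propensity weighting) estimator of an average treatment effect. The data-generating process and the nuisance functions below are invented for illustration and are treated as known; in practice the nuisances are fit with flexible ML and cross-fitting.

```python
import random

# Toy doubly-robust (AIPW) estimate of an average treatment effect.
# The propensity e(x) and outcome regression mu(t, x) are assumed known
# here purely for illustration.
random.seed(0)

def e(x):      # propensity score P(T = 1 | X = x), x in {0, 1}
    return 0.25 + 0.5 * x

def mu(t, x):  # outcome regression E[Y | T = t, X = x]; true ATE is 1.0
    return 1.0 * t + 0.5 * x

n = 20000
psi = []
for _ in range(n):
    x = random.random() < 0.5          # binary covariate
    t = random.random() < e(x)         # treatment assigned via propensity
    y = mu(t, x) + random.gauss(0.0, 1.0)
    # AIPW pseudo-outcome: plug-in regression contrast plus an
    # inverse-propensity-weighted residual correction.
    correction = (t / e(x) - (1 - t) / (1 - e(x))) * (y - mu(t, x))
    psi.append(mu(1, x) - mu(0, x) + correction)

ate_hat = sum(psi) / n
print(round(ate_hat, 2))  # close to the true ATE of 1.0
```

The correction term is what makes the estimator doubly robust: it remains consistent if either the propensity model or the outcome model is correct.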
- Causal Calibration: Calibrated predictions are known to be more accurate and to support optimal downstream decision making. However, existing calibration algorithms (isotonic calibration, Platt scaling, histogram binning)
require fully-observed data, and are thus inapplicable when calibrating models that predict heterogeneous treatment effects. How can we adapt generic calibration algorithms so that they can be used
to calibrate estimates of general heterogeneous causal effects (e.g. conditional average treatment effects, conditional quantile treatment effects)?
- Policy Learning/Evaluation: How can one use observational data to learn the maximal reward attainable by any individualized treatment strategy? How can we develop safe treatment policies, or policies that abstain from assigning treatments in regions
of uncertainty?
- Generative AI Evaluation: How can we develop causal methods to evaluate the quality (e.g. usefulness, relevance) of the outputs of generative AI models in a target population of interest?
Further, given that obtaining labeled data from the target population may be costly, how can we adapt these methods to leverage "cheaper" sources of data, such as ML model predictions
or observational data from a different population?
- Adaptive Inference for Self-Normalized Statistics: How can we use recent improvements in martingale concentration to develop generic, multivariate concentration inequalities that control the
growth of stochastic processes normalized by proxies for their own variance? How can we adapt these inequalities for use in online learning tasks such as adaptive mean estimation and multi-armed bandit learning?
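As a toy illustration of an anytime-valid method, here is a confidence sequence for the mean of 1-sub-Gaussian observations built from a normal-mixture (Robbins-style) boundary. The mixture parameter rho and the exact boundary form below are illustrative choices, not taken from this page.

```python
import math
import random

# Toy anytime-valid confidence sequence for the mean of 1-sub-Gaussian data,
# via a normal-mixture (Robbins-style) boundary on the centered partial sum.
random.seed(1)

def boundary(t, rho=1.0, alpha=0.05):
    # Illustrative mixture boundary: with 1-sub-Gaussian increments,
    # the centered sum S_t stays below this curve for all t
    # simultaneously with high probability.
    return math.sqrt((t + rho) * math.log((t + rho) / (rho * alpha ** 2)))

mu_true, s, covered = 0.3, 0.0, True
for t in range(1, 5001):
    s += random.gauss(mu_true, 1.0)          # observe one data point
    half_width = boundary(t) / t             # interval half-width at time t
    lo, hi = s / t - half_width, s / t + half_width
    covered = covered and (lo <= mu_true <= hi)

print(covered, round(half_width, 3))
```

Unlike a fixed-n interval, this one can be monitored continuously and stopped at any data-dependent time; its half-width shrinks at roughly a sqrt(log t / t) rate rather than sqrt(1 / t).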
Before starting as a postdoc at Stanford, I received my PhD in computer science from Carnegie Mellon University, where I was advised by Aaditya Ramdas and Steven Wu.
The bulk of my theoretical research focused on developing anytime-valid methods and time-uniform concentration inequalities. I have applied these inequalities to a variety of problems in machine learning and causal inference, such as
kernelized bandit learning, differentially private learning, and adaptive causal effect estimation in panel data/network interference settings. Prior to my PhD, I was an undergraduate at Columbia University in New York City, where I majored in mathematics and computer science.
Publications and Preprints
- Inference on Optimal Policy Values and Other Irregular Functionals via Smoothing
(with Morgane Austern and Vasilis Syrgkanis).
arXiv preprint, 2025.
- Doubly-Robust LLM-as-a-Judge: Externally Valid Estimation with Imperfect Personas
(with Luke Guerdan, Kimberly Truong, Steven Wu, and Ken Holstein).
arXiv preprint, 2025.
- Mean Estimation in Banach Spaces Under Infinite Variance and Martingale Dependence
(with Ben Chugg, Diego Martinez Taboada, and Aaditya Ramdas).
In Revision (Major Revision @ Stochastic Processes and Their Applications), 2025.
- Orthogonal Causal Calibration
(with Vasilis Syrgkanis, Christopher Jung, Bryan Wilder, and Steven Wu).
Extended Abstract @ Conference on Learning Theory (non-archival), 2025.
- Time-Uniform Self-Normalized Concentration for Vector-Valued Processes
(with Aaditya Ramdas and Steven Wu).
Annals of Applied Probability, 2025.
- Multi-Armed Bandits with Network Interference
(with Abhineet Agarwal, Anish Agarwal, and Lorenzo Masoero).
NeurIPS, 2024.
- On the Sublinear Regret of GP-UCB
(with Aaditya Ramdas and Steven Wu).
NeurIPS, 2023.
- Adaptive Principal Component Regression with Applications to Panel Data
(with Anish Agarwal, Keegan Harris, and Steven Wu).
NeurIPS, 2023.
- Fully-Adaptive Composition in Differential Privacy
(with Aaditya Ramdas, Steven Wu, and Ryan Rogers).
ICML, 2023.
- Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
(with Aaditya Ramdas, Steven Wu, and Ryan Rogers).
NeurIPS, 2022.
- The Case for Phase-Aware Scheduling of Parallelizable Jobs
(with Benjamin Berg, Benjamin Moseley, Mor Harchol-Balter, and Weina Wang).
39th International Symposium on Computer Performance, Modeling, Measurements and Evaluation, 2021.
- Optimal Resource Allocation for Elastic and Inelastic Jobs
(with Benjamin Berg, Benjamin Moseley, Mor Harchol-Balter, and Weina Wang).
ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2020).
- Bringing Engineering Rigor to Deep Learning
(with Kexin Pei, Shiqi Wang, Yuchi Tian, Carl Vondrick, Yinzhi Cao, Baishakhi Ray, Suman Jana, and Junfeng Yang).
ACM SIGOPS Operating Systems Review, Volume 53 Issue 1 (SIGOPS 2019).
- Efficient Formal Safety Analysis of Neural Networks
(with Shiqi Wang, Suman Jana, Kexin Pei, and Junfeng Yang).
NeurIPS, 2018.
- Formal Security Analysis of Neural Networks Using Symbolic Intervals
(with Shiqi Wang, Suman Jana, Kexin Pei, and Junfeng Yang).
27th USENIX Security Symposium, 2018.
Teaching
I have served as a teaching assistant for the following classes.
- Graduate Algorithms (Spring 2022, CMU).
- Foundations of Privacy (Fall 2021, CMU).
- Computer Science Theory (Spring 2019, Columbia).
- Modern Algebra II (Spring 2019, Columbia).
- Complexity Theory (Fall 2018, Columbia).
- Introduction to Cryptography (Fall 2018, Columbia).
- Number Theory and Cryptography (Spring 2018, Columbia).