Publications and Preprints

Robust Explanations for Deep Neural Networks via Pseudo Neural Tangent Kernel Surrogate Models, Authors: Andrew Engel, Zhichao Wang, Natalie S. Frank, Ioana Dumitriu, Sutanay Choudhury, Anand Sarwate, Tony Chiang. Submitted to NeurIPS, 2023. link
This paper compares neural networks to GLMs trained with a pseudo neural tangent kernel (pNTK), a normalized NTK summed across output classes. Our experiments show that such a GLM is a good surrogate model for the neural network. We compute data attributions for these surrogate models under a data poisoning attack and show that these attributions are more accurate than several alternatives.
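As a rough sketch (the exact normalization is given in the paper), for a network $f(\cdot;\theta)$ with $C$ output classes the pNTK is, up to scaling, the NTK of the class-summed output,
$$\kappa_{\mathrm{pNTK}}(\mathbf x, \mathbf x') \;\propto\; \Big\langle \nabla_\theta \textstyle\sum_{c=1}^{C} f_c(\mathbf x;\theta),\; \nabla_\theta \textstyle\sum_{c=1}^{C} f_c(\mathbf x';\theta) \Big\rangle,$$
and the surrogate GLM is a kernel model built from this kernel.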

The Adversarial Consistency of Surrogate Risks for Binary Classification, Authors: Natalie S. Frank, Jonathan Niles-Weed. Submitted to NeurIPS, 2023. link
This paper studies statistical consistency and calibration in the adversarial setting. We show that the supremum-based surrogate $\sup_{\|\mathbf x'-\mathbf x\|\leq \epsilon} \phi(yf(\mathbf x'))$ is consistent for all data distributions iff the surrogate $\phi$ satisfies $\inf_\alpha \left[\phi(\alpha)+\phi(-\alpha)\right]<2\phi(0)$.
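As a minimal illustration of the condition: any convex surrogate fails it, since convexity gives $\phi(\alpha)+\phi(-\alpha)\geq 2\phi(0)$ for every $\alpha$, while a non-convex loss such as the ramp loss $\phi(\alpha)=\min(1,\max(0,1-\alpha))$ satisfies it, e.g.
$$\phi(2)+\phi(-2)=0+1=1<2=2\phi(0).$$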

Existence and Minimax Theorems for Adversarial Surrogate Risks in Binary Classification, Author: Natalie S. Frank. Submitted to JMLR, 2023. link
We prove existence, regularity, and minimax theorems for adversarial surrogate risks in the binary classification setting. Our results extend previously known existence and minimax theorems for the adversarial classification risk to surrogate risks.
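Concretely, the adversarial surrogate risk in question is the expected value of the supremum-based surrogate from the previous entry, written schematically (in notation that may differ from the paper) as
$$R_\phi^\epsilon(f)=\mathbb{E}_{(\mathbf x,y)}\Big[\sup_{\|\mathbf x'-\mathbf x\|\leq\epsilon}\phi\big(yf(\mathbf x')\big)\Big],$$
glossing over the measurability issues treated carefully in the paper.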

On the Existence of the Adversarial Bayes Classifier, Authors: Pranjal Awasthi, Natalie Frank, Mehryar Mohri. NeurIPS, 2021, Spotlight Presentation. link
We prove that the adversarial classification risk admits minimizers with nice regularity properties; we call such a minimizer an adversarial Bayes classifier. The results of the original paper did not apply to non-strictly convex norms; the extended version of the paper extends these results to all norms.
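Schematically, for a set $A$ of points classified as $+1$, the adversarial classification risk is
$$R^\epsilon(A)=\mathbb{E}\Big[\mathbf{1}_{y=1}\sup_{\|\mathbf x'-\mathbf x\|\leq\epsilon}\mathbf{1}_{A^c}(\mathbf x')+\mathbf{1}_{y=-1}\sup_{\|\mathbf x'-\mathbf x\|\leq\epsilon}\mathbf{1}_{A}(\mathbf x')\Big],$$
and an adversarial Bayes classifier is a minimizer of this risk (the precise measure-theoretic formulation is given in the paper).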

Calibration and Consistency of Adversarial Surrogate Losses, Authors: Pranjal Awasthi, Natalie Frank, Anqi Mao, Mehryar Mohri, Yutao Zhong. NeurIPS, 2021, Spotlight Presentation. link
This paper studies statistical consistency and calibration in the adversarial setting. One major highlight is that we show that no continuous surrogate loss is statistically consistent in the adversarial setting when learning over a well-motivated linear function class.

Adversarial Learning Guarantees for Linear Hypotheses Sets and Neural Networks, Authors: Pranjal Awasthi, Natalie Frank, Mehryar Mohri. ICML, 2020. link
Consider perturbations measured in the $\ell_r$ norm. We give bounds on the adversarial Rademacher complexity of linear classes, a single ReLU unit, and feed-forward neural networks.
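Here the adversarial Rademacher complexity is, schematically, the empirical Rademacher complexity of the perturbed loss class: for a sample $S=((\mathbf x_1,y_1),\dots,(\mathbf x_m,y_m))$ and i.i.d. Rademacher variables $\sigma_i$,
$$\widehat{\mathfrak{R}}_S(\widetilde{F})=\frac{1}{m}\,\mathbb{E}_{\boldsymbol\sigma}\Big[\sup_{f\in F}\sum_{i=1}^{m}\sigma_i\inf_{\|\mathbf x_i'-\mathbf x_i\|_r\leq\epsilon} y_i f(\mathbf x_i')\Big].$$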

The Frog Model on Trees with Drift, Authors: Erin Beckman, Natalie Frank, Yufeng Jiang, Matthew Junge, Si Tang. Electronic Communications in Probability, 2019. link
Consider the one-per-site frog model on a $d$-ary tree with drift towards the root. We show that for any $d$, the frog model is recurrent when the drift is at least $0.4155$.

Expository Notes

On the Rademacher Complexity of Linear Hypothesis Sets, Authors: Pranjal Awasthi, Natalie Frank, Mehryar Mohri. 2020. link
We give upper and lower bounds on the empirical Rademacher complexity of linear hypothesis classes with weight vectors bounded in $\ell_p$ norm for $p \in [1,\infty]$. These were the best known bounds at the time of writing. Previously, bounds were known only for $p \in [1,2]$.
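For context, a standard bound in the $p=2$ case states that for $H=\{\mathbf x\mapsto\mathbf w\cdot\mathbf x:\|\mathbf w\|_2\leq\Lambda\}$ and a sample $S=(\mathbf x_1,\dots,\mathbf x_m)$,
$$\widehat{\mathfrak{R}}_S(H)\leq\frac{\Lambda\sqrt{\sum_{i=1}^m\|\mathbf x_i\|_2^2}}{m};$$
the note treats the analogous question for general $p$.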