Omar Montasser

Hi! I am a fifth-year PhD student at the Toyota Technological Institute at Chicago (TTIC), advised by Professor Nathan Srebro. I am interested in the theory of machine learning. Recently, I have been thinking about questions related to adversarially robust learnability.

Before TTIC, I completed a five-year combined BS/MS program in computer science and engineering at Penn State. During my time there, I had the pleasure of working with Professors Daniel Kifer and Sean Hallgren on problems in machine learning and quantum computational complexity.


Boosting Barely Robust Learners: A New Perspective on Adversarial Robustness
Avrim Blum, Omar Montasser, Greg Shakhnarovich, and Hongyang Zhang
Preprint, 2022.

A Theory of PAC Learnability under Transformation Invariances
Han Shao, Omar Montasser, and Avrim Blum
Preprint, 2022.

Transductive Robust Learning Guarantees
Omar Montasser, Steve Hanneke, and Nathan Srebro
AISTATS, 2022.

Adversarially Robust Learning with Unknown Perturbation Sets
Omar Montasser, Steve Hanneke, and Nathan Srebro
COLT, 2021.

Beyond Perturbations: Learning Guarantees with Arbitrary Adversarial Test Examples
Shafi Goldwasser, Adam Tauman Kalai, Yael Tauman Kalai, and Omar Montasser
NeurIPS, 2020. (Spotlight)

Reducing Adversarially Robust Learning to Non-Robust PAC Learning
Omar Montasser, Steve Hanneke, and Nathan Srebro
NeurIPS, 2020.

Efficiently Learning Adversarially Robust Halfspaces with Noise
Omar Montasser, Surbhi Goel, Ilias Diakonikolas, and Nathan Srebro
ICML, 2020.

Approximate is Good Enough: Probabilistic Variants of Dimensional and Margin Complexity
Pritish Kamath, Omar Montasser, and Nathan Srebro
COLT, 2020.

VC Classes are Adversarially Robustly Learnable, but Only Improperly
Omar Montasser, Steve Hanneke, and Nathan Srebro
COLT, 2019. (Best Student Paper Award)

Predicting Demographics of High-Resolution Geographies with Geotagged Tweets
Omar Montasser and Daniel Kifer
AAAI, 2017. (Oral)