Isaac Lage

Email: isaaclage AT g.harvard.edu
Pronouns: he/him/his


I am a Ph.D. student in Computer Science at Harvard, advised by Professor Finale Doshi-Velez in the Data to Actionable Knowledge lab. I work on techniques for learning interpretable models by incorporating user studies into optimization. I have been supported by an NSF GRFP fellowship since 2019.

I was previously a Research Software Engineer with Professor David Sontag in the Clinical Machine Learning Group at MIT, where I worked on predicting hospital readmissions and modeling the progression of multiple myeloma.

I earned a B.A. in Computer Science and Social & Cultural Analysis from NYU in 2016.

Publications:

Lage I*, Chen E*, He J*, Narayanan M*, Gershman S, Kim B, Doshi-Velez F. Human Evaluation of Models Built for Interpretability. AAAI Conference on Human Computation and Crowdsourcing (HCOMP). 2019. (Honorable mention paper)

Lage I*, Lifschitz D*, Doshi-Velez F, Amir O. Exploring Computational User Models for Agent Policy Summarization. International Joint Conference on Artificial Intelligence (IJCAI). 2019.

Lage I, Ross A, Kim B, Gershman S, Doshi-Velez F. Human-in-the-Loop Interpretability Prior. Conference on Neural Information Processing Systems (NeurIPS). 2018.

Ross AS*, Lage I*, Doshi-Velez F. The Neural Lasso: Local Linear Sparsity for Interpretable Explanations. Neural Information Processing Systems (NIPS) Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments. 2017.

Teaching:
I am currently a Teaching Fellow for Interpretability and Explainability in Machine Learning at Harvard (Fall 2019), and I was previously a Teaching Fellow for Advanced Machine Learning at Harvard (Fall 2018).

Other:
I was a member of the SEAS Diversity, Inclusion and Belonging committee at Harvard from Fall 2017 to Fall 2019, where I worked on the SEAS 2018 Climate Survey.