I am a computational cognitive scientist who focuses on reverse engineering how children learn language, both to support the development of machine intelligence and to guide educational practice and policy.
I make heavy use of computational models (neural networks and Bayesian probabilistic models), and my work is distinguished by an emphasis on multi-agent and multi-task learning contexts. I am deeply preoccupied with Moravec's Paradox. I also use experimental methods such as eye-tracking to characterize children's knowledge. See Publications and Research Topics for more details.
I'm currently a postdoc at MIT Brain and Cognitive Sciences (in Dr. Roger Levy's Computational Psycholinguistics Lab), and hold a joint appointment at Duke University (in Dr. Elika Bergelson's child language lab, moving to Harvard in Fall 2023). In 2018, I completed my Ph.D. in the Computational Cognitive Science Lab at UC Berkeley (with Dr. Tom Griffiths), where I studied language learning and language processing with probabilistic models. Before that, I was a post-bac researcher at Stanford (with Dr. Mike Frank). My work has been funded by the NIH, NSF, U.S. Air Force Office of Scientific Research, DARPA, and the Simons Foundation.
Before focusing on human cognition, I was a data scientist at a crowdsourcing startup and a geospatial data analyst at the U.S. Geological Survey, where I built tools for characterizing ocean acidification. I spend my non-academic time on large-format film photography, backpacking and trekking, scuba diving, biking, vegetable gardening, and reading cultural criticism.