Jacob Andreas
I study language and machine learning. I'm interested in natural language processing problems as a window into reasoning, planning, and perception; these days I'm especially focused on using language as a scaffold for more efficient learning and as a probe for understanding model behavior. I'm also broadly interested in structured neural methods that combine the advantages of deep representations and discrete compositionality.
I'm currently a senior research scientist at Microsoft Semantic Machines in Berkeley. I will join the MIT EECS faculty in Fall 2019. I've (almost!) graduated from UC Berkeley, where I was a member of the Berkeley NLP Group and the Berkeley AI Research Lab. Previously I worked with the Cambridge NLIP Group, and the Center for Computational Learning Systems and NLP Group at Columbia.
Prospective students: apply through the MIT graduate admissions portal in the fall. (I'm afraid I can't respond to emails individually.)
jda@cs.berkeley.edu, Curriculum vitæ, Google Scholar, elsewhere
All publications
Research highlights
Language and reasoning
The dominant paradigm in deep learning is a "one-size-fits-all" approach: we write down a fixed model architecture that we hope captures everything about the relationship between our inputs and outputs. But real-world problem solving doesn't work this way: it involves a variety of different capabilities, combined in new ways for every challenge we encounter. The work below explores modular deep learning architectures that can dynamically assemble themselves in response to changing, complex tasks.
Papers:
Modular multitask reinforcement learning with policy sketches (ICML 2017)
Learning to reason: end-to-end module networks for visual question answering (ICCV 2017)
Learning to compose neural networks for question answering (NAACL 2016)
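For readers who want a concrete picture, here is a minimal sketch (in PyTorch) of the assemble-per-input idea: a library of small shared modules is chained into a different network for each example according to a layout. The module names, sizes, and the hand-written layout are illustrative stand-ins; in the papers above the layout comes from a learned structure predictor or a policy sketch.

```python
# A toy sketch of dynamically assembled modular networks (PyTorch).
# Module names, sizes, and the hand-written layout are illustrative only.
import torch
import torch.nn as nn

class Find(nn.Module):
    """Re-weights image features (a stand-in for an attention module)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats):
        return torch.relu(self.proj(feats))

class Describe(nn.Module):
    """Maps pooled features to answer logits."""
    def __init__(self, dim, n_answers):
        super().__init__()
        self.out = nn.Linear(dim, n_answers)

    def forward(self, feats):
        return self.out(feats.mean(dim=1))

class ModularNet(nn.Module):
    """Chains a different sequence of shared modules together for every input."""
    def __init__(self, dim=64, n_answers=10):
        super().__init__()
        self.library = nn.ModuleDict({
            "find": Find(dim),
            "describe": Describe(dim, n_answers),
        })

    def forward(self, image_feats, layout):
        x = image_feats
        for name in layout:  # e.g. ["find", "describe"], normally predicted per question
            x = self.library[name](x)
        return x

net = ModularNet()
feats = torch.randn(1, 49, 64)                    # a 7x7 grid of image features
logits = net(feats, layout=["find", "describe"])  # answer logits, shape (1, 10)
```

Because the modules are shared across layouts, parameters learned while solving one task can be reused directly when a new task calls for the same capability.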
Language for interpretable AI
It's a common complaint that complex machine learning models provide improved performance at the cost of interpretability. How can we help users better understand the features and strategies that their learning algorithms discover? Language provides a rich set of tools for describing beliefs, observations, and plans. We use these tools to generate natural language explanations directly from learned representations.
Papers:
Translating neuralese (ACL 2017)
Analogs of linguistic structure in deep representations (EMNLP 2017)
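One way to make this concrete: treat translating a learned ("neuralese") message as a search for the natural-language candidate that has the most similar effect on a listener's beliefs. The toy listener, text encoder, and candidate set below are stand-ins chosen for illustration, not the actual procedure from these papers.

```python
# A toy illustration of translation by matching induced beliefs.
# The listener, text encoder, and candidate phrases are all stand-ins.
import torch
import torch.nn.functional as F

def listener_beliefs(message_vec, referent_feats):
    """A toy listener: distribution over referents induced by a message vector."""
    return F.softmax(referent_feats @ message_vec, dim=0)

def translate(neuralese_vec, candidates, encode, referent_feats):
    """Pick the candidate string whose induced beliefs best match the learned message's."""
    target = listener_beliefs(neuralese_vec, referent_feats)

    def divergence(text):
        beliefs = listener_beliefs(encode(text), referent_feats)
        return F.kl_div(beliefs.log(), target, reduction="sum")  # KL(target || beliefs)

    return min(candidates, key=divergence)

# Toy usage with random stand-in encodings for five referents and two candidate phrases.
torch.manual_seed(0)
referents = torch.randn(5, 16)
phrases = {"the red one": torch.randn(16), "the big one": torch.randn(16)}
print(translate(torch.randn(16), list(phrases), phrases.get, referents))
```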
At various points I have also been interested in semantic parsing, pragmatics, graph automata, self-normalization, and pianos.
I am currently supported by a Facebook fellowship. I was a Churchill scholar from 2012–2013, a National Science Foundation fellow from 2013–2016, and a Huawei-Berkeley AI fellow from 2016–2017.
Collaboration graph trivia: My Erdős number is at most three (J Andreas to R Kleinberg to L Lovász to P Erdős). My Kevin Bacon number (and consequently my Erdős-Bacon number) remains lamentably undefined, but my Kevin Knight number (since apparently that's a thing) is one. I have never starred in a film with Kevin Knight. Noam Chomsky is my great-great-grand-advisor (J Andreas to D Klein to C Manning to J Bresnan to N Chomsky).
Annotated bibliographies on:
module networks
language and behavior