Title: Memory, Reading, and Comprehension

Abstract: In this talk I will present recent work at DeepMind on recurrent neural models for learning to read, understand, and transform natural language. In the context of these models I will discuss the recent trend of embedding mechanisms that mirror traditional symbolic algorithms into continuous representations, and the implications of this approach for natural language understanding.