Sebastian Riedel

Title: Reading and Reasoning with Vector Representations

Abstract: In recent years, vector representations of knowledge have become popular in NLP and beyond. They have at least two core benefits: reasoning with low-dimensional vectors tends to generalise better, and it usually scales very well. But they raise their own set of questions: What types of inference do they support? How can they capture asymmetry? How can explicit background knowledge be injected into vector-based architectures? How can we provide "proofs" that justify predictions? In this talk, I sketch some initial answers to these questions, based on work we have developed recently. In particular, I will illustrate how a vector space can simulate the workings of logic.
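To make the final point more concrete, the sketch below shows one common way an implication rule (e.g. professor-at(X, Y) implies employee-of(X, Y)) can be expressed as a soft constraint over fact scores in a vector space: relations and entity pairs are embedded as vectors, a dot product scores each fact, and the rule is penalised whenever the premise scores higher than the conclusion. This is a minimal, generic illustration of the idea, not the specific model presented in the talk; all names (score, implication_loss) and the hinge-style loss are assumptions for this sketch.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 10

    # Embeddings for two relations and a handful of entity pairs.
    premise = rng.normal(size=dim)      # e.g. "professor-at"
    conclusion = rng.normal(size=dim)   # e.g. "employee-of"
    pairs = rng.normal(size=(5, dim))   # entity-pair embeddings

    def score(relation, pair):
        # Sigmoid of a dot product: the model's probability that the fact holds.
        return 1.0 / (1.0 + np.exp(-relation @ pair))

    def implication_loss(premise, conclusion, pairs):
        # Soft constraint for "premise => conclusion": for every entity pair,
        # the conclusion should be at least as probable as the premise, so we
        # hinge on the score difference and sum the violations.
        margins = [score(premise, p) - score(conclusion, p) for p in pairs]
        return sum(max(0.0, m) for m in margins)

    print("rule violation before training:",
          implication_loss(premise, conclusion, pairs))

Minimising a loss of this form alongside the usual fact-prediction objective nudges the embeddings so that the logical rule holds approximately over the whole vector space, which is one sense in which a vector space can simulate logic.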