Enhancing Reading Comprehension with External Knowledge

In this talk, I will present my current work on the reading comprehension task, in which a model reads a given story context and answers questions about it. While human accuracy on the various benchmark datasets ranges from 81% to 100%, models still have considerable room for improvement. A key difference between humans and machines is that humans bring extensive background knowledge to the task. Most existing reading comprehension systems attempt to answer questions using only the given context. In my work, I use an attention-based approach to incorporate factual knowledge from ConceptNet and WordNet into a strong baseline model, improving its performance.
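The abstract does not spell out the fusion mechanism, but a common way to realize attention-based knowledge integration is to score retrieved knowledge embeddings (e.g., of ConceptNet/WordNet triples) against a token representation with scaled dot-product attention and concatenate the weighted knowledge summary back onto the token. The sketch below is a minimal illustration under that assumption; the function name knowledge_attention, the retrieval step it presumes, and the concatenation design are hypothetical, not the talk's actual model.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax.
        z = x - x.max()
        e = np.exp(z)
        return e / e.sum()

    def knowledge_attention(token_vec, kb_embeddings):
        """Fuse retrieved knowledge into a token representation via attention.

        token_vec:     (d,) contextual embedding of one context/question token
        kb_embeddings: (n, d) embeddings of knowledge entries retrieved for
                       that token (hypothetically from ConceptNet/WordNet)
        Returns a (2*d,) vector: the token concatenated with a knowledge summary.
        """
        d = token_vec.shape[0]
        # Scaled dot-product scores between the token and each knowledge entry.
        scores = kb_embeddings @ token_vec / np.sqrt(d)
        weights = softmax(scores)
        # Attention-weighted summary of the retrieved knowledge.
        knowledge_summary = weights @ kb_embeddings
        return np.concatenate([token_vec, knowledge_summary])

    # Toy usage: one token vector and three retrieved knowledge embeddings.
    rng = np.random.default_rng(0)
    token = rng.standard_normal(8)
    kb = rng.standard_normal((3, 8))
    fused = knowledge_attention(token, kb)
    print(fused.shape)  # (16,)

In a trained model the scoring and fusion would of course use learned parameters; the fixed dot-product scoring here is only meant to convey the shape of the computation.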