Title: Active Question Answering with Reinforcement Learning

Abstract: Humans are capable of solving sophisticated information-seeking tasks by interacting with digital media. For example, users find answers to complex questions using search engines, formulating multiple queries in iterative, stateful sessions, followed by critical assessment and synthesis. Can machine learning be applied to solve similar tasks? This talk presents ongoing research within a framework called Active Question Answering, in which we investigate machine-learned agents that perform information-seeking tasks by using language to interact with information-providing environments. We start by considering query reformulation in question answering tasks. Question answering systems frequently return slightly different answers to different variants of a question. An Active Question Answering agent sits between the user and a black-box QA system and learns how to query that system optimally on the user's behalf. The agent is trained with reinforcement learning to reformulate questions and to aggregate the evidence returned by the QA system in order to infer the best final answer. In an empirical study we show that the agent can learn to outperform the environment and other benchmarks by a significant margin. We also analyze the language the agent has learned by interacting with the QA system. We find that the agent seems to have rediscovered basic information retrieval techniques such as tf-idf term reweighting and stemming.
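
To make the agent-environment loop described in the abstract concrete, here is a minimal sketch in Python. The names reformulate, qa_environment, score, and num_rewrites are hypothetical placeholders introduced for illustration; the actual system uses a learned sequence-to-sequence reformulation policy trained with reinforcement learning and a learned answer-selection model, neither of which is shown here.

```python
from typing import Callable, List, Tuple


def active_qa(
    question: str,
    reformulate: Callable[[str, int], List[str]],   # policy: question -> candidate rewrites
    qa_environment: Callable[[str], str],            # black-box QA system: question -> answer
    score: Callable[[str, List[str]], float],        # selection model: rank candidate answers
    num_rewrites: int = 20,
) -> str:
    """Reformulate the question, query the black-box QA system with each
    rewrite, and aggregate the returned evidence to pick a final answer."""
    # 1. Generate candidate reformulations of the user's question
    #    (the original question is kept as one of the candidates).
    rewrites = [question] + reformulate(question, num_rewrites)

    # 2. Query the environment (the black-box QA system) with each rewrite.
    candidates: List[Tuple[str, str]] = [(q, qa_environment(q)) for q in rewrites]

    # 3. Aggregate evidence: score each returned answer against the full
    #    set of returned answers and keep the highest-scoring one.
    answers = [a for _, a in candidates]
    return max(answers, key=lambda a: score(a, answers))


if __name__ == "__main__":
    # Toy usage with trivial stand-ins for the learned components:
    # the scorer here is a simple majority vote over returned answers.
    final = active_qa(
        "what year did the first moon landing happen",
        reformulate=lambda q, n: [q.replace("happen", "occur"), "first moon landing year"][:n],
        qa_environment=lambda q: "1969",
        score=lambda ans, all_answers: all_answers.count(ans),
    )
    print(final)  # -> "1969"
```

In the learned version, the reward signal for a reformulation would come from the quality of the answer the environment returns for it, which is what drives the policy toward useful rewrites of the kind noted in the abstract, such as tf-idf-like term reweighting and stemming.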