Large-scale Semantic Parsing without Question-Answer Pairs

Querying a database to retrieve an answer, telling a robot to perform an action, or teaching a computer to play a game are all tasks that require communicating with machines in a language they can interpret. Semantic parsing addresses the specific task of learning to map natural language to machine-interpretable formal meaning representations. Traditionally, sentences are converted into logical forms grounded in the symbols of some fixed ontology or relational database. Approaches to learning semantic parsers have for the most part been supervised, using manually annotated training data consisting of sentences paired with their logical forms. More recently, methods that learn from question-answer pairs have been gaining momentum as a means of scaling semantic parsers to large, open-domain problems.

In this talk, I will present an approach to semantic parsing that requires neither annotated examples nor question-answer pairs, but instead learns from a large knowledge base and web-scale corpora. Our semantic parser exploits Freebase, a large community-authored knowledge base that spans many sub-domains and stores real-world facts as a graph, together with parsed sentences from a large corpus. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem: we convert the output of an open-domain combinatory categorial grammar (CCG) parser into a graphical representation and subsequently map it onto Freebase, guided by denotations as a form of weak supervision. Experiments on two benchmark datasets show that our semantic parser improves over state-of-the-art approaches.
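
To make the graph matching idea concrete, the sketch below shows denotation-guided grounding on a toy example. It is an illustrative approximation, not the system described in the talk: the knowledge graph `KB`, the ungrounded graph for "Who directed Titanic?", and the candidate lexicon `CANDIDATES` are all hypothetical stand-ins, whereas the actual parser grounds CCG-derived semantic graphs against Freebase with a learned model.

```python
# Minimal sketch: semantic parsing as graph matching with denotations
# as weak supervision. All data below is a hypothetical toy example.

from itertools import product

# Toy Freebase-style knowledge graph: (subject, relation, object) triples.
KB = {
    ("titanic", "film.directed_by", "james_cameron"),
    ("avatar", "film.directed_by", "james_cameron"),
    ("titanic", "film.release_year", "1997"),
}

# Ungrounded semantic graph for "Who directed Titanic?", as edges
# (source, natural-language predicate, target); "?x" is the answer variable.
NL_GRAPH = [("titanic", "directed", "?x")]

# Candidate KB relations for each natural-language edge label (in the real
# system, induced from parsed web-scale text aligned with Freebase).
CANDIDATES = {
    "directed": ["film.directed_by", "film.release_year"],
}

def denotation(grounded_edges):
    """Answers the grounded graph denotes when matched against the KB.
    This toy version only handles edges whose target is the answer variable."""
    answers = set()
    for subj, rel, obj in grounded_edges:
        for s, r, o in KB:
            if s == subj and r == rel and obj == "?x":
                answers.add(o)
    return answers

def ground(nl_graph, expected_answers):
    """Enumerate groundings of each edge and keep the one whose
    denotation matches the expected answers (the weak supervision)."""
    edge_choices = [CANDIDATES[rel] for _, rel, _ in nl_graph]
    for choice in product(*edge_choices):
        grounded = [(s, r, o) for (s, _, o), r in zip(nl_graph, choice)]
        if denotation(grounded) == expected_answers:
            return grounded
    return None

# Weak supervision: only the answer is observed, never the logical form.
print(ground(NL_GRAPH, {"james_cameron"}))
# -> [('titanic', 'film.directed_by', '?x')]
```

The point of the sketch is the supervision signal: no logical form is ever annotated. The grounding whose denotation matches the known answer is selected, which is how denotations can stand in for the question-answer pairs or manual annotations that other approaches require.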