Title: Towards better understanding of cross-lingual AMR (parsing): A strong baseline and improved evaluation
Speaker: Juri Opitz (ICL)
Tales from an ICL seminar: In cross-lingual Abstract Meaning Representation (AMR) parsing, researchers develop models that map sentences from various languages onto AMRs capturing their essential semantic structure: given a sentence in any language, the goal is to represent its core semantic content as concepts connected by diverse types of semantic relations. Methods typically leverage large amounts of silver training data to learn a single model that projects non-English sentences to AMRs. However, we find that a simple baseline tends to be overlooked: translating the sentence to English and parsing the translation with a monolingual English AMR parser (translate+parse, T+P). We show that this baseline is very effective. Moreover, we discuss perspectives for improved evaluation of cross-lingual AMR parsing, with improved metrics and consistency checks.
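The translate+parse baseline described above is a simple two-stage pipeline, which can be sketched as follows. This is a minimal illustration only: the translation and parsing functions below are toy stubs (with a hand-written phrase table and a canned PENMAN graph for one example sentence) standing in for a real MT system and a real monolingual English AMR parser, which the talk's actual method would use.

```python
# Sketch of the translate+parse (T+P) baseline for cross-lingual AMR parsing.
# Both stages are hypothetical stubs; in practice one would plug in an
# off-the-shelf machine translation system and an English AMR parser.

def translate_to_english(sentence: str) -> str:
    """Stub MT stage: look up a toy phrase table (hypothetical)."""
    toy_mt = {"Der Junge will gehen.": "The boy wants to go."}
    return toy_mt.get(sentence, sentence)

def parse_english_amr(sentence: str) -> str:
    """Stub monolingual AMR parser: return a canned PENMAN-notation
    graph for the toy example (hypothetical)."""
    toy_parses = {
        "The boy wants to go.":
            "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))",
    }
    return toy_parses.get(sentence, "(a / amr-empty)")

def translate_and_parse(sentence: str) -> str:
    """T+P pipeline: translate to English first, then parse the translation."""
    return parse_english_amr(translate_to_english(sentence))

if __name__ == "__main__":
    # German input is routed through English before AMR parsing.
    print(translate_and_parse("Der Junge will gehen."))
```

The design point of T+P is exactly this decoupling: all cross-lingual work is pushed into the translation stage, so the parser itself only ever needs to handle English.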