Topic-Aware Learning-to-Rank for Cross-Lingual Information Retrieval

Cross-lingual information retrieval (CLIR) enables search across languages, typically avoiding a cross-lingual index by translating either the search query or the document repository so that monolingual retrieval can be applied. The primary goal of cross-lingual retrieval is to find relevant documents; however, optimizing translations to preserve the meaning and structure of bitexts while remaining agnostic of the search task does not necessarily yield optimal search results. In addition, the meaning of a query depends heavily on contextual information such as topic, category, or authorship. We propose a multilingual CLIR system that does not rely on machine translation but instead directly models relevance between queries and documents across languages. Our approach avoids the "lost in translation" problem that often affects cross-lingual pipeline models. The system is trained and evaluated on multilingual data from Wikipedia, but can easily be adapted to other domains where structured document information is available, such as patent prior art search.
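To make the translation-free idea concrete, the following is a minimal sketch of pairwise learning-to-rank over a shared cross-lingual representation space: queries and documents are embedded as vectors, relevance is a learned bilinear score, and training pushes relevant documents above irrelevant ones. The bilinear form, hinge loss, dimensions, and random toy data are illustrative assumptions for exposition, not the paper's actual model or features.

```python
import numpy as np

# Illustrative sketch only: queries and documents (nominally in different
# languages) are represented as vectors in a shared space, and relevance
# is a bilinear score q^T W d with a learned matrix W. All data here is
# random toy data, not the paper's Wikipedia training set.
rng = np.random.default_rng(0)
dim = 8
W = rng.normal(scale=0.1, size=(dim, dim))

def score(q, d, W):
    """Bilinear relevance score between a query and a document."""
    return q @ W @ d

def pairwise_update(q, d_pos, d_neg, W, lr=0.1, margin=1.0):
    """One pairwise hinge-loss step: if the relevant document does not
    outscore the irrelevant one by the margin, move W in the direction
    that increases the score gap."""
    if margin - score(q, d_pos, W) + score(q, d_neg, W) > 0:
        W = W + lr * np.outer(q, d_pos - d_neg)
    return W

# Toy training triples: (query, relevant document, irrelevant document).
pairs = [(rng.normal(size=dim), rng.normal(size=dim), rng.normal(size=dim))
         for _ in range(50)]

for _ in range(20):
    for q, d_pos, d_neg in pairs:
        W = pairwise_update(q, d_pos, d_neg, W)

# After training, relevant documents should mostly outscore irrelevant ones.
correct = sum(score(q, dp, W) > score(q, dn, W) for q, dp, dn in pairs)
print(correct, "of", len(pairs), "pairs ranked correctly")
```

In a real CLIR system the vectors would come from multilingual document and query features rather than random draws, but the ranking objective has this shape: the model is trained directly on relevance judgments across languages, with no intermediate translation step.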