Current approaches to cross-lingual information retrieval (CLIR) often rely on standard retrieval models into which query translations produced by statistical machine translation (SMT) are integrated to varying degrees. I present an attempt to turn this situation on its head: instead of the retrieval aspect, I emphasize the translation component of CLIR. My approach performs search directly with an SMT decoder, run in forced decoding mode to produce a bag-of-words representation of the target documents to be ranked. The SMT model is extended with retrieval-specific features that are optimized jointly with the standard translation features for a ranking objective. I show significant gains over state-of-the-art translation-based CLIR models in a large-scale evaluation on cross-lingual Wikipedia search and patent prior-art search.
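As a minimal sketch of the scoring scheme implied above (the log-linear form and the notation are my assumptions, following standard SMT practice, not a verbatim statement of the model): a query $q$ is scored against each candidate document $d$ by forced decoding of $q$ into the bag-of-words representation of $d$, combining translation and retrieval-specific features,
\[
\mathrm{score}(q, d) \;=\; \max_{a \,\in\, A(q,\, d)} \;\sum_{k} w_k \, h_k(q, d, a),
\]
where $A(q,d)$ denotes the derivations admitted by the forced decoder for $q$ given the bag of words of $d$, the $h_k$ are standard translation features together with retrieval-specific features, and the weights $w_k$ are tuned jointly for a ranking objective over relevance-labeled query–document pairs. Documents are then ranked by this score.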