Multimodal Pivots for Image Caption Translation

The subject of the talk will be an approach to improving statistical machine translation of image descriptions by means of multimodal pivots defined in visual space. The key idea is to perform image retrieval over a database of images that are captioned in the target language, and to use the captions of the most similar images for crosslingual reranking of translation outputs. The approach does not depend on the availability of large amounts of in-domain parallel data; it relies only on large datasets of monolingually captioned images and on convolutional neural networks to compute image similarities. Experimental evaluation shows improvements of 1 BLEU point over strong baselines.
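The retrieval-and-rerank pipeline described above can be sketched as follows. This is a minimal illustration, not the talk's actual system: the CNN feature vectors are assumed to be precomputed, and the word-overlap scorer and the linear interpolation with the MT model score are hypothetical stand-ins for the actual reranking model.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two (assumed precomputed) CNN feature vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def retrieve_pivot_captions(query_feat, db_feats, db_captions, k=3):
    # Retrieve the target-language captions of the k images in the
    # database that are visually most similar to the source image.
    sims = [cosine_sim(query_feat, f) for f in db_feats]
    top = np.argsort(sims)[::-1][:k]
    return [db_captions[i] for i in top]

def rerank(hypotheses, pivot_captions, alpha=0.5):
    # hypotheses: list of (translation, mt_score) pairs.
    # Interpolate the MT model score with a crude word-overlap score
    # against the retrieved pivot captions (illustrative assumption).
    def overlap(hyp, caps):
        hyp_words = set(hyp.split())
        return max(len(hyp_words & set(c.split())) / max(len(hyp_words), 1)
                   for c in caps)
    return max(hypotheses,
               key=lambda h: alpha * h[1] + (1 - alpha) * overlap(h[0], pivot_captions))

# Toy example: three database images with 2-d "features" and captions.
db_feats = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]
db_captions = ["a dog runs", "a cat sleeps", "a dog sleeps"]
query = np.array([1.0, 0.1])            # source image's feature vector
pivots = retrieve_pivot_captions(query, db_feats, db_captions, k=2)
best = rerank([("a dog runs", 0.4), ("a cat runs", 0.5)], pivots)
```

Here the hypothesis "a cat runs" has the higher MT score, but the visually retrieved captions pull the reranker toward "a dog runs".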