Translation on a shoestring

Abstract: Machine translation is challenging when resources are limited, notably for the richly parameterised deep-learning sequence-to-sequence approaches that dominate on large competition datasets. For most languages, the available text corpora are insufficient for model estimation. In this talk I will present several ways to address this shortcoming by developing more robust neural models: 1) extending models to support more complex structured inputs, such as trees and semantic graphs, and 2) better modelling the translation process by incorporating stochasticity into the generative process. Finally, 3) I will cover speech recognition and translation, which can better support field linguists' efforts to document truly low-resource or endangered languages.