Title: Learning Neural Sequence-to-Sequence Models from Weak Feedback with Bipolar Ramp Loss

Abstract: In many scenarios, gold labels are not available, so neural models cannot be trained directly with a maximum likelihood estimation (MLE) objective. When only a weak supervision signal is available, metric-augmented objectives can instead be employed to assign feedback to model outputs, and this feedback can be used for training. We present several objectives for two weakly supervised tasks, machine translation and semantic parsing. We show that simply promoting a surrogate gold structure is not effective. Instead, objectives should also actively discourage negative outputs. This notion of bipolarity is naturally present in ramp loss objectives, which we lift to neural models. We show that bipolar ramp loss objectives outperform non-bipolar ramp loss objectives and Minimum Risk Training (MRT) on both weakly supervised tasks, as well as on a supervised machine translation task. Additionally, we introduce a novel token-level ramp loss objective, which outperforms even the best sequence-level ramp loss on both weakly supervised tasks.

This is joint work with Laura Jehl and Stefan Riezler.
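To make the notion of bipolarity concrete, here is a minimal sketch of a sequence-level bipolar ramp loss in the classic hope/fear style; the candidate set \mathcal{Y}(x), the feedback function \delta(y), and the trade-off weight \alpha are illustrative assumptions, not necessarily the paper's exact formulation:

    \mathcal{L}(\theta) = \sum_{x \in D} \Big[ -\log p_\theta(y^+ \mid x) + \log p_\theta(y^- \mid x) \Big],

    y^+ = \operatorname*{argmax}_{y \in \mathcal{Y}(x)} \big( \log p_\theta(y \mid x) + \alpha\, \delta(y) \big),
    \qquad
    y^- = \operatorname*{argmax}_{y \in \mathcal{Y}(x)} \big( \log p_\theta(y \mid x) - \alpha\, \delta(y) \big).

Minimizing this loss raises the probability of the "hope" output y^+ (probable and well-rewarded under the weak feedback \delta) while lowering the probability of the "fear" output y^- (probable but poorly rewarded), in contrast to non-bipolar objectives that only promote a surrogate gold structure.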