Recent Semantic Role Labeling (SRL) models have pushed the state of the art by using end-to-end supervised neural sequence labeling. The trade-off of this approach is the need for a considerable amount of annotated training data, which is currently a problem for languages that lack large SRL datasets. We propose a multilingual Encoder-Decoder model that takes advantage of the rich existing SRL annotations in English and learns to simultaneously translate and generate sentences with SRL annotations in a lower-resource target language. We achieve this in three steps: i) benchmark our approach on well-known SRL datasets; ii) train a multilingual model that uses data in different languages to improve labeling; and iii) train a cross-lingual model and use it to generate additional SRL-labeled data on the target side. We measure the improvement of SRL models in lower-resource target languages (specifically French and German) when augmenting their training data, and we also perform a manual evaluation of the extended datasets to analyze the quality and precision of the newly generated SRL annotations.