Despite achieving parsing scores of around 93% (UAS) and 92% (LAS) on German newspaper text, neural dependency parsers still struggle, for example, with predicting the core arguments of verbal predicates: they often confuse direct and indirect objects, or attach prepositions to the wrong heads. Their performance drops even more dramatically when the target domain differs from the one the parser was trained on. In this talk, we investigate two techniques for making recent parsers more robust: (1) incorporating subcategorisation frame (SCF) information into dependency parsing and (2) improving out-of-domain parsing by re-ranking. SCFs provide syntactic information on the types of arguments a verb can take. In my colloquium talk last year, I showed that this information has the potential to significantly improve core argument prediction, by extending a state-of-the-art parser for German (Dozat & Manning, 2017) with gold SCF information. This year, in the first part of the talk, I report on follow-up results for predicting SCFs and incorporating them into the parser. In the second part, I present work in progress on improving the parsing of German Twitter data via re-ranking. We propose to augment a neural network re-ranker (Zhu et al., 2015) with point-wise mutual information (PMI) scores for preposition-head pairs derived from the Subcategorisation Frame Database (Scheible et al., 2013) and with an SCF-aware margin in the hinge loss function, and report preliminary parsing results on the new TweeDe dataset.
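For readers unfamiliar with the PMI scores mentioned above, the association between a preposition and a candidate head can be sketched as follows. The counts below are toy illustrations, not actual figures from the Subcategorisation Frame Database; the function names are likewise hypothetical.

```python
import math
from collections import Counter

def pmi_scores(pair_counts, prep_counts, head_counts, total):
    """Point-wise mutual information for (preposition, head) pairs:
    PMI(p, h) = log2( P(p, h) / (P(p) * P(h)) ).
    A positive score means the pair co-occurs more often than
    expected under independence; a negative score, less often."""
    scores = {}
    for (prep, head), count in pair_counts.items():
        p_joint = count / total                 # P(p, h)
        p_prep = prep_counts[prep] / total      # P(p)
        p_head = head_counts[head] / total      # P(h)
        scores[(prep, head)] = math.log2(p_joint / (p_prep * p_head))
    return scores

# Toy co-occurrence counts (illustrative only):
pair_counts = Counter({("mit", "rechnen"): 8,
                       ("auf", "rechnen"): 2,
                       ("auf", "warten"): 10})
prep_counts = Counter({"mit": 8, "auf": 12})
head_counts = Counter({"rechnen": 10, "warten": 10})

scores = pmi_scores(pair_counts, prep_counts, head_counts, total=20)
# ("mit", "rechnen") scores positively (strongly associated),
# ("auf", "rechnen") scores negatively (rarer than chance).
```

In a re-ranker, such scores can serve as an additional feature signalling whether a proposed preposition attachment is lexically plausible for the chosen head.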