Parsing has long been a central problem in Natural Language Processing, and neural networks and pre-trained language models have substantially advanced it. I will present two recent advances: significantly improving the speed of dependency parsing (EMNLP 2021) and significantly improving the accuracy of unsupervised constituency parsing (Findings of ACL 2022). In the first part of the talk, I will show how a simple modification to the weights of a learned model makes dependency parsing asymptotically optimal, reducing the complexity of parsing from cubic to quadratic. In the second part of the talk, I will show how co-training and intuitions from spectral learning combine with pre-trained language models to yield effective multilingual unsupervised parsing. Joint work with Miloš Stanojević and Nickil Maveli.
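
As a rough illustration of the first part, here is a minimal sketch (in Python, with illustrative names, not the paper's actual code) of one way a "simple modification to the weights" can enforce a single-root analysis: shifting all root-outgoing arc scores down by a sufficiently large constant means a standard maximum-spanning-tree decoder will pick exactly one root arc, while leaving the ranking among single-root trees unchanged.

```python
import numpy as np


def shift_root_weights(scores: np.ndarray) -> np.ndarray:
    """Shift root-outgoing arc scores down so that any maximum spanning
    tree over the shifted scores uses exactly one root arc.

    scores[h, d] is the score of an arc from head h to dependent d,
    with index 0 playing the role of the artificial root.
    (Illustrative sketch, not the EMNLP 2021 implementation.)
    """
    n = scores.shape[0]
    shifted = scores.copy()
    # A shift larger than the largest possible score difference between
    # two spanning trees guarantees the decoder prefers one root arc
    # over two; every single-root tree is offset by the same constant,
    # so the argmax among single-root trees is preserved.
    shift = n * (scores.max() - scores.min()) + 1.0
    shifted[0, 1:] -= shift
    return shifted
```

Any off-the-shelf O(n^2) maximum-spanning-tree decoder (e.g., Tarjan's implementation of Chu-Liu-Edmonds for dense graphs) can then be run on the shifted scores unchanged, which is where the quadratic overall complexity comes from.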
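
For the second part, the co-training ingredient can be sketched generically: two parsers with different views of the data iteratively label unlabeled sentences for each other, each keeping only the other's most confident parses as pseudo-gold training data. The `predict`/`fit` interfaces below are hypothetical placeholders, not the authors' API.

```python
def co_train(parser_a, parser_b, unlabeled, rounds=5, k=1000):
    """Generic co-training loop (illustrative sketch only).

    Each parser is assumed to expose predict(sentence) -> (tree, confidence)
    and fit(pairs) on (sentence, tree) training pairs.
    """
    for _ in range(rounds):
        for teacher, student in ((parser_a, parser_b), (parser_b, parser_a)):
            # The teacher parses the unlabeled pool; its k most confident
            # parses become pseudo-gold training data for the student.
            scored = [(s,) + teacher.predict(s) for s in unlabeled]
            scored.sort(key=lambda x: x[2], reverse=True)
            pseudo = [(s, tree) for s, tree, _ in scored[:k]]
            student.fit(pseudo)
    return parser_a, parser_b
```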