Humans Learn From Task Descriptions and So Should Our Models

Joint work with Timo Schick and Sahana Udupa

Task descriptions are ubiquitous in human learning. They are usually accompanied by a few examples, but there is little human learning that is based on examples alone. In contrast, the typical learning setup for NLP tasks lacks task descriptions and is supervised with hundreds or thousands of examples. We introduce Pattern-Exploiting Training (PET), an approach to learning that mimics human learning in that it leverages task descriptions in few-shot settings. PET is built on top of a pretrained language model that "understands" the task description, especially after finetuning, resulting in excellent performance compared to other few-shot methods. In particular, a model trained with PET outperforms GPT-3 even though it has 99.9% fewer parameters. The idea of task descriptions can also be applied to reducing bias in text generated by language models. Instructing a model to reveal and reduce its biases is remarkably effective, as I will show in an evaluation on several benchmarks. In the future, this may contribute to fairer and more inclusive NLP.
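To make the pattern-based idea concrete, the sketch below illustrates the cloze-style intuition behind PET: a pattern rephrases the input as a fill-in-the-blank task description, and a verbalizer maps each label to a single word whose probability at the blank is scored by a masked language model. The model name, pattern, and verbalizer here are illustrative assumptions rather than the configuration from the paper, and the sketch omits the steps PET adds on top (finetuning the model on the few labeled examples through this cloze objective and combining several pattern-verbalizer pairs).

```python
# Minimal sketch of the cloze-style idea behind PET (illustrative choices only):
# a pattern turns the input into a fill-in-the-blank prompt, and a verbalizer
# maps each label to one word; the masked LM's score for each label word at the
# blank decides the prediction.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-base"  # assumption: any masked LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)


def classify(review: str) -> str:
    # Pattern: embed the input in a natural-language task description with a blank.
    prompt = f"{review} All in all, it was {tokenizer.mask_token}."
    # Verbalizer: one representative word per label (illustrative).
    verbalizer = {"positive": "great", "negative": "terrible"}

    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits

    # Locate the mask position and compare the scores of the label words there.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    scores = {}
    for label, word in verbalizer.items():
        word_id = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(" " + word))[0]
        scores[label] = logits[0, mask_pos, word_id].item()
    return max(scores, key=scores.get)


print(classify("This movie was a complete waste of time."))
```

Even without any finetuning, scoring label words in this way lets the pretrained model exploit the task description directly; the few available labeled examples are then used to sharpen exactly this behavior.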