NLP research has made significant progress thanks to rapid advances in Large Language Models (LLMs), which are immensely powerful yet remain brittle and opaque. In this talk, I argue for the importance of embracing variation in research, which will foster greater innovation and, in turn, trust. I will provide an overview of current challenges and show how they have eroded trust in our models. To counter these challenges, I propose embracing variation holistically. Doing so will be crucial for moving our field toward more trustworthy, human-facing NLP.