Outdoor Vision and Language Navigation with Large Language Models

Large language models (LLMs) are taking many NLP tasks by storm. How can they be incorporated into the challenging Vision and Language Navigation (VLN) task? I propose a project that uses an LLM not only to encode the instruction text but also as the controller for the visual agent. A second project is proposed that requires no explicit environment for VLN, instead probing the LLM for its implicit knowledge of street layouts.
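The LLM-as-controller idea can be sketched as a simple closed loop: the instruction and the running history of observations are fed to the LLM as a prompt, and the LLM's completion is parsed as the next action. The sketch below is purely illustrative; `query_llm` is a hypothetical stand-in for a real LLM API call, and `GridEnv` is a toy grid world standing in for a panoramic street environment such as Touchdown or StreetLearn.

```python
# Hypothetical sketch of an LLM-as-controller loop for outdoor VLN.
# query_llm and GridEnv are stand-ins, not real APIs.

ACTIONS = ["forward", "left", "right", "stop"]

def query_llm(prompt: str) -> str:
    # Stub for a real LLM call: follows a fixed plan so the sketch runs.
    plan = ["forward", "forward", "right", "forward", "stop"]
    step = prompt.count("Observation:") - 1
    return plan[min(step, len(plan) - 1)]

class GridEnv:
    # Toy stand-in for a street environment: a grid agent with a heading.
    HEADINGS = [(0, 1), (1, 0), (0, -1), (-1, 0)]  # N, E, S, W

    def __init__(self):
        self.position, self.heading = (0, 0), 0

    def observe(self) -> str:
        return f"at {self.position}, facing {'NESW'[self.heading]}"

    def step(self, action: str) -> None:
        if action == "left":
            self.heading = (self.heading - 1) % 4
        elif action == "right":
            self.heading = (self.heading + 1) % 4
        elif action == "forward":
            dx, dy = self.HEADINGS[self.heading]
            x, y = self.position
            self.position = (x + dx, y + dy)

def navigate(instruction: str, env: GridEnv, max_steps: int = 10):
    """Let the LLM pick actions from the instruction plus observation history."""
    prompt = f"Instruction: {instruction}\n"
    for _ in range(max_steps):
        prompt += f"Observation: {env.observe()}\nAction:"
        action = query_llm(prompt)
        prompt += f" {action}\n"
        if action == "stop":
            break
        env.step(action)
    return env.position

env = GridEnv()
final = navigate("Go two blocks north, then turn right and go one block east.", env)
print(final)  # the scripted plan ends one block east, two blocks north: (1, 2)
```

The environment-free variant of the second project would replace `GridEnv` with nothing at all: the same prompt format could ask the LLM to predict the observation itself (e.g. the next cross street), testing whether street layouts are encoded in its parameters.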