Named entity recognition with recursive and non-recursive convolutional auto-encoders

Named entity recognition (NER) is a quintessential sequence labeling task, performed by learning to assign to each token in a sequence a label that indicates whether the token is inside or outside of a named entity mention. In addition to recognition, the task usually also includes typing, i.e. assigning a class label such as "PERSON" to each recognized entity mention. Recent progress in entity typing has been made by learning representations of unknown words via subwords, e.g. using character-CNNs (Ma and Hovy, 2016) or subword embeddings (Heinzerling and Strube, 2017). Such approaches help learn the types of unknown entities (e.g. the -shire in the fictional Melfordshire indicates a location). However, robustly recognizing entity mentions remains a challenge. For example, Stanford NER identifies two persons in this sentence: Héctor "Macho" Camacho was a Puerto Rican professional boxer and singer.

In this talk, we present ongoing work on robust named entity recognition. Eschewing the token-based sequence labeling framework, we propose two auto-encoder models that can be applied to untokenized, raw text. The first, recursive, model can be seen as an extension of Byte-Pair Encoding (Sennrich et al., 2016) from subwords to phrases (a rough intuition for this merging idea is sketched below). The second model is a simpler, non-recursive version, which is more efficient and allows large-scale, unsupervised pretraining.
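
As a rough intuition for extending BPE from subwords to phrases, the minimal sketch below applies the standard BPE merge rule at the token level rather than the character level: the most frequent adjacent pair of symbols is repeatedly merged into a new symbol, so frequent multi-word units (e.g. entity mentions) emerge as single symbols. This illustrates only the merging procedure, not the proposed auto-encoder models; the toy corpus, number of merge steps, and helper names are illustrative assumptions.

from collections import Counter

def most_frequent_pair(sequences):
    # Count adjacent symbol pairs across all sequences; return the most frequent, or None.
    pairs = Counter()
    for seq in sequences:
        pairs.update(zip(seq, seq[1:]))
    return pairs.most_common(1)[0][0] if pairs else None

def merge_pair(seq, pair):
    # Replace every occurrence of the adjacent pair with a single merged symbol.
    merged, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
            merged.append(seq[i] + " " + seq[i + 1])
            i += 2
        else:
            merged.append(seq[i])
            i += 1
    return merged

# Toy corpus: symbols are whole tokens, so merges build up phrases.
corpus = [
    "New York City is in New York State".split(),
    "she moved to New York City".split(),
    "New York City has five boroughs".split(),
]

for step in range(3):  # a few merge steps for illustration
    pair = most_frequent_pair(corpus)
    if pair is None:
        break
    corpus = [merge_pair(seq, pair) for seq in corpus]
    print(f"merge {step + 1}: {pair}")

print(corpus[0])

After two merges, "New York" and then "New York City" become single symbols, which is the phrase-level analogue of BPE's subword merges.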