Title: Network Topic Models

Topic models such as Latent Dirichlet Allocation (LDA) are an unsupervised technique for extracting word clusters with topical character from a corpus of text documents. In essence, the clustering algorithm optimizes two criteria: 1) occurrences of the same word should be assigned to the same cluster, and 2) the words of a single document should be covered by only a few clusters. This simplicity makes the model attractive for many text-comprehension tasks, yet it also limits the obtainable quality of the clusters. To address this, many extensions have been suggested in the literature.

Text documents often come with an underlying link structure, that is, a network in which nodes are associated with text content. It is commonly assumed that connected nodes share some trait or interest that motivated the forming of the connection. In this talk, I will discuss several topic model extensions for textual network data. This includes the Citation Influence Model [1], which quantifies the strength of each citation in an acyclic citation graph through a topic model. Furthermore, I will discuss the Shared Taste Model [2], which learns topics that capture shared interests in an undirected social network. As communication between users is often off-limits due to privacy concerns, the model learns from public text written by users, such as tweets, tags, and posts. The goal is to predict which friend of a user is interested in a given piece of content. The source code for both models is available on GitHub [3].

[1] Dietz, Bickel, Scheffer. Unsupervised Prediction of Citation Influences. ICML 2007.
[2] Dietz, Gamari, Guiver, Snelson, Herbrich. De-Layering Social Networks with Shared Tastes of Friendships. ICWSM 2012.
[3] http://github.com/bgamari/bayes-stack/blob/stable/doc/usage.markdown
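To make the two clustering criteria concrete, here is a minimal collapsed Gibbs sampler for plain LDA on a toy corpus. This is an illustrative sketch only, not the code from [3]; the corpus, the hyperparameters (alpha, beta), and the number of topics K are all made up for the example. The sampling weight for each topic combines exactly the two criteria above: a document-topic count (few topics per document) and a topic-word count (same word, same topic).

```python
import random
from collections import defaultdict

# Toy corpus: two "fruit" documents and two "code" documents (invented data).
docs = [
    "apple banana apple fruit banana".split(),
    "fruit apple banana apple".split(),
    "python code bug code python".split(),
    "code bug python code".split(),
]
K, alpha, beta = 2, 0.1, 0.01          # topics and Dirichlet hyperparameters
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

random.seed(0)
# z[d][i] is the topic assigned to the i-th token of document d.
z = [[random.randrange(K) for _ in d] for d in docs]
ndk = [[0] * K for _ in docs]               # document-topic counts
nkw = [defaultdict(int) for _ in range(K)]  # topic-word counts
nk = [0] * K                                # tokens per topic
for d, doc in enumerate(docs):
    for i, w in enumerate(doc):
        k = z[d][i]
        ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

for _ in range(200):  # Gibbs sweeps
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            # Remove the token's current assignment from the counts.
            ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
            # p(topic t) ∝ (n_dt + alpha) * (n_tw + beta) / (n_t + V*beta):
            # the first factor favors few topics per document (criterion 2),
            # the second favors one topic per word (criterion 1).
            weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                       for t in range(K)]
            k = random.choices(range(K), weights)[0]
            z[d][i] = k
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1

# Most frequent words per topic after sampling.
top_words = [sorted(nkw[k], key=nkw[k].get, reverse=True)[:3] for k in range(K)]
print(top_words)
```

On this trivially separable corpus the sampler typically pulls the fruit words and the code words into different topics; the network extensions discussed in the talk add further structure (citation edges, friendships) on top of this basic mechanism.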