Abstract anaphora resolution (AAR) is a challenging task that aims to resolve anaphoric references of pronominal and nominal expressions that refer to abstract objects such as facts, events, propositions, actions, or situations in the (typically) preceding discourse. A central property of abstract anaphora is that it establishes a relation between the anaphor embedded in the anaphoric sentence and its (typically non-nominal) antecedent. In this talk, I will present a mention-ranking model that learns how abstract anaphors relate to their antecedents with an LSTM-Siamese Net [1]. I will describe how we harvested training data from a parsed corpus using a common syntactic pattern consisting of a verb with an embedded sentential argument. I will show results of the mention-ranking model trained for shell noun resolution, as well as results on an abstract anaphora subset of the ARRAU corpus. The latter corpus presents a greater challenge due to its mixture of nominal and pronominal anaphors and its greater range of confounders. Finally, I will present our ongoing work, which aims to answer the following questions: Should nominal and pronominal anaphors be learned independently? Is the harvested training data too noisy? Are natural and extracted data similar enough?
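As a rough illustration of the mention-ranking idea, the sketch below scores each candidate antecedent against the anaphoric sentence using a single shared ("Siamese") encoder and returns the candidates ranked by compatibility. All names, the toy vocabulary, and the mean-pooling encoder are illustrative stand-ins invented here; the actual model uses an LSTM-Siamese Net as described in [1].

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and parameters; a real system learns these from harvested data.
VOCAB = {w: i for i, w in enumerate(
    "that this it the fact event he said would happen decision".split())}
DIM = 8
EMB = rng.normal(size=(len(VOCAB), DIM))  # shared embedding table
W = rng.normal(size=(DIM, DIM))           # bilinear scoring matrix

def encode(tokens):
    """Shared encoder applied to BOTH the anaphoric sentence and each
    candidate antecedent (the 'Siamese' part). Mean pooling stands in
    for the LSTM used in the actual model."""
    vecs = [EMB[VOCAB[t]] for t in tokens if t in VOCAB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(DIM)

def score(anaphor_sent, candidate):
    """Bilinear compatibility between the two shared-encoder representations."""
    return float(encode(anaphor_sent) @ W @ encode(candidate))

def rank(anaphor_sent, candidates):
    """Mention ranking: return candidate indices sorted by score, best first."""
    scored = [(score(anaphor_sent, c), i) for i, c in enumerate(candidates)]
    return [i for _, i in sorted(scored, reverse=True)]

anaphor_sent = "he said that this would happen".split()
candidates = [
    "the decision".split(),                 # nominal candidate
    "that the event would happen".split(),  # sentential (non-nominal) candidate
]
order = rank(anaphor_sent, candidates)
```

Training would then push the gold antecedent's score above all competitors (e.g. with a max-margin loss over the ranked list), which is what "learning how abstract anaphors relate to their antecedents" amounts to in this setup.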