Metonymy is a figure of speech that is widespread in both spoken and written communication. Detecting metonymy in text is crucial for various natural language understanding tasks. In this talk, I discuss how we leverage selectional preferences for metonymy detection. We explore two existing resources, namely SelPref embeddings and BERT, to extract selectional preference information, and we additionally use a set of features inspired by prior work on figurative language. Our neural classification model, built on these resources and features, significantly outperforms most of the baselines on two different datasets, and its results are comparable to the remaining baseline on both datasets. This study demonstrates the potential of SelPref embeddings and BERT for modelling selectional preferences.