Commonsense knowledge relations as represented in ConceptNet are crucial for advanced NLU tasks, including argument analysis and the reconstruction of implicit premises. In preceding studies we found that missing knowledge between adjacent sentences in argumentative texts can in most cases be encoded as commonsense knowledge relations between concepts appearing in these sentences, as in the following example of two adjacent sentences from the Microtext Corpus:

(S1) We need independent media.
(S2) Everyone should pay for public broadcasters.
(Relation) public broadcasters HAS_PROPERTY independent.

However, such relations are difficult to learn due to specific properties of ConceptNet, such as the complexity of argument types and relation ambiguity. We examine the learnability of such relations with a neural open-world multi-label classification approach. Based on an in-depth study of the specific properties of ConceptNet, we investigate the impact of different relation representations and model variations. In contrast to current link prediction systems based on ConceptNet, our model assesses classification accuracy for individual relation types and achieves overall F1 scores of up to 68 in an open-world and 71 in a closed-world setting.

We then investigate the usefulness of commonsense knowledge for argumentation analysis in two settings: (1) classifying argumentative relations and (2) reconstructing implicit premises in argumentative texts. Argumentative relation classification aims to determine the type of relation (e.g., support or attack) that holds between two argument units. We find that adding links between premises, in the form of commonsense knowledge relations contained in ConceptNet or predicted by our classifier, helps the attention-based argumentative relation classification model proposed by Paul et al. (to appear).
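The core idea of open-world multi-label classification over relation types can be illustrated with a minimal sketch. The relation inventory, feature construction, and weights below are hypothetical toy placeholders, not the paper's actual model: each relation type receives an independent sigmoid score for a (head, tail) concept pair, so a pair can be assigned several relations, exactly one, or none at all (the open-world case).

```python
import math

# Hypothetical toy inventory: a small subset of ConceptNet relation types.
RELATIONS = ["HasProperty", "UsedFor", "IsA"]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def score_pair(head_vec, tail_vec, weights):
    """Score a (head, tail) concept pair against every relation type.

    Multi-label: each relation gets an independent sigmoid score,
    rather than competing in a single softmax over all relations.
    """
    feats = head_vec + tail_vec  # concatenate the two concept embeddings
    return {
        rel: sigmoid(sum(w * f for w, f in zip(weights[rel], feats)))
        for rel in RELATIONS
    }

def predict(scores, threshold=0.5):
    """Open-world decision: keep all relations scoring above the
    threshold; an empty list means 'no known relation holds'."""
    return [rel for rel, s in scores.items() if s >= threshold]
```

A trained model would learn the per-relation weights; here they only illustrate the decision rule. With a softmax classifier, some relation is always predicted; the independent thresholded sigmoids are what make the "none of the above" outcome possible.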
Establishing links between concepts in argumentative unit pairs is related to the long-standing problem of reconstructing implicit premises, known as enthymemes. In future work, we want to tackle this problem by combining the learning of inference paths for commonsense knowledge relations with targeted information extraction for filling knowledge gaps in argumentative texts.