The term ‘spurious correlations’ has been used in NLP to informally denote any undesirable feature–label correlation. However, a correlation can be undesirable because (i) the feature is irrelevant to the label (e.g. punctuation in a review), or (ii) the feature’s effect on the label depends on the context (e.g. negation words in a review); the latter case is ubiquitous in language tasks. While we want the model prediction to be independent of the feature in (i), since it is neither necessary nor sufficient for the label, an ideal model (e.g. a human) must rely on the feature in (ii), since it is necessary but not sufficient. Therefore, a finer-grained treatment of spurious correlations is needed to address the problem. We formalize this distinction using a causal model and the probabilities of necessity and sufficiency, which delineate the causal relations between a feature and a label. We then show that this distinction helps explain the results of existing debiasing methods on different spurious features, and demystifies surprising findings such as the encoding of spurious features in model representations even after debiasing.
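Concretely, the necessity/sufficiency distinction can be stated with the standard counterfactual probabilities of causation; the definitions below are a sketch assuming binary feature $X$ and label $Y$ (the full formalization in the paper may use task-specific variants):

```latex
% Probability of necessity: given that feature x and label y co-occurred,
% would the label have been y' had the feature been x'?
\mathrm{PN} = P(Y_{x'} = y' \mid X = x,\; Y = y)

% Probability of sufficiency: given that the feature was x' and the label y',
% would the label have been y had the feature been x?
\mathrm{PS} = P(Y_{x} = y \mid X = x',\; Y = y')
```

Under this reading, an irrelevant feature as in (i) has both low PN and low PS, so predictions should not depend on it, whereas a context-dependent feature as in (ii) has high PN but low PS, so a model must use it jointly with its context.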