“We use words to talk about the world. Therefore, to understand what words mean, we must have a prior explication of how we view the world.” – Hobbs (1987)
Researchers in Artificial Intelligence and (Computational) Linguistics have long cited the need for common-sense knowledge in language understanding. This knowledge is viewed as a key component in filling the gaps left by the telegraphic style of natural language statements: we are able to convey considerable information through a relatively sparse channel, presumably owing to a partially shared model at the start of any discourse.
Common-sense inference – inference based on common-sense knowledge – is possibilistic: it concerns things everyone would more or less expect to hold in a given context, but without the strength of logical entailment. Because natural language corpora exhibit human reporting bias (Gordon and Van Durme, 2013), systems that derive knowledge exclusively from such corpora may be more accurately considered models of language than of the world (Rudinger et al., 2015). Facts such as “A person walking into a room is very likely to be blinking and breathing” are usually unstated in text, so their real-world likelihoods do not align with language model probabilities. We would like systems capable of, e.g., reading a sentence that describes a real-world situation and inferring how likely other statements about that situation are to hold true in the real world. This capability is subtly but crucially distinct from the ability to predict other sentences reported in the same text, as a language model may be trained to do.
The JHU Ordinal Common-sense Inference (JOCI) corpus is a collection of 39k automatically generated common-sense inference pairs, manually labelled with the ordinal labels very likely, likely, plausible, technically possible, and impossible. JOCI was created to support ordinal common-sense inference, an extension of recognizing textual entailment: predicting ordinal human responses on the subjective likelihood of an inference holding in a given context.
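To make the five-point scale concrete, here is a minimal sketch (not part of the released corpus or its official evaluation) that maps JOCI's ordinal labels to integer ranks and computes a simple mean absolute distance between gold and predicted labels; the function and metric names are illustrative assumptions.

```python
# Illustrative sketch: JOCI's five ordinal labels as integer ranks.
# The rank values and the MAE-style comparison below are assumptions
# for exposition, not the dataset's official scoring scheme.
ORDINAL_SCALE = {
    "impossible": 1,
    "technically possible": 2,
    "plausible": 3,
    "likely": 4,
    "very likely": 5,
}

def ordinal_mae(gold_labels, predicted_labels):
    """Mean absolute distance on the ordinal scale (hypothetical metric)."""
    distances = [abs(ORDINAL_SCALE[g] - ORDINAL_SCALE[p])
                 for g, p in zip(gold_labels, predicted_labels)]
    return sum(distances) / len(distances)

gold = ["very likely", "plausible", "impossible"]
pred = ["likely", "plausible", "technically possible"]
print(ordinal_mae(gold, pred))  # (1 + 0 + 1) / 3 ≈ 0.667
```

Treating the labels as ordered ranks, rather than unordered classes, is what distinguishes this task from standard textual entailment classification: a prediction of "likely" for a gold label of "very likely" is a smaller error than a prediction of "impossible".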
Examples of JOCI
Statistics of JOCI
1. Sheng Zhang, Rachel Rudinger, Kevin Duh and Benjamin Van Durme. Ordinal Common-sense Inference. Transactions of the ACL, 2017.
Humans have the capacity to draw common-sense inferences from natural language: various things that are likely but not certain to hold based on established discourse, and are rarely stated explicitly. We propose an evaluation of automated common-sense inference based on an extension of recognizing textual entailment: predicting ordinal human responses on the subjective likelihood of an inference holding in a given context. We describe a framework for extracting common-sense knowledge from corpora, which is then used to construct a dataset for this ordinal entailment task. We train a neural sequence-to-sequence model on this dataset, which we use to score and generate possible inferences. Further, we annotate subsets of previously established datasets via our ordinal annotation protocol in order to analyze the distinctions between those datasets and our own.
Paper link: TACL site, PDF download
Slides from ACL 2017: PDF download
Data: The JOCI corpus
For example, many of the bridging inferences of Clark (1975) make use of common-sense knowledge, such as the following instance of “Probable part”: I walked into the room. The windows looked out to the bay. To resolve the definite reference the windows, one needs to know that it is probable that rooms have windows.