Universal decompositional representations for meaning
The Johns Hopkins Decompositional Semantics Initiative (or just: Decomp) is a series of efforts aimed at collecting and modeling human annotations of meaning by decomposing lexical meanings into component parts. Traditional notions of decompositional semantics commit to categorical structures: hard entailments represented by underlying semantic primitives that must be true, false, or underspecified. Here we focus instead on approaches to decomposition that involve fine-grained scalar judgments, which reflect the ambiguity of language and the underspecification of meaning in context: “how likely is it that this property holds, given this text?”, rather than “must it be the case that this property holds?”. We believe that collections of answers to these questions will serve as linguistically motivated constraints upon which one or more styles of latent meaning representation may be induced.
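The contrast between categorical and scalar decomposition can be sketched as two data records for the same semantic property. This is an illustrative sketch only, with hypothetical names; the actual Decomp datasets use their own schemas and rating scales.

```python
# Hypothetical sketch: the same property ("volitional") annotated
# categorically (traditional decomposition) vs. on a scalar likelihood
# scale (the Decomp approach). All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class CategoricalJudgment:
    """Traditional view: the property must be true, false, or underspecified."""
    property_name: str
    value: str  # "true" | "false" | "underspecified"

@dataclass
class ScalarJudgment:
    """Scalar view: how likely is it that this property holds, given this text?"""
    property_name: str
    likelihood: float  # rating normalized to [0, 1]

# "The cat chased the mouse." -- did the chaser act of its own volition?
categorical = CategoricalJudgment("volitional", "true")
scalar = ScalarJudgment("volitional", likelihood=0.9)
```

The scalar record retains the annotator's graded intuition that a categorical label would flatten to a single hard value.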
Rapid judgments from everyday speakers
In contrast to the traditional computational approach, which involves intensive development of an underlying meaning representation, complete with annotation manuals and expert discussions of the appropriate structure, here we aim to access decompositional representations through downstream entailments: what sorts of questions do everyday speakers of a language agree on when presented with a word, a sentence, or a document?
Semantic Proto-Roles (Fine-grained property annotations for discovering thematic roles and linking models.)
Selectional annotations and MegaAttitude (Acceptability annotations to comprehensively discover the selection behavior of large sets of predicates; currently focused on clause-embedding verbs.)
Common-sense Inference (Inferences based on common-sense knowledge.)
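One way to turn rapid judgments from everyday speakers into the scalar values described above is to collect a small number of Likert-style ratings per property and rescale their mean to the unit interval. The function below is a minimal sketch under that assumption; the scale bounds, the property name, and the aggregation by simple averaging are illustrative choices, not the initiatives' actual pipelines.

```python
# Hypothetical sketch: aggregating everyday speakers' 1-5 Likert ratings
# for a single semantic property into one scalar score in [0, 1].
def aggregate_ratings(ratings, scale_min=1, scale_max=5):
    """Average raw Likert ratings, then rescale the mean to [0, 1]."""
    if not ratings:
        raise ValueError("need at least one rating")
    mean = sum(ratings) / len(ratings)
    return (mean - scale_min) / (scale_max - scale_min)

# Five annotators rate "instigation" for the subject of
# "The cat chased the mouse." (values are invented for illustration)
score = aggregate_ratings([5, 4, 5, 4, 5])  # -> 0.9
```

Averaging is only one design choice: because the goal is to model the distribution of speaker judgments rather than a single gold label, later modeling stages may instead keep the full set of per-annotator ratings.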