“Towards Open World Event Understanding with Neuro Symbolic Reasoning”
Hosted by the Department of Computer Science & Engineering (CSE)
Abstract: Deep learning models for multimodal understanding have taken great strides in tasks such as event recognition, segmentation, and localization. However, there appears to be an implicit closed-world assumption in these approaches, i.e., they assume that all observed data is composed of a static, known set of objects (nouns), actions (verbs), and activities (noun+verb combinations) that are in 1:1 correspondence with the vocabulary from the training data. Under this assumption, one must account for every eventuality when training these systems to ensure their performance in real-world environments. In this talk, I will present our recent efforts to build open-world understanding models that leverage the general-purpose knowledge embedded in large-scale knowledge bases to provide supervision, using a neuro-symbolic framework based on Grenander's Pattern Theory formalism. Finally, I will talk about how this framework can be extended to abductive reasoning for commonsense natural language inference, in addition to commonsense reasoning for visual understanding.
For inquiries contact Maria Sanchez.