1095 Regent Drive, Boulder, CO 80309

Free Event

Neural Knowledge Representation and Reasoning

ABSTRACT: Making complex decisions in areas like science, government policy, finance, and clinical treatment requires integrating and reasoning over disparate data sources. While some decisions can be made from a single source of information, others require considering multiple pieces of evidence and how they relate to one another. Knowledge graphs (KGs) provide a natural approach to this type of problem: they can serve as long-term stores of abstracted knowledge organized around concepts and their relationships, and can be populated from heterogeneous sources including databases and text. KGs can facilitate higher-level reasoning, influence the interpretation of new data, and serve as a scaffold that enhances the acquisition of new information. The predominant method for representing knowledge is a symbolic graph over a fixed, human-defined schema encoding facts about entities and their relations, but this approach is brittle, lacks specificity, and is inevitably highly incomplete. At the other extreme, recent work on purely text-based knowledge models lacks the abstractions necessary for complex reasoning.

In this talk I will present work that combines neural models, rich structured ontologies, and unstructured raw text for representing knowledge. I will first discuss my work enhancing universal schema, a method for learning a latent schema over both existing structured resources and unstructured free text by embedding them jointly within a shared semantic space. Next, I inject additional hierarchical structure into the embedding space of concepts, resulting in more efficient statistical sharing among related concepts and improved accuracy in both fine-grained entity typing and linking. I then present initial work on representing knowledge in context, including a single model that simultaneously extracts all entities and long-range relations over full paragraphs while jointly linking those entities to a KG. I will conclude by discussing possible future directions for representing knowledge in context, and how these models can address long-standing shortcomings in knowledge representation by taking inspiration from cognitive theories of human memory systems.
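For readers unfamiliar with universal schema (Riedel et al., 2013), the core idea can be sketched in a few lines: entity pairs and relations, whether curated KG relations or raw textual patterns, live in one shared embedding space, and the plausibility of a fact is scored by a dot product. The snippet below is a minimal illustrative sketch, not the speaker's model; the relation names, entity pair, dimension, and random embeddings are all placeholders that a real system would learn from data.

```python
# Minimal sketch of universal-schema-style scoring (Riedel et al., 2013).
# KG relations and raw textual patterns share one embedding space with
# entity pairs; a fact's plausibility is a dot product. Everything here
# (names, dimension, random vectors) is a placeholder for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)
DIM = 50  # embedding dimension (hypothetical)

# "Columns" of the universal schema: a structured KG relation and a
# free-text pattern are treated uniformly.
relations = ["born_in", "arg1 was raised in arg2"]
# "Rows": entity pairs observed in a KG or in text.
pairs = [("Patrick", "Massachusetts")]

rel_emb = {r: rng.normal(size=DIM) for r in relations}
pair_emb = {p: rng.normal(size=DIM) for p in pairs}

def score(pair, relation):
    """Higher score = the model considers (pair, relation) more plausible."""
    return float(pair_emb[pair] @ rel_emb[relation])

# Training (omitted) would push observed (pair, relation) cells above
# sampled negatives, e.g. with a logistic or ranking loss, so that
# evidence from the text pattern transfers to the KG relation born_in.
for r in relations:
    print(f"{r:>28s}: {score(pairs[0], r):+.3f}")
```

Because structured relations and textual patterns are scored against the same pair embeddings, a fact missing from the KG can still receive a high score when supporting text exists, which is what makes this family of models useful for knowledge base completion.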

BIO: Patrick Verga is a final-year PhD candidate in the College of Information and Computer Sciences at UMass Amherst, advised by Andrew McCallum. His research contributes to knowledge representation and reasoning, with a focus on constructing large knowledge bases from unstructured text, with applications to general knowledge, commonsense, and biomedicine. Pat has interned at Google and the Chan Zuckerberg Initiative, and he received the Best Long Paper Award at EMNLP 2018. Over the past several years he has advised multiple M.S. and junior PhD students, resulting in published research on fine-grained entity typing, unsupervised parsing, and partially labeled named entity extraction. He holds M.S. and B.A. degrees in computer science as well as a B.S. in neuroscience.
