Thursday, November 14, 2024 3:30pm to 4:30pm
About this Event
1111 Engineering Drive, Boulder, CO 80309
Abstract: As deep neural networks (DNNs) demonstrate growing capabilities to solve complex tasks, there is a push to incorporate them as components in software and cyber-physical systems. To reap the benefits of these learning-enabled systems without propagating harms, there is an urgent need to develop tools and methodologies for evaluating their safety. Formal methods are a powerful set of tools for analyzing behaviors of software systems. However, formal analysis of learning-enabled systems is challenging; DNNs are notoriously difficult to interpret and lack logical specifications, the environments in which these systems operate can be difficult to model mathematically, and existing formal methods do not scale to these complex systems.
In this talk, I will present a bottom-up and a top-down perspective for the analysis of such systems. The bottom-up perspective focuses on analyzing DNNs in isolation. To address the challenges in interpreting and specifying DNN behavior, I will present a logical specification language designed to facilitate writing specifications about vision-based DNNs in terms of high-level, human-understandable concepts. I will then demonstrate how we can leverage vision-language models such as CLIP to encode and check these specifications. The top-down perspective focuses on analyzing learning-enabled systems as a whole. To address the challenges in modeling the environment and scaling formal analysis, I will present new probabilistic abstractions for DNN-based perception components in learning-enabled cyber-physical systems that make it feasible to formally analyze such systems.
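To give a flavor of the bottom-up idea, here is a minimal sketch of how a concept-level property might be checked with CLIP. This is an illustration only, not the specification language from the talk: the concept list, the stop-sign property, the threshold, and the check_spec helper are all hypothetical, and the sketch assumes the Hugging Face transformers implementation of OpenAI's CLIP.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP model (assumption: the openai/clip-vit-base-patch32 checkpoint).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical high-level concepts a specification might refer to.
concepts = ["a stop sign", "a speed limit sign", "a yield sign"]

def concept_scores(image):
    """Score how strongly CLIP associates the image with each concept."""
    inputs = processor(text=concepts, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (1, num_concepts); normalize to a distribution.
    return outputs.logits_per_image.softmax(dim=-1).squeeze(0)

def check_spec(image, classifier_prediction, stop_label=0, threshold=0.9):
    """Illustrative spec: if CLIP judges the image to depict a stop sign,
    the classifier under test must predict the 'stop' label."""
    scores = concept_scores(image)
    if scores[concepts.index("a stop sign")] >= threshold:
        return classifier_prediction == stop_label
    return True  # antecedent not satisfied; the spec holds vacuously
```

The appeal of this style of check is that the property is stated over human-understandable concepts rather than raw pixels, which is what makes such specifications writable in the first place.

For the top-down perspective, the sketch below shows the general flavor of a probabilistic abstraction of a perception component. The states, numbers, and independence assumption are purely illustrative and are not the abstractions presented in the talk.

```python
import numpy as np

# Illustrative abstraction: replace the perception DNN with a confusion
# matrix P[true_state, perceived_state] estimated on held-out data.
# Hypothetical states: 0 = obstacle ahead, 1 = road clear.
perception = np.array([
    [0.95, 0.05],   # obstacle present: detected 95% of the time
    [0.02, 0.98],   # road clear: false alarm 2% of the time
])

# Assume a deterministic controller: brake on "obstacle", drive on "clear".
# Safety question: if an obstacle is present, what is the probability the
# system fails to brake? Under the abstraction this is just the
# misdetection mass, independent of the DNN's internals.
p_miss = perception[0, 1]

# Probability of at least one missed detection over k control cycles,
# assuming (for illustration) independent errors across cycles.
k = 10
p_unsafe = 1 - (1 - p_miss) ** k
print(f"P(miss in one step) = {p_miss:.3f}, "
      f"P(miss within {k} steps) = {p_unsafe:.3f}")
```

The point of such an abstraction is that once the DNN is summarized by a few estimated probabilities, probabilistic analysis of the closed-loop system becomes tractable.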
Bio: Ravi Mangal is an assistant professor in the Department of Computer Science at Colorado State University. He is interested in all aspects of designing and applying formal methods for assuring the correctness and safety of software systems. His current research focuses on developing formal methods for Trustworthy Machine Learning, i.e., for safety, robustness, and explainability analysis of machine learning models, as well as formal safety analysis of systems with such learning-enabled components. Previously, he was a postdoctoral researcher at Carnegie Mellon University in the Security and Privacy Institute (CyLab), and before that he graduated with a PhD in Computer Science from the Georgia Institute of Technology.
Please join us in ECCR 265 or on Zoom: https://cuboulder.zoom.us/j/91008309605