Silicon Flatirons Conference: Explainable Artificial Intelligence: Can We Hold Machines Accountable?
With artificial intelligence (AI) becoming more ubiquitous, an increasing number of significant decisions are being made by machines. For example, some companies follow the recommendations of AI systems when hiring. In the legal world, judges increasingly rely upon AI systems in bail or sentencing decisions for criminal defendants. A job applicant or a criminal defendant might reasonably wonder: Why did the AI system come to the decision that it did?
The reality is that many of these AI decisions are difficult for humans to understand, and AI-based outcomes cannot always be explained. Are there ways to make AI decisions more explainable, more understandable and more accountable?
Legal scholars have characterized AI (machine-learning) decision-making as “black box” decision-making, noting that it raises problems of fairness, legitimacy and error. While computer science has long discussed the concept of explainability, different notions—for example, making algorithmic decisions understandable to the individuals subject to those decisions—have taken hold in the legal community. Even within recent legal scholarship, different concepts of accountability and explainability abound.
This conference will advance the state of knowledge surrounding AI and explainability for multiple constituencies: private sector firms that are actively developing artificial intelligence systems, governments and policymakers navigating possible regulatory approaches in this area, academics studying this space, and the public at large.
Friday, May 3, 9:30 a.m. to 4:10 p.m.
Wolf Law, Wittemyer Courtroom
2450 Kittredge Loop Drive, Boulder, CO 80309