How We Test Self-Driving Cars, and How We Explain Their Failures

Autonomous systems fail, often without revealing why. In safety-critical domains like driving, these systems must be able to account for their actions for reasons of safety, liability, and trust. An explanation, a model-dependent reason or justification for the decision of the autonomous agent under assessment, is a key component of post-mortem failure analysis, and also of pre-deployment verification. I will present a framework that uses a model and commonsense knowledge to detect and explain unreasonable vehicle scenarios, even for errors it has not encountered before. In the second part of the talk, I will motivate the use of explanations as a testing framework for autonomous systems. I will conclude by discussing new challenges at the intersection of XAI and autonomy, toward autonomous vehicle systems that are explainable by design.
