The recent use of machine learning in high-stakes applications has pushed many industrial actors to rethink how safety-critical systems (such as planes or cars) can be certified before being manufactured and used. Key questions have emerged, such as: How do we properly define the safety of systems with learning components? How can safety be formally guaranteed? Which new mathematical guarantees are needed from the ML research community?
This workshop will bring together machine learning researchers with international authorities and industrial experts from sectors where certification and reliability are critical issues. It will consist of invited talks, a poster session, and group discussions.
The goal is to present key open industrial questions, traditional methods in critical software verification and certification (and the challenges AI raises for them), as well as an introduction to several promising mathematical theories: distribution-free uncertainty quantification, deep learning theory, formal methods, and rigorous numerics.
This workshop is organized by the DEEL project. We hope it will help shape the future research agenda toward the medium-term objective of certifying critical systems involving AI components.