Marta Kwiatkowska (University of Oxford)
Provable Robustness Guarantees for Artificial Intelligence: Progress and Challenges
Artificial intelligence powered by neural networks has made significant advances in the past few years, and its deployment is accelerating. However, neural network models are susceptible to adversarial perturbations of inputs, such as imperceptible manipulations of images or structural perturbations of graph data, that can drastically change the network output. Beyond inference, data errors or poisoning attacks that inject malicious data into the training process can result in wrong or harmful model predictions, and, in multi-agent reinforcement learning, strategic manipulations can steer the learned policy towards undesirable goals. Undetected, such issues pose a significant risk to the safety and security of high-stakes applications. Formal methods, and particularly formal verification techniques, have become indispensable components of rigorous software engineering methodologies and are well placed to provide the needed assurance in the form of provable robustness guarantees on model predictions. Using illustrative examples, this lecture will overview recent progress and future challenges in research aimed at developing provably robust and strategyproof AI systems.
Marta Kwiatkowska is Professor at the University of Oxford and Fellow of Trinity College. Kwiatkowska is a leader in formal methods and verification who pioneered probabilistic and quantitative approaches in verification that are now central to ensuring the reliability of computing systems. She led the development of the probabilistic model checker PRISM, winner of the 2024 ETAPS Test-of-Time Tool Award, which has been widely adopted in academia and industry. PRISM allows for the analysis and verification of systems that exhibit uncertainty, for example wireless communication protocols and power management schemes, and has been used to detect and correct flaws in real-world protocols. Kwiatkowska’s recent work focuses on the safety and robustness of neural networks, aimed at developing methodologies for computing provable robustness guarantees based on verification techniques such as SAT solving and convex relaxation. Her work has been supported by two ERC Advanced Grants, VERIWARE and FUN2MODEL, the EPSRC projects FAIR and Erlangen AI Hub, and the EU European Lighthouse on Safe and Secure AI (ELSA). Kwiatkowska won the Royal Society Milner Award, the BCS Lovelace Medal and the Van Wijngaarden Award, and received an honorary doctorate from KTH Royal Institute of Technology in Stockholm. She is a Fellow of the Royal Society, a Fellow of the ACM, a Member of Academia Europaea and an International Honorary Member of the American Academy of Arts and Sciences.