Ensuring Neural Network Robustness: Problems and Opportunities
Ekaterina Komendantskaya
University of Southampton, United Kingdom

Thu., Nov. 21, 2024, 1 p.m.
This seminar is held online.
Online: via the Zoom link of our Chair

Google Scholar


Machine learning methods have recently seen rapid development, in terms of the variety of model architectures (feedforward, recurrent, and convolutional neural networks, transformers), training methods (gradient descent, adversarial and property-based training), and the sheer size of models. Thanks to these developments, machine learning is being incorporated into an ever-growing number of applications, ranging from traditional computer vision to more recent domains such as conversational agents and scientific computing. However, neural networks, new and old alike, suffer from a range of safety and security problems, such as vulnerability to adversarial attacks, data poisoning, and catastrophic forgetting. Blindly adapting neural networks to safety-critical domains may lead to a whole range of issues that machine-learning-free applications were not prone to. This problem has led to the development of neural network verification, a hybrid field that merges formal methods and security with machine learning, with the purpose of developing robust tools and methods to guarantee safe neural network operation. In this talk, I will give an overview of some of the pitfalls and challenges in adapting neural networks to different domains, and discuss their common symptoms and underlying technical reasons. I will survey existing methods for safeguarding neural networks and applications that incorporate them, focusing in particular on the available methods and tools of neural network verification.
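For readers unfamiliar with the field, a brief illustration of what such verification typically establishes (a standard formulation, not taken from the abstract above): most neural network verification tools prove local robustness, i.e. that a classifier $f$ assigns the same label to every input within a small perturbation ball around a given point $\hat{x}$:

$$\forall x.\; \|x - \hat{x}\|_\infty \le \epsilon \;\Longrightarrow\; \arg\max_i f_i(x) = \arg\max_i f_i(\hat{x})$$

Adversarial attacks search for a counterexample $x$ to this property, whereas verification tools aim to prove that no such counterexample exists within the $\epsilon$-ball.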


Brief CV

Ekaterina Komendantskaya is a Professor of Computer Science at the University of Southampton and at Heriot-Watt University in the UK. She completed her undergraduate degree in Mathematical Logic at Moscow State University and her PhD in Mathematics at University College Cork in Ireland. Since then, she has worked at INRIA in France and at the Universities of St Andrews and Dundee in the UK, before taking up her current posts at Heriot-Watt and Southampton. She is an expert in methods linking AI and Machine Learning on the one hand, and Logic and Programming Languages on the other. She leads the Lab for AI and Verification (www.laiv.uk). She has received more than £19.5M of funding from EPSRC/UKRI, NCSC, and SICSA (including large doctoral training center grants). Currently she is leading the £3M EPSRC project "AISEC: AI Secure and Explainable by Construction" and is delivering a training program in the Center for Doctoral Training "DAIR: Dependable and Deployable AI for Robotics" in Edinburgh.


