Research and Technologies for Society and Industry


September 18-20, 2024


Safe Artificial Intelligence

Maurizio Mongelli

Consiglio Nazionale delle Ricerche (CNR)
Institute of Electronics, Computer and Telecommunication Engineering (IEIIT)


The intrinsic statistical error introduced by any machine learning algorithm is viewed with severe criticism by safety and security engineers. Research in the EU is showing increasing interest in the trustworthiness of artificial intelligence (AI)1. Safe AI means handling assurance under the uncertainties of AI systems and understanding under which conditions autonomous actuations may lead to hazards with detrimental effects on humans or the environment. Examples include the mitigation of dangerous maneuvers by autonomous cars, inaccurate clinical diagnoses by artificial doctors, and wrong decision making in natural language processing, robotics, and many other sectors (cyberwarfare, energy, finance). The picture below illustrates some applications of interest.

Scientific overview
Some approaches to safe AI lie in the definition of regions of attraction [1], formal verification [2], addressing the well-known vulnerabilities of deep learning [3], or the intelligibility and reliability of predictors [4, 5]. Many steps, however, remain to be taken towards an exhaustive framing of the problem. Regulation activities in the automotive [6] and avionics [7] fields have recently been established with the mandate to provide safety certification of AI.

The tutorial covers the following topics.
1. Why does AI pose threats to safety-critical applications?
2.1 Examples in autonomous driving and autonomous diagnosis; 2.2 projects of the 1st Horizon Europe Trustworthy AI call2
3. Existing standards
4. Solutions outside AI: SOTIF risk analysis; solutions inside AI: eXplainability (if-then rule generation) and reliability (e.g., error control, out-of-distribution detection, failsafe fallback) of algorithms
5. Open issues
6. Opportunities for industry (Gartner trends, existing spin-offs and SMEs in Italy and abroad)
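As a minimal, hypothetical sketch of if-then rule generation (one of the explainability techniques mentioned above, not the tutorial's actual method), the snippet below learns a single-threshold rule on a toy dataset; the feature names and data are invented for illustration:

```python
import numpy as np

def learn_stump_rule(X, y, feature_names):
    """Exhaustively search for one feature/threshold split that best
    separates safe (y=1) from unsafe (y=0) samples, and return it as a
    human-readable if-then rule.

    A deliberately minimal stand-in for rule learners (e.g., decision
    trees); real rule-based models extract many nested rules.
    """
    best = None  # (misclassifications, feature index, threshold)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            pred = (X[:, j] <= t).astype(int)  # rule: safe if feature <= t
            err = int(np.sum(pred != y))
            if best is None or err < best[0]:
                best = (err, j, t)
    _, j, t = best
    return f"IF {feature_names[j]} <= {t} THEN safe ELSE unsafe"

# Toy data: speed and obstacle distance for a hypothetical vehicle scenario.
X = np.array([[30.0, 12.0], [45.0, 8.0], [80.0, 3.0], [95.0, 2.0]])
y = np.array([1, 1, 0, 0])  # 1 = safe maneuver, 0 = hazardous
print(learn_stump_rule(X, y, ["speed_kmh", "obstacle_dist_m"]))
# → IF speed_kmh <= 45.0 THEN safe ELSE unsafe
```

The value of such rules for safety engineering is that the learned decision boundary becomes directly inspectable and auditable, unlike the internals of a deep network.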
Existing standards include SAE/EUROCAE in avionics, SOTIF in automotive, the IEEE Recommended Practice for Deep Learning, and ISO/IEC TR 5469.
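On the reliability side, split conformal prediction is a generic technique underlying conformal safety regions such as those in [4, 5]. The sketch below is a hedged, generic illustration (not the CONFIDERAI score function itself): from calibration nonconformity scores, a calibrated quantile yields prediction sets whose marginal coverage is at least 1 − α under exchangeability:

```python
import numpy as np

def split_conformal_threshold(cal_scores, alpha):
    """Return the conformal quantile of calibration nonconformity scores.

    With n calibration points, the ceil((n+1)(1-alpha))/n empirical
    quantile guarantees marginal coverage >= 1 - alpha for exchangeable data.
    """
    n = len(cal_scores)
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(cal_scores, min(q, 1.0), method="higher")

def prediction_set(probs, threshold):
    """Labels whose nonconformity score (1 - predicted probability)
    does not exceed the calibrated threshold."""
    return [k for k, p in enumerate(probs) if 1.0 - p <= threshold]

# Toy calibration scores (1 - probability of the true label) from a
# hypothetical classifier; in practice these come from held-out data.
rng = np.random.default_rng(0)
cal_scores = rng.uniform(0.0, 1.0, size=200)
tau = split_conformal_threshold(cal_scores, alpha=0.1)

# For a new input with class probabilities, build the prediction set.
print(prediction_set([0.85, 0.4, 0.001], tau))
```

When the prediction set contains a single "safe" label, the input lies in a region where the guarantee applies; ambiguous or empty sets can trigger a failsafe fallback, connecting coverage guarantees to the out-of-distribution handling listed among the tutorial topics.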

[4] S. Narteni, A. Carlevaro, J. Guzzi, M. Mongelli, “Ensuring Safe Social Navigation via Explainable Probabilistic and Conformal Safety Regions,” 2nd World Conference on eXplainable Artificial Intelligence (xAI-2024), Valletta, Malta, 17-19 July, 2024.
[5] A. Carlevaro, S. Narteni, M. Muselli, F. Dabbene, M. Mongelli, “CONFIDERAI: CONFormal Interpretable-by-Design score function for Explainable and Reliable Artificial Intelligence,” 12th Symposium on Conformal and Probabilistic Prediction with Applications, COPA 2023, Limassol, Cyprus, 13-15 Sept., 2023.
[6] [7]


Maurizio Mongelli obtained his Ph.D. degree in Electronics and Computer Engineering from the University of Genoa (UNIGE) in 2004. The doctorate was funded by Selex Communications S.p.A. (Selex). He worked for both Selex and the Italian Telecommunications Consortium (CNIT) from 2001 until 2010. During his doctorate and in the following years, he worked on quality of service for military networks with Selex. From 2007 to 2008, he coordinated a joint laboratory between UNIGE and Selex dedicated to the study and prototype implementation of Ethernet resilience mechanisms. He was an adjunct professor at the military academy of Chiavari from 2007 to 2011 for various courses on telecommunication networks. He was the CNIT technical coordinator of a research project on satellite emulation systems, funded by the European Space Agency, and spent three months working on the project at the German Aerospace Center in Munich. Since 2012 he has been a researcher at the Institute of Electronics, Computer and Telecommunication Engineering (IEIIT) of the National Research Council (CNR), where he works on machine learning applied to bioinformatics and cyber-physical systems, with responsibility for the coordination, on the CNR side, of 10 funded projects (1 at European level) in these sectors. Since 2020 he has been responsible for the IEIIT Genoa site (10 researchers and 10 collaborators per year, on average) and for the machine learning group (5 PhD students, 1 collaborator). Since 2018 he has been an adjunct professor at the University of Genoa for master's and doctoral courses on machine learning. He is a member of the following university committees: the Engineering Technology for Strategy and Security (Strategos) master's degree, since 2018, and the national PhD in AI in Italy, started in 2021. He is co-author of over 100 international scientific papers and 2 patents, and is a member of the SAE G-34/EUROCAE WG-114 ‘AI in Aviation’ committee.