Qualitest Group (UK)
27 March – 13:00-14:00
Ramón Gómez de la Serna room
AI Testing challenge – Safety, Privacy and Security
In recent years we have seen growing demand for testing mission-critical systems that include or rely on AI technology from safety and security points of view – self-driving cars, advanced medical devices, decision-support applications in aerospace and defense Command and Control systems, and many more.
While we can all quickly see how challenging it is to test the safety of a self-driving car, we also need to consider how to make sure that data and machine learning will not produce undesired behaviors such as:
• Decisions biased by race, ethnicity, or gender as a result of the data used for training
• Unfair AI outcomes
• Unethical usage of AI
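One common way to turn the bias concern above into a concrete test is a statistical fairness check on model outputs. The sketch below (the group labels, prediction data, and tolerance are hypothetical examples, not from the talk) computes the demographic parity difference – the gap in positive-prediction rates between groups:

```python
# Illustrative fairness check: demographic parity difference.
# Group names, predictions, and the tolerance are hypothetical examples.

def positive_rate(predictions):
    """Fraction of positive (1) predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_by_group):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) split by a protected attribute.
preds_by_group = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1],  # 3/8 = 0.375 approval rate
}

gap = demographic_parity_difference(preds_by_group)
print(f"demographic parity difference: {gap:.3f}")  # 0.375 here

# A fairness test would flag the model when the gap exceeds an
# agreed tolerance (the 0.5 used here is only for illustration).
assert gap <= 0.5, "parity gap exceeds tolerance"
```

In practice the tolerance and the choice of fairness metric are decided with domain and legal stakeholders, not by the test team alone.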
In my presentation I will explain the different challenges around testing AI systems, covering the following topics:
• AI testing – domain of problem explanation (5-10 minutes)
• Safety of AI systems (10 minutes)
o Key concerns
o How to select training data for ML
o How to validate safety – key AI testing techniques
• Security concerns that AI presents (10 minutes)
• Privacy concerns (5 minutes) – usage of data by AI for model training, but also how AI must not infringe privacy and must comply with privacy regulations (e.g. GDPR)
• Further explanations (10-15 minutes) on how to test non-deterministic systems like AI, how to transform legacy, rule-based system testing into AI-relevant testing, and the types of machine learning (supervised, unsupervised, and semi-supervised) and how they can affect the safety of AI systems.
Why Does AI Testing Require a Different Approach?
Expected results are not clear-cut; tests don't simply pass or fail, and testers must evaluate the results and determine whether they are "good enough". Testing AI requires specialized test design to create the right data samples, in the right amounts, to assure that the application provides accurate results that are safe to consume and comply with the relevant privacy regulations. AI systems can be trained on hundreds of thousands of records, with real-world input combinations that can run into the millions and beyond. When exhaustive testing becomes impossible, testers need data science skills and must understand the principles of AI to effectively design AI tests and interpret the results. The risks are high: AI technology is expensive, and the projects may be key components of an organization's competitive strategy, which means quality is critical.
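The "good enough" evaluation described above can be expressed as a statistical acceptance test rather than a per-case pass/fail assertion. A minimal sketch, in which the classifier stub, sample size, noise level, and 85% threshold are all hypothetical choices:

```python
import random

# Sketch of a statistical acceptance test for a non-deterministic component.
# The classifier stub, sample size, and threshold are hypothetical choices.

random.seed(42)  # fix the seed so the run is reproducible

def classifier(x):
    """Stand-in for a model under test: a noisy threshold classifier."""
    return 1 if x + random.gauss(0, 0.1) > 0.5 else 0

def accuracy_over_sample(samples):
    """Fraction of labelled samples the model classifies correctly."""
    correct = sum(1 for x, label in samples if classifier(x) == label)
    return correct / len(samples)

# Hypothetical labelled evaluation set: label 1 when the input exceeds 0.5.
xs = [random.random() for _ in range(1000)]
samples = [(x, 1 if x > 0.5 else 0) for x in xs]

acc = accuracy_over_sample(samples)
print(f"accuracy: {acc:.3f}")

# Instead of requiring every case to pass, the suite accepts the model
# only if aggregate accuracy clears an agreed "good enough" threshold.
assert acc >= 0.85, "model accuracy below acceptance threshold"
```

Individual cases near the decision boundary may flip between runs; the acceptance criterion is the aggregate statistic, which is the shape such evaluations typically take for non-deterministic systems.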
Aviram Shotten heads the Knowledge and Innovation (KI) group at Qualitest, the world's largest pure-play testing company. The KI group is responsible for solution architecture, product development, presales, training, processes, and knowledge management. In his 15 years in testing, Aviram has overseen the delivery of dozens of system and quality engineering projects across the financial services, medical devices, aerospace and defense, and media and entertainment sectors.