STARWEST 2022 Concurrent Session: Testing Machine Learning Models



Wednesday, October 5, 2022 - 11:30am to 12:30pm

Testing Machine Learning Models

More companies are building ML and AI systems and applications, but these systems often lack rigorous testing because many testers don't know how to approach them. After building and testing many different ML models and systems, and talking to ML teams at small and large organizations, one thing always stands out: "We need better testing and automation in our MLOps lifecycle." We'll start by demystifying machine learning, breaking down a prediction application so the audience better understands the "magic algorithms" behind the scenes. We'll explore an example of an ML system that didn't have proper testing, resulting in millions of dollars lost. You'll learn what MLOps is, and then we'll dive into Behavioral Testing examples that are close to what we already do as testers, but with an ML twist (see the sketch below). We'll learn about Adversarial Attacks and how we can exploit security weaknesses and "fool" the model. Lastly, we'll discuss AI Fairness and how testing can help us catch and prevent harmful biases to protect our users. We need more testers in the world of AI, and this presentation is a great way to get started on that path.
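To make the Behavioral Testing idea concrete, here is a minimal sketch of an "invariance" test, the pattern popularized by the CheckList work on behavioral testing of NLP models: perturb an irrelevant detail of the input and assert that the prediction does not change. The `sentiment_model` function below is a hypothetical stand-in, not a real model or any API from the session; in practice you would call your own model's prediction endpoint.

```python
# Sketch of a behavioral "invariance" test for an ML model.
# Assumption: `sentiment_model` is a placeholder keyword matcher
# standing in for a real sentiment classifier.

def sentiment_model(text: str) -> str:
    """Hypothetical stand-in model: classifies by keyword matching."""
    return "positive" if "great" in text.lower() else "negative"

def test_prediction_invariant_to_name_change():
    # Changing an irrelevant detail (the person's name) should
    # not flip the predicted sentiment label.
    template = "{name} said the flight was great."
    labels = {
        sentiment_model(template.format(name=name))
        for name in ["Alice", "Mohammed", "Wei", "Maria"]
    }
    assert len(labels) == 1, f"Prediction changed with name: {labels}"
```

Run it with pytest like any other unit test. The point of the pattern is that it reuses familiar test tooling: only the assertion target (a model's prediction rather than a function's return value) is new.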

Stealth Startup

Carlos Kidman is a Director of Engineering at an AI company and was formerly an Engineering Manager at Adobe. He is also an instructor at Test Automation University, with courses on architecture, design, containerization, and machine learning. He is the founder of QA at the Point, a testing and quality community in Utah, and does consulting, workshops, and speaking engagements all over the world. He streams programming and other tech topics on Twitch, has a YouTube channel, builds open-source software like Pylenium and PyClinic, and is an ML/AI practitioner. He loves fútbol, anime, gaming, and spending time with his wife and kids.