Agile + DevOps East 2023 Concurrent Session : Addressing Security Risks In LLM-Based Applications


Wednesday, November 8, 2023 - 11:45am to 12:45pm


Large Language Models continue to grow in popularity as people experiment with them, applying them to new problems and pushing new code into production applications. Growing along with this popularity is an engineering approach that advocates outsourcing more and more of an application’s functionality to these LLMs. But what seems like an advantage on the surface masks hidden costs and risks. Ultimately, you may end up with less reliable code that’s harder to troubleshoot and fix, accruing technical debt along the way. Integrating LLMs into your application can also increase its attack surface, giving attackers more vectors to explore. All is not lost, though. With the right approach, you can strike a balance that addresses these issues. In this presentation, we’ll look at the risks involved in engineering applications with LLM functionality and outline steps you can take to reduce your exposure.

Nathan Hamiel
Kudelski Security

Nathan Hamiel is Senior Director of Research at Kudelski Security, where he leads the fundamental and applied research team. Part of the Innovation group, his team focuses on privacy, advanced cryptography, emerging technologies, and special projects. Nathan focuses on emerging and disruptive technologies, and his research includes new approaches to difficult security problems as well as the safety, security, and privacy of artificial intelligence. He has presented his research at global security events, including Black Hat, DEF CON, HOPE, ShmooCon, SecTor, ToorCon, and many others. He is also a veteran member of the Black Hat review board, where he serves as the track lead for the AI, ML, and Data Science track.