Talk Title: Towards AI Security – An Interplay of Stress-Testing and Alignment
Talk Abstract: As large language models (LLMs) become increasingly integrated into critical applications, ensuring their robustness and alignment with human values is paramount. This talk explores the interplay between stress-testing LLMs and alignment strategies to secure AI systems against emerging threats. We begin by motivating the need for rigorous stress-testing approaches that expose vulnerabilities, focusing on three key challenges: hallucinations, jailbreaking, and poisoning attacks. Hallucinations—where models generate incorrect or misleading content—compromise reliability. Jailbreaking methods that bypass safety filters can be exploited to elicit harmful outputs, while data poisoning undermines model integrity and security. After identifying these challenges, we propose alignment methods that embed ethical and security constraints directly into model behavior. By systematically combining stress-testing methodologies with alignment interventions, we aim to advance AI security and foster the development of resilient, trustworthy LLMs.
Bio: Furong Huang is an Associate Professor in the Department of Computer Science at the University of Maryland. Specializing in trustworthy machine learning, AI security, AI for sequential decision-making, and generative AI, Dr. Huang focuses on applying principled methods to practical challenges in contemporary computing, developing efficient, robust, scalable, sustainable, ethical, and responsible machine learning algorithms. Her contributions have been recognized with awards including best paper awards, the MIT Technology Review Innovators Under 35 Asia Pacific, the MLconf Industry Impact Research Award, the NSF CRII Award, the Microsoft Accelerate Foundation Models Research Award, the Adobe Faculty Research Award, three JP Morgan Faculty Research Awards, and selection as a finalist for AI Researcher of the Year (AI in Research) at the Women in AI Awards North America.