Online · Pacific Time (PT)

Agentic AI Frontier Seminar

A seminar series on Agentic AI: models, tools, memory, multi-agent systems, online learning, and safety, featuring leading researchers and industry experts.

Join us! Register here to get seminar updates.

Recordings (with speaker consent) will be posted on our YouTube channel.

Upcoming Seminar

Online · 2025-11-21 · 09:00–10:00 PT

Talk Title: Reasoning Guardrails for the Agentic Web

Muhao Chen · Assistant Professor · UC Davis

Talk Abstract: Large language models are shifting from text predictors to agents that operate on the web and within software systems. This transition amplifies safety risks across dialogue, the environments agents control, and compliance with real-world policies. This talk presents ThinkGuard, a reasoning-based guardrail trained via mission-focused distillation to acquire deliberative thinking without costly manual annotation. By reasoning over goals, constraints, and latent hazards, ThinkGuard generalizes to implicit, complex, and previously unseen risks. I will also outline guardrail extensions that enforce policy compliance for web and system agents, and omnimodal guardrails that vet and steer interactions involving images, audio, and video. Together, these techniques transform guardrails from brittle filters into adaptive, explanatory safety layers that preserve utility while measurably reducing failure in agentic workflows.

Bio: Muhao Chen is an Assistant Professor in the Department of Computer Science at UC Davis, where he leads the Language Understanding and Knowledge Acquisition (LUKA) Group. He received his Ph.D. from the Department of Computer Science at UCLA and his B.S. in Computer Science from Fudan University. His research focuses on robust and accountable ML, particularly on accountability and security issues of large language models and agentic AI. He is a co-founder and the secretary of the ACL Special Interest Group in NLP Security (SIGSEC). His work has been recognized with EMNLP Outstanding Paper Awards (2023, 2024), an ACM SIGBio Best Student Paper Award (2020), faculty research awards from Amazon (2022, 2023) and Cisco, and funding support from multiple NSF, DARPA, IARPA, and industry grants.

Focus Areas

Foundation Models & Core Capabilities

Agent Infrastructure & Tooling

Learning, Adaptation & Feedback

Multi-Agent Systems & Social Intelligence

Evaluation, Safety & Alignment

Applications & Vertical Use Cases

Interface & Interaction Design

Governance, Ethics & Ecosystem Building

Organizing Committee

Ming Jin

Virginia Tech

He is an assistant professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech. He works on trustworthy AI, safe reinforcement learning, and foundation models, with applications to cybersecurity, power systems, recommender systems, and cyber-physical systems (CPS).

Shangding Gu

UC Berkeley

He is a postdoctoral researcher in the Department of EECS at UC Berkeley. He works on AI safety, reinforcement learning, and robot learning.

Yali Du

KCL

She is an associate professor in AI at King's College London. She works on reinforcement learning and multi-agent cooperation, on topics such as generalization, zero-shot coordination, evaluation of human and AI players, and social agency (e.g., human-involved learning, safety, and ethics).