Securing AI Innovation: A Proactive Approach
Monday, September 22, 2025, 2:30 PM - 3:15 PM
Amphitheatre

The increasing deployment of Large Language Models (LLMs) and agentic solutions introduces complex security challenges, often stemming from insufficient integrated governance, proactive threat modeling, dedicated red teaming, and AI-specific detection. Securing this evolving landscape requires foresight and a clear understanding of AI's unique attack surface. This talk provides practical insights from a year of securing and attacking AI deployments, revealing common security missteps and critical vulnerabilities in production AI systems. We emphasize proactive measures such as AI-specific threat modeling and targeted red team exercises, along with robust governance and response frameworks. Designed for executive leadership and technical professionals, this session offers actionable guidance to navigate AI security complexities and foster resilient AI adoption.