Compass' Ryan Glynn on Why LLMs Shouldn't Make Security Decisions - But Should Power Them
Episode: 74
Date: Jan 27, 2026
Ryan Glynn, Staff Security Engineer at Compass, has a practical AI implementation strategy for security operations. His team built machine learning models that removed 95% of on-call burden from phishing triage by combining traditional ML techniques with LLM-powered semantic understanding.
He also explores where AI agents excel versus where deterministic approaches still win, why tuning detection rules beats prompt-engineering agents, and how to build company-specific models that solve your actual security problems rather than chasing vendor promises about autonomous SOCs.
Topics discussed:
Language models excel at documentation and semantic understanding of log data for security analysis purposes
Using LLMs to create binary feature flags for machine learning models enables more flexible detection engineering
Agentic SOC platforms sometimes claim to analyze data that, in practice, they never actually query
Tuning detection rules directly proves more reliable than trying to prompt-engineer agent analysis behavior
Intent classification in email workflows helps automate triage of forwarded and user-reported phishing attempts
Custom ML models addressing company-specific burdens can achieve 95% reduction in analyst workload for targeted problems
Alert tagging systems with simple binary classifications enable better feedback loops for AI-assisted detection tuning
Context gathering costs in security make efficiency critical when deploying AI agents across diverse data sources
Query language differences across SIEM platforms create challenges for general-purpose LLM code generation
Explainable machine learning models remain essential for security decisions requiring human oversight and accountability
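The "binary feature flags" idea discussed above can be sketched in a few lines. This is a hypothetical illustration, not Compass's actual pipeline: `llm_flags()` stands in for an LLM asked yes/no questions about an email (stubbed here with keyword checks), and the hand-set weights make every verdict auditable flag by flag, in the spirit of the explainable-models point.

```python
# Hypothetical sketch: LLM-produced binary feature flags feeding an
# explainable scoring model. In a real system, llm_flags() would call an
# LLM with yes/no questions; here it is stubbed with keyword heuristics.

def llm_flags(email_text: str) -> dict:
    """Stand-in for an LLM answering binary questions about an email."""
    text = email_text.lower()
    return {
        "requests_credentials": "password" in text or "verify your account" in text,
        "urgency_language": "urgent" in text or "immediately" in text,
    }

# Hand-tuned weights: the final score is a transparent sum, so an analyst
# can see exactly which flags drove an escalation.
WEIGHTS = {"requests_credentials": 0.5, "urgency_language": 0.3}

def phishing_score(email_text: str) -> float:
    flags = llm_flags(email_text)
    return sum(WEIGHTS[name] for name, fired in flags.items() if fired)

def triage(email_text: str, threshold: float = 0.6) -> str:
    """Auto-close low-scoring reports; escalate the rest to a human."""
    return "escalate" if phishing_score(email_text) >= threshold else "auto-close"
```

Because the flags are binary and the weights are explicit, tuning the model means adjusting a weight or a threshold, not re-prompting an agent, which matches the episode's point about detection tuning over prompt engineering.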