Policies Don't Fail – Context Does
Policies are static, while context is dynamic. Effective control requires understanding how intent propagates across the entire system.
Threat analysis, vulnerability research, and field notes from the frontier of AI security.
Policies are static, while context is dynamic. Effective control requires understanding how intent propagates across the entire system.
The actual problem: enabling teams to use AI securely at the speed they operate.
Most CISOs cannot clearly define 'AI security' or articulate how to manage it. The path from this fog to secure-by-default AI deployment starts with understanding where security management breaks down.
Defense-in-depth for AI agents requires securing both input and output layers.
AI agent deployments are creating the most lopsided risk dynamic in modern security.
But the ones who ship anyway are learning painful lessons. Here's what's actually breaking.
When Salesforce Agentforce was compromised, traditional security tools saw valid HTTPS requests. The prompt injection exfiltrating CRM data went undetected.
AI assistants shipped fast to stay competitive. Now companies are quietly backfilling the security controls that should have been there from day one.
Two critical vulnerabilities show how LLM-generated content can bypass traditional security controls and execute malicious code.
Most companies adopted the same AI security strategy: ship features fast, figure out security later. Here's why that doesn't end well.
Traditional security looks for malformed syntax. LLM attacks exploit meaning through natural language. It's a fundamentally different attack surface.