When Salesforce Agentforce was compromised by ForcedLeak (CVSS 9.4), traditional security tools saw valid HTTPS requests to legitimate endpoints. The prompt injection exfiltrating CRM data went undetected.
Traditional SIEM tools are not designed to detect LLM-specific attacks.
The Scenario
Consider a production chatbot that processes support tickets and answers account questions. A prompt injection attack makes it leak PII from the knowledge base. The SIEM captures nothing actionable.
What the Detection Stack Sees
- Valid HTTPS POST to our API endpoint
- Successful authentication (legitimate user session)
- Normal response time and status codes
- Standard cloud logging (Lambda execution, API Gateway metrics)
What It Completely Missed
- The malicious prompt structure
- The LLM generating output that violated our data policy
- Sensitive data being returned in the chatbot response
- The entire attack chain from injection to exfiltration
The Root Issue
SIEM tools log infrastructure events, not LLM behavior.
They see:
- Authentication (who)
- Network connections (where)
- File access (what resources)
- Process execution (what ran)
They don’t see:
- What prompt was sent to the LLM
- What the model actually returned
- Whether output violated data policies
- Model confidence scores
- Token-level risk analysis
Closing the Instrumentation Gap
LLM-specific logging requires instrumentation at the model interaction layer:
- Log every prompt, response, and policy evaluation with risk scores
- Forward LLM telemetry to your SIEM for correlation with infrastructure events
- Build detections for prompt injection patterns, not just API anomalies
- Treat LLM responses as untrusted input to downstream systems
- Implement prevention mechanisms to block repeated exploitation
Most security teams are relying on API metadata alone, missing the attack vectors that matter. LLM security requires instrumentation at the model interaction layer, not just the infrastructure layer.