What’s Actually Breaking
Security questionnaires got 10x harder
Prospects now ask: “How do you prevent prompt injection?” “What’s your data retention policy for LLM inputs?” “Can you prove compliance with AI Act requirements?” Most founders have no answers.
Existing security certifications don’t cover LLM risks
SOC 2 validates your infrastructure controls. It doesn’t address prompt injection, model jailbreaking, or unsafe AI outputs. Auditors are starting to ask AI-specific questions that your compliance program doesn’t cover.
Traditional security tools are blind
Your WAF sees valid HTTPS. Your DLP sees encrypted API calls. Your SIEM logs authentication events. None of them see the prompt injection that exfiltrates customer data through your chatbot.
Shadow AI creates compliance gaps
Developers are using Cursor with personal API keys, pasting production data into Claude Desktop, testing with ChatGPT. Your security team has zero visibility. One customer discovered their support team had been pasting SSNs into ChatGPT for 6 months.
The velocity tax is real
Every AI feature now requires manual security review. 6-12 month approval cycles while your competitor ships in weeks. Teams either wait and lose market position, or bypass security and create risk.
What Actually Works
- Output validation, not just input validation. LLMs generate the exploit payload - you need to validate what comes out, not just what goes in.
- Runtime monitoring with policy enforcement. Block unsafe outputs before they reach users. Log everything for audit trails.
- Centralized LLM access control. One gateway for all LLM API calls. Provision keys, enforce policies, track spend, audit usage.
- Defense-in-depth for LLM apps. Treat LLM responses as untrusted input to downstream systems. Principle of least privilege for AI agents.
The AI security market is fragmenting between developer tools (fast, self-serve) and enterprise platforms (comprehensive, top-down). The winners will nail both: low friction for developers, compliance coverage for enterprise buyers.
If you’re shipping AI features to customers, you need answers to security questions before prospects ask them. Because “we’re working on it” loses deals.
What security questions are your enterprise prospects asking about AI features?