
What LiteLLM Ditching Delve Means For Your AI Security

Your AI operations rely on robust security, but what happens when a compliance partner is accused of faking data? LiteLLM just ditched Delve, and you need to know why.

Admin
Mar 31, 2026
3 min read

Editorial Note

Reviewed and analyzed by the ScoRpii Tech Editorial Team.

Imagine building critical AI applications only to discover that the foundation of their security compliance might be, well, shaky. You're not alone in that concern. Today, LiteLLM, the AI gateway trusted by millions of developers worldwide, dropped a bombshell: they're publicly cutting ties with the controversial compliance startup Delve and will redo their security certifications from scratch with a new auditor.

Key Details

If you're one of the millions of developers relying on LiteLLM's popular AI gateway, you've likely seen their public announcement. LiteLLM is severing its relationship with Delve, a compliance startup, and is set to overhaul its entire security certification process. This isn't just a minor vendor swap; it signals a serious commitment to addressing security concerns at a foundational level, potentially impacting critical infrastructure for AI development.
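
For context, an AI gateway like LiteLLM sits directly in the request path between your application and the model providers it calls, which is exactly why its compliance posture matters. The snippet below is a minimal illustration of that role, assuming the standard litellm Python package, an OpenAI-style model name, and an API key in the environment; it is not drawn from LiteLLM's announcement or the TechCrunch report.

# Minimal sketch of LiteLLM's gateway role (illustrative; the model name is an assumption).
# Assumes the `litellm` package is installed and OPENAI_API_KEY is set in the environment.
from litellm import completion

response = completion(
    model="gpt-4o-mini",  # hypothetical model choice, for illustration only
    messages=[{"role": "user", "content": "Summarize our data-retention policy."}],
)

# Every prompt and response passes through this gateway layer, so the gateway
# vendor's security and compliance posture directly affects your data.
print(response.choices[0].message.content)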

The catalyst for this decisive action is a set of serious accusations leveled against Delve. The startup has been embroiled in controversy, specifically alleged to have misled customers about their compliance status. Reports indicate Delve was generating fake data and using auditors who appeared to rubber-stamp reports rather than conduct rigorous, independent assessments. This created a dangerous illusion of security for companies that entrusted their compliance needs to Delve, jeopardizing their own integrity.

LiteLLM, with CTO Ishaan Jaffer at the helm of their technical operations, has explicitly stated they will redo their security certifications with a different company and auditor. While the specific new partners haven't been named, this move signals a pivot towards ensuring robust, verifiable compliance. Companies like Vanta are known in the industry for their comprehensive security compliance platforms, representing the kind of rigorous standards LiteLLM is likely seeking to uphold moving forward.

Why This Matters

For you, the developer or enterprise leveraging AI, this news isn't just about two companies. It's about trust in the underlying infrastructure that powers your innovations. When a crucial component like an AI gateway, used by millions, turns out to have compliance assurances that may be built on shaky ground, it raises immediate questions about the integrity of your own AI systems. You need to know that your data, your models, and your users are truly protected, not just covered by a checklist sign-off.

This situation highlights a growing imperative within the tech industry: the critical need for genuine, transparent security compliance. In a world where AI is rapidly integrating into every sector, you can't afford to cut corners on security. This development serves as a stark reminder for all companies to rigorously vet their compliance partners. Don't simply accept certifications at face value; probe the methodologies, understand the auditing process, and ensure your chosen partners prioritize legitimate security over superficial appearances. Your reputation, and the security of your users, depends on it.

The Bottom Line

Ultimately, the message for you is clear: don't take your AI security compliance for granted. If you're using LiteLLM, rest assured they're taking decisive action to re-establish trust and robust security. But more broadly, if your operations rely on third-party compliance services, it's time to ask tougher questions and conduct deeper due diligence. Demand transparency, scrutinize audit reports, and partner with companies committed to genuine security. This event is a wake-up call for the entire AI ecosystem to prioritize verifiable trust above all else. Your vigilance today will safeguard your AI initiatives tomorrow.

Originally reported by TechCrunch.
