scroll it
synack-OWASP-AI-LLM-Top10-Blog

How the OWASP Top 10 for LLM Applications Supports the AI Revolution

0% read

The OWASP Foundation recently introduced a new version of the OWASP Top 10 for Large Language Model Applications—which, as its name suggests, describes “the top 10 most critical vulnerabilities often seen in LLM applications”—to help defenders secure the ever-increasing number of services into which the tech industry is incorporating this take on artificial intelligence.

“The OWASP Top 10 for Large Language Model Applications started in 2023 as a community-driven effort to highlight and address security issues specific to AI applications,” OWASP said on its website. “Since then, the technology has continued to spread across industries and applications, and so have the associated risks. As LLMs are embedded more deeply in everything from customer interactions to internal operations, developers and security professionals are discovering new vulnerabilities—and ways to counter them.”

Saying that AI has “continued to spread” since 2023 is an understatement. These technologies are practically everywhere these days: CES 2025 featured countless manufacturers incorporating AI into TVs, refrigerators, lawn mowers and robots; Apple and Google made chatbots and generative AI tools core parts of their mobile platforms; and seemingly every company’s website features some kind of LLM-backed chatbot, search engine or other tool.

There’s no one-size-fits-all solution for securing these products and services—or for exploiting them. Convincing a chatbot to disclose sensitive information requires a different skillset than remotely controlling an autonomous lawn mower, for example, even though both are based on some form of “AI.” The consequences would also be quite different for each, especially if the chatbot is operated by a healthcare company or an organization in a similarly regulated sector.

The same could be said of vulnerabilities in web apps; however, OWASP has provided a list of the 10 most common vulns in web apps since 2003. That list has changed a lot over the years as companies have embraced new programming languages, frameworks and back-end infrastructure, but it’s always served as a valuable reference for infosec professionals who need to know what kinds of attacks the products and services they have to defend are likely to face.

That’s why Synack is a sponsor of the OWASP Top 10 for Large Language Model Applications. Although no two evaluations of AI-based tools or LLMs will be exactly alike—which is why the depth of talent afforded by the Synack Red Team is crucial for this kind of security testing—having a common standard that’s supported by an organization that’s been helping secure the web since before “chatbot” became a household term benefits this entire industry.

“The OWASP Top 10 for LLM project has been a critical resource for ethical hackers seeking to deepen their understanding of emerging AI risks,” Synack Red Team Community Director Ryan Rutan said in a statement ahead of the new version’s publication. “OWASP’s new guidance and resources will benefit the security research community and help CISOs find actionable solutions to new vulnerabilities. Synack is proud to support this important initiative.”

The software and services we rely on are poised to continue introducing new LLM-enabled AI features. As that happens, it will be critical to test those features, whether for unintentional biases and missing safeguards or unintended capabilities and vulnerabilities. The rush to embrace these technologies doesn’t nullify the need to make informed decisions about their security and safety.

Read more about how Synack uses and helps organizations test AI/LLM security on our Solutions page.