scroll it
synack-blog-banner-repost-RH-ISAC-AI (1)

How to Stay Secure Amid AI Mania

23
Oct 2023
Wade Lance, Global Field CISO
0% read

Generative AI has unlocked enormous opportunities in the cybersecurity arena, from streamlined vulnerability management to faster incident response. But the technology has also opened up new attack paths that still need to be addressed.

Picture this: A French-speaking security researcher finds a critical vulnerability in a major U.S. retailer’s mobile app. They draft an email warning, but they run it by an AI chatbot to fix English language snafus before notifying the company.

Now imagine an attacker has been prowling the same large language model app for sensitive information. Using carefully crafted prompts, this bad actor goads the generative AI into sharing technical details of the French researcher’s submission before the retailer can patch. Next thing you know, there’s a cybersecurity breach. 

That turn of events isn’t far-fetched: The OWASP Foundation has released a Top 10 list of vulnerabilities affecting large language models, and Sensitive Information Disclosure made the cut. “LLM applications can inadvertently disclose sensitive information, proprietary algorithms, or confidential data, leading to unauthorized access, intellectual property theft, and privacy breaches,” OWASP said, adding that one possible attack scenario is “crafted prompts used to bypass input filters and reveal sensitive data.” 

As field CISO for Synack, I’ve seen firsthand how generative AI is here to stay, risks notwithstanding. AI technology is crucial to maintaining competitiveness across a range of industries. Security is no exception. 

But as the example above shows, generative AI can also be a jackpot for bad actors. Striking a balance between embracing the technology and adding safeguards will be key to avoiding breaches. The board is watching: According to a recent Proofpoint survey of 650 board members around the world, 59% view tools like ChatGPT as a security risk.

Organizations leveraging AI to enhance their capabilities should pause and ask a few important questions. How are we ensuring the AI we use isn’t making us vulnerable? How are we hardening AI infrastructure and addressing privacy concerns? How are we identifying any complex or security-sensitive tasks too delicate to assign to AI? 

As with most technologies, our people training will determine our success or failure. Are we training people when it is acceptable to use a public AI engine – and when they must use a private engine based on the content? Are we developing “sanitization scripts” and other processes so that sensitive concepts are removed from submissions? Are we training our people on the use of those scripts and testing their accuracy with that and other processes?  Are managers being trained to query their teams on their use of AI and how to support as well as keep them in policy?

The answers could determine whether your organization can effectively capitalize on the AI frenzy or if you make headlines as a victim of uniquely AI-driven cyber vulnerabilities.

AI’s potential comes with pitfalls

It’s easy to see why organizations are so enthusiastic about AI. A recent GitLab survey found that nine in ten DevSecOps teams are using AI in software development or plan to use it. Senior policymakers and intelligence officials were buzzing about the technology at the Billington Cybersecurity Summit earlier this month, touting its transformational potential (while warning that bad actors are using it, too). The U.S. federal government maintains a running list of agencies’ AI use cases, from identifying high-risk Social Security claims to helping cyber analysts better understand anomalies and potential threats via probabilistic models.

In the security testing space, AI can be used to automate data collection for reconnaissance, enhance scanning by improving the accuracy of automated tools and reduce noise by adding intelligence from outside sources like social media. AI can also equip testers to find the best course of action for exploiting a target – and help generate high-quality reports so security researchers can spend more time testing. Finally, AI techniques can be used to analyze results and, with proper privacy safeguards in place, allow exploitable software flaws at one organization to be applied at others so they can all be fixed. And that’s the tip of the iceberg for how AI can help overtaxed security teams improve their organization’s cyber posture. 

That said, AI can’t do everything. Even powerful generative AI platforms like ChatGPT struggle when faced with abstract problems. Their capacity to write code is convenient but still error-prone: A recent Stanford University study found that participants using an AI assistant wrote less secure code than their manual-only counterparts. The creativity of human intelligence should still come into play when building secure software and testing it for vulnerabilities.

Cyber defenders will have to evolve to incorporate AI and keep pace with attackers who are already using it. By striking the right balance between a human-led and AI-driven approach, organizations stand the best chance of realizing AI’s enormous potential while steering clear of alarming new vulnerabilities.