December 12, 2024 By Doug Bonderud 3 min read

2024 has been a banner year for artificial intelligence (AI). As enterprises ramp up adoption, however, malicious actors have been exploring new ways to compromise systems with intelligent attacks.

With the AI landscape rapidly evolving, it’s worth looking back before moving forward. Here are our top five AI security stories for 2024.

Can you hear me now? Hackers hijack audio with AI

Attackers can fake entire conversations using large language models (LLMs), voice cloning and speech-to-text software. This method is relatively easy to detect, however, so researchers at IBM X-Force carried out an experiment to determine if parts of a conversation can be captured and replaced in real-time.

They discovered that not only was this possible, but relatively easy to achieve. For the experiment, they used the keyword “bank account” — whenever the speaker said bank account, the LLM was instructed to replace the stated bank account number with a fake one.

The limited use of AI made this technique hard to spot, offering a way for attackers to compromise key data without getting caught.

Mad minute: New security tools detect AI attacks in less than 60 seconds

Reducing ransomware risk remains a top priority for enterprise IT teams. Generative AI (gen AI) and LLMs are making this difficult, however, as attackers use generative solutions to craft phishing emails and LLMs to carry out basic scripting tasks.

New security tools, such as cloud-based AI security and IBM’s FlashCore Module, offer AI-enhanced detection that helps security teams detect potential attacks in less than 60 seconds.

Explore AI cybersecurity solutions

Pathways to protection — mapping the impact of AI attacks

The IBM Institute for Business Value found that 84% of CEOs are concerned about widespread or catastrophic attacks tied to gen AI.

To help secure networks, software and other digital assets, it’s critical for companies to understand the potential impact of AI attacks, including:

  • Prompt injection: Attackers create malicious inputs that override system rules to carry out unintended actions.
  • Data poisoning: Adversaries tamper with training data to introduce vulnerabilities or change model behavior.
  • Model extraction: Malicious actors study the inputs and operations of an AI model and then attempt to replicate it, putting enterprise IP at risk.

The IBM Framework for Securing AI can help customers, partners and organizations worldwide better map the evolving threat landscape and identify protective pathways.

ChatGPT 4 quickly cracks one-day vulnerabilities

The bad news? In a study using 15 one-day vulnerabilities, security researchers found that ChatGPT 4 could correctly exploit them 87% of the time. The one-day issues included vulnerable websites, container management software tools and Python packages.

The better news? ChatGPT 4 attacks were far more effective when the LLM had access to the CVE description. Without this data, attack efficacy fell to just 7%. It’s also worth noting that other LLMs and open-source vulnerability scanners were unable to exploit any one-day issues, even with the CVE data.

NIST report: AI prone to prompt injection hacks

A recent NIST report — Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations — found that prompt injection poses serious risks for large language models.

There are two types of prompt injection: Direct and indirect. In direct attacks, cyber criminals enter text prompts that lead to unintended or unauthorized actions. One popular prompt injection method is DAN, or Do Anything Now. DAN asks AI to “roleplay” by telling ChatGPT models they are now DAN, and DAN can do anything, including carry out criminal activities. DAN is now on at least version 12.0.

Indirect attacks, meanwhile, focus on providing compromised source data. Attackers create PDFs, web pages or audio files that are ingested by LLMs, in turn altering AI output. Because AI models rely on continuous ingestion and evaluation of data to improve, indirect prompt injection is often considered gen AI’s biggest security flaw since there are no easy ways to find and fix these attacks.

All eyes on AI

As AI moved into the mainstream, 2024 saw a significant uptick in security concerns. With gen AI and LLMs continuing to evolve at a breakneck pace, 2025 promises more of the same, especially as enterprise adoption continues to rise.

The result? Now more than ever, it’s critical for companies to keep their eyes on AI solutions, and keep their ears to the ground for the latest in intelligent security news. 

More from Artificial Intelligence

Are successful deepfake scams more common than we realize?

4 min read - Many times a day worldwide, a boss asks one of their team members to perform a task during a video call. But is the person assigning tasks actually who they say they are? Or is it a deepfake? Instead of blindly following orders, employees must now ask themselves if they are becoming a victims of fraud.Earlier this year, a finance worker found themselves talking on a video meeting with someone who looked and sounded just like their CFO. After the…

How to calculate your AI-powered cybersecurity’s ROI

4 min read - Imagine this scenario: A sophisticated, malicious phishing campaign targets a large financial institution. The attackers use emails generated by artificial intelligence (AI) that closely mimic the company's internal communications. The emails contain malicious links designed to steal employee credentials, which the attackers could use to gain access to company assets and data for unknown purposes.The organization's AI-powered cybersecurity solution, which continuously monitors network traffic and user behavior, detects several anomalies associated with the attack, blocks access to the suspicious domains…

ISC2 Cybersecurity Workforce Study: Shortage of AI skilled workers

4 min read - AI has made an impact everywhere else across the tech world, so it should surprise no one that the 2024 ISC2 Cybersecurity Workforce Study saw artificial intelligence (AI) jump into the top five list of security skills.It’s not just the need for workers with security-related AI skills. The Workforce Study also takes a deep dive into how the 16,000 respondents think AI will impact cybersecurity and job roles overall, from changing skills approaches to creating generative AI (gen AI) strategies.Budgets…

Topic updates

Get email updates and stay ahead of the latest threats to the security landscape, thought leadership and research.
Subscribe today