June 14, 2024 By Jonathan Reed 4 min read

How many companies intentionally refuse to use AI to get their work done faster and more efficiently? Probably none: the advantages of AI are too great to deny.

The benefits AI models offer to organizations are clear, especially for optimizing critical operations and outputs. However, generative AI also introduces risk. According to the IBM Institute for Business Value, 96% of executives say adopting generative AI makes a security breach likely in their organization within the next three years.

CISA Director Jen Easterly said, “We don’t have a cyber problem, we have a technology and culture problem. Because at the end of the day, we have allowed speed to market and features to really put safety and security in the backseat.” And no place in technology reveals the obsession with speed to market more than generative AI.

AI training sets ingest massive amounts of valuable and sensitive data, which makes AI models a juicy attack target. Organizations cannot afford to bring unsecured AI into their environments, but they can’t do without the technology either.

To bridge the gap between the need for AI and its inherent risks, it’s imperative to establish a solid framework to direct AI security and model use. To help meet this need, IBM recently announced its Framework for Securing Generative AI. Let’s see how a well-developed framework can help you establish solid AI cybersecurity.

Securing the AI pipeline

A generative AI framework should be designed to help customers, partners and organizations understand the likeliest attacks on AI. From there, defensive approaches can be prioritized to secure generative AI initiatives quickly.

Securing the AI pipeline involves five areas of action:

  1. Securing the data: How data is collected and handled
  2. Securing the model: AI model development and training
  3. Securing the usage: AI model inference and live use
  4. Securing AI model infrastructure
  5. Establishing sound AI governance

Now, let’s see how each area is oriented to address AI security threats.

1. Secure the AI data

Hungry AI models consume massive amounts of data, which data scientists, engineers and developers will access for development purposes. However, developers might not have security high on their list of priorities. If mishandled, your sensitive data and critical intellectual property (IP) could end up exposed.

In AI model attacks, exfiltration of underlying data sets is likely to be one of the most common attack scenarios. Therefore, security fundamentals are the first line of defense for these data sets: discovering and classifying sensitive data, encrypting it at rest and in transit, and enforcing strict access controls for everyone in the development pipeline.
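As one small illustration of these fundamentals, a lightweight pre-ingestion check can flag records containing obvious personal data before they ever reach a training set. The patterns and function name below are hypothetical, a minimal sketch rather than a production data-classification scanner:

```python
import re

# Illustrative patterns only; real classifiers cover far more PII types.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_record(text: str) -> list[str]:
    """Return the labels of any PII patterns found in a training record."""
    return [label for label, pat in PII_PATTERNS.items() if pat.search(text)]
```

Records that come back non-empty can be quarantined or redacted before the data set is handed to model developers.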

2. Secure the AI model

When developing AI applications, data scientists frequently use pre-existing, freely available machine learning (ML) models sourced from online repositories. However, as with any open-source component, security is often not built in.

Every organization must consider the AI security risks versus the benefits of accelerated model development. However, without proper AI model security, the downside risk can be significant. Remember, hackers have access to online repositories as well, and backdoors or malware can be injected into open-source models. Any organization that downloads an infected model is wide open to attack.
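The tampered-model risk is concrete: many ML models are distributed as Python pickle files, a format that can execute arbitrary code when loaded. As a rough, hypothetical sketch of what a static pre-load scan might do, the standard-library snippet below walks a pickle stream's opcodes and flags references to modules commonly abused in malicious payloads (dedicated model-scanning tools go much further):

```python
import pickletools

# Modules whose presence in a serialized model warrants a closer look.
SUSPICIOUS_MODULES = {"os", "posix", "nt", "subprocess", "sys", "builtins"}

def scan_pickle(data: bytes) -> list[str]:
    """Return suspicious global references found in a pickle stream,
    without ever unpickling (and thus executing) the payload."""
    findings = []
    recent_strings = []  # string constants feed STACK_GLOBAL lookups
    for opcode, arg, _pos in pickletools.genops(data):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings.append(arg)
        elif opcode.name == "GLOBAL":  # older protocols: "module name"
            module = arg.split(" ", 1)[0]
            if module in SUSPICIOUS_MODULES:
                findings.append(arg)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) >= 2:
            module, name = recent_strings[-2], recent_strings[-1]
            if module in SUSPICIOUS_MODULES:
                findings.append(f"{module}.{name}")
    return findings
```

A model file that triggers any finding should be rejected or examined in a sandbox before it is ever loaded into a development environment.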

Furthermore, API-enabled large language models (LLMs) present a similar risk. Hackers can target API interfaces to access and exploit data being transported across the APIs. And LLM agents or plug-ins with excessive permissions further increase the risk for compromise.

To secure AI models, organizations should scan downloaded models for embedded malware and backdoors before use, harden and monitor the APIs that expose large language models, and grant LLM agents and plug-ins only the minimum permissions they need.

3. Secure the AI usage

When AI models first became widely available, waves of users rushed to test the platforms. It wasn’t long before hackers were able to trick the models into ignoring guardrails and generating biased, false or even dangerous responses. All this can lead to reputational damage and increase the risk of costly legal headaches.

Attackers can also attempt to analyze input/output pairs and train a surrogate model to mimic the behavior of your organization’s AI model. This means the enterprise can lose its competitive edge. Finally, AI models are also vulnerable to denial of service attacks, where attackers overwhelm the LLM with inputs that degrade the quality of service and ramp up resource use.
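One common mitigation for this kind of resource-exhaustion attack is per-client throttling in front of the inference endpoint. The token-bucket sketch below is a generic, hypothetical example of the idea, not part of any specific product:

```python
import time

class TokenBucket:
    """Per-client token bucket for throttling LLM inference requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate            # tokens refilled per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

A client that exceeds its burst allowance is rejected until tokens refill, which also caps the request volume available for model-extraction probing.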

Best practices for AI model usage security include:

  • Monitoring for prompt injections
  • Monitoring for outputs containing sensitive data or inappropriate content
  • Detecting and responding to data poisoning, model evasion and model extraction
  • Deploying machine learning detection and response (MLDR), which can be integrated into security operations solutions such as IBM Security® QRadar® to deny access to, quarantine or disconnect compromised models
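As a concrete (and deliberately simplified) example of prompt-injection monitoring, a first-pass filter can flag inputs that match known jailbreak phrasings before they reach the model. Real deployments layer classifier-based detection on top of heuristics like these; the patterns here are illustrative only:

```python
import re

# A tiny, hypothetical sample of jailbreak phrasings; real lists are
# much larger and continuously updated.
INJECTION_PATTERNS = [
    r"ignore (all|any|previous|prior) (instructions|rules)",
    r"disregard .*system prompt",
    r"you are now",
]

def looks_like_injection(prompt: str) -> bool:
    """Flag prompts that match known injection phrasings for review."""
    text = prompt.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

Flagged prompts can be blocked outright or routed to a stricter moderation path, and the same approach applies in reverse to scanning model outputs.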

4. Secure the infrastructure

A secure infrastructure must underpin any solid AI cybersecurity strategy. Strengthening network security, refining access control, implementing robust data encryption and deploying vigilant intrusion detection and prevention systems around AI environments are all critical for securing infrastructure that supports AI. Additionally, allocating resources towards innovative security solutions tailored for safeguarding AI assets should be a priority.
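One small, concrete piece of this infrastructure hardening is verifying the integrity of model artifacts before they are loaded into production, for example by pinning a known-good SHA-256 digest at build time. A minimal sketch (the function name and workflow are hypothetical):

```python
import hashlib

def verify_artifact(path: str, expected_sha256: str) -> bool:
    """Compare a model file's SHA-256 digest against a pinned, trusted value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Stream in chunks so large model files don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256
```

Deployment pipelines can refuse to serve any artifact whose digest no longer matches the value recorded when the model was approved.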

5. Establish AI governance

Artificial intelligence governance entails the guardrails that ensure AI tools and systems remain safe and ethical. It establishes the frameworks, rules and standards that direct AI research, development and application to ensure safety, fairness and respect for human rights.

IBM is an industry leader in AI governance, as shown by its presentation of the IBM Framework for Securing Generative AI. As entities continue to give AI more business process and decision-making responsibility, AI model behavior must be kept in check, monitoring for fairness, bias and drift over time. Whether induced or not, a model that diverges from what it was originally designed to do can introduce significant risk.
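Monitoring for drift can start simply. The population stability index (PSI), sketched below, compares the binned distribution of a model's recent outputs against a baseline; values above roughly 0.1 to 0.2 are commonly treated as a sign of meaningful drift. This is a generic statistical check, not a description of IBM's framework:

```python
import math

def psi(expected: list[float], actual: list[float], eps: float = 1e-6) -> float:
    """Population Stability Index between two binned distributions.

    Both inputs are per-bin proportions (each list sums to ~1.0);
    eps guards against log(0) for empty bins.
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```

A governance process can compute this on a schedule and open a review ticket whenever the index crosses the agreed threshold, turning "monitor for drift" into an operational control.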
