January 31, 2025 | By Doug Bonderud

“A computer can never be held accountable, therefore a computer must never make a management decision.”

– IBM Training Manual, 1979

Artificial intelligence (AI) adoption is on the rise. According to the IBM Global AI Adoption Index 2023, 42% of enterprises have actively deployed AI, and 40% are experimenting with the technology. Of those using or exploring AI, 59% have accelerated their investments and rollouts over the past two years. The result is an uptick in AI decision-making that leverages intelligent tools to arrive at (supposedly) accurate answers.

Rapid adoption, however, raises a question: Who’s responsible if AI makes a poor choice? Does the fault lie with IT teams? Executives? AI model builders? Device manufacturers?

In this piece, we’ll explore the evolving world of AI and reexamine the quote above in the context of current use cases: Do companies still need a human in the loop, or can AI make the call?

Getting it right: Where AI is improving business outcomes

Guy Pearce, principal consultant at DEGI and member of the ISACA working trends group, has been involved with AI for more than three decades. “First, it was symbolic,” he says, “and now it’s statistical. It’s algorithms and models that allow data processing and improve business performance over time.”

Data from IBM’s recent AI in Action report shows the impact of this shift. Two-thirds of leaders say that AI has driven more than a 25% improvement in revenue growth rates, and 72% say that the C-suite is fully aligned with IT leadership about what comes next on the path to AI maturity.

With confidence in AI growing, enterprises are implementing intelligent tools to improve business outcomes. For example, wealth management firm Consult Venture Partners deployed AIda AI, a conversational digital AI concierge that uses IBM watsonx Assistant technology to answer potential clients’ questions without the need for human agents.

The results speak for themselves: AIda AI answered 92% of queries correctly, 47% of queries led to webinar registrations and 39% of inquiries turned into leads.

Missing the mark: What happens if AI makes mistakes?

A 92% success rate is an impressive achievement for AIda AI. The caveat? It was still wrong 8% of the time. So, what happens when AI makes mistakes?

For Pearce, it depends on the stakes.

He uses the example of a financial firm leveraging AI to evaluate credit scores and issue loans. The outcomes of these decisions are relatively low stakes. In the best-case scenario, AI approves loans that are paid back on time and in full. In the worst case, borrowers default, and companies need to pursue legal action. While inconvenient, the negative outcomes are far outweighed by the potential positives.

“When it comes to high stakes,” says Pearce, “look at the medical industry. Let’s say we use AI to address the problem of wait times. Do we have sufficient data to ensure patients are seen in the right order? What if we get it wrong? The outcome could be death.”

As a result, how AI is used in decision-making depends largely on what it’s deciding and on how those decisions affect both the company making them and the people on the receiving end.

In some cases, even the worst-case scenario is a minor inconvenience. In others, the results could cause significant harm. 


Taking the blame: Who’s accountable if AI gets it wrong?

In April 2024, a Tesla operating in “full self-driving” mode struck and killed a motorcyclist. The driver admitted to looking at their phone before the crash, even though the mode requires active driver supervision.

So who takes the blame? The driver is the obvious choice and was arrested on charges of vehicular homicide.

But this isn’t the only path to accountability. There’s also a case to be made that Tesla bears some responsibility, since the company’s AI failed to spot the motorcyclist. Blame could also be placed on governing bodies such as the National Highway Traffic Safety Administration (NHTSA); perhaps its testing wasn’t rigorous or complete enough.

One could even argue that the creator(s) of Tesla’s AI could be held liable for letting code that could kill someone go live.

This is the paradox of AI decision-making: Is someone at fault, or is everyone at fault? “If you bring all the stakeholders together who should be accountable, where does that accountability lie?” asks Pearce. “With the C-suite? With the whole team? If you have accountability that’s spread over the entire organization, everyone can’t end up in jail. Ultimately, shared accountability often leads to no accountability.”

Drawing the line: Where does AI end?

So, where do organizations draw the line? Where does AI insight give way to human decision-making?

Three considerations are key: ethics, risk and trust.

“When it comes to ethical dilemmas,” says Pearce, “AI can’t do it.” This is because intelligent tools naturally seek the most efficient path, not the most ethical. As a result, any decision involving ethical questions or concerns should include human oversight.

Risk, meanwhile, is an AI specialty. “AI is good in risk,” Pearce says. “What statistical models do is give you something called a standard error, which lets you know if what AI is recommending has a high or low potential variability.” This makes AI great for risk-based decisions like those in finance or insurance.
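Pearce’s point translates directly into code. The sketch below is a minimal, hypothetical illustration in Python (invented data and model, not drawn from any IBM product or from Pearce’s own work): it bootstraps a toy credit-scoring fit so that the spread of the resampled predictions plays the role of the standard error he describes. A small spread signals a stable recommendation; a large one flags a call that may deserve human review.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented historical data: credit scores and whether each loan defaulted.
scores = rng.uniform(500, 800, size=200)
defaulted = (scores < 600 + rng.normal(0, 40, size=200)).astype(float)

def predict_default(x, y, new_score):
    # Simple linear fit of default likelihood against credit score.
    slope, intercept = np.polyfit(x, y, 1)
    return slope * new_score + intercept

# Bootstrap: refit on resampled data to see how much the prediction
# itself varies. The standard deviation of these estimates approximates
# the standard error of the model's recommendation.
preds = []
for _ in range(1000):
    idx = rng.integers(0, len(scores), size=len(scores))
    preds.append(predict_default(scores[idx], defaulted[idx], new_score=640))

preds = np.array(preds)
print(f"Estimated default likelihood: {preds.mean():.3f}")
print(f"Standard error of the estimate: {preds.std(ddof=1):.3f}")
```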

Finally, enterprises need to prioritize trust. “There are declining levels of trust in institutions,” says Pearce. “Many citizens don’t feel confident that the data they share is being used in a trustworthy manner.”

For example, under GDPR, companies need to be transparent about how they collect and handle data and must give citizens the chance to opt out. To bolster trust in AI use, organizations should clearly communicate how and why they’re using AI and, where possible, allow customers and clients to opt out of AI-driven processes.
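In practice, honoring that choice can be as simple as a routing check before any AI system touches a request. The snippet below is a hypothetical sketch; every name in it, from the ai_opt_out flag to the handler functions, is invented for illustration.

```python
def route_query(customer: dict, query: str, answer_with_ai, send_to_human):
    # Customers who opted out of AI-driven processing go straight
    # to a human agent; everyone else is served by the AI first.
    if customer.get("ai_opt_out", False):
        return send_to_human(query)
    return answer_with_ai(query)

# Usage with stand-in handlers:
print(route_query({"ai_opt_out": True}, "What are your fees?",
                  answer_with_ai=lambda q: f"[AI] {q}",
                  send_to_human=lambda q: f"[human agent] {q}"))
```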

Decisions, decisions

Should AI be used for management decisions? Maybe. Will it be used to make some of these decisions? Almost certainly. The draw of AI — its ability to capture, correlate and analyze multiple data sets and deliver new insights — makes it a powerful tool for enterprises to streamline operations and reduce costs.

What’s less clear is how the shift to management-level decision-making will impact accountability. According to Pearce, current conditions create “blurry lines” in this area; legislation hasn’t kept pace with increasing AI usage.

To ensure alignment with ethical principles, reduce the risk of wrong choices and engender stakeholder and customer trust, businesses are best served by keeping humans in the loop. Maybe this means direct approval from staff is required before AI can act. Maybe it means the occasional review and evaluation of AI decision-making outcomes.
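One way to implement that loop, sketched below with hypothetical names and thresholds, is an approval gate: the AI proposes an action, and a policy function decides whether a person must sign off before it runs.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # the model's own confidence, 0 to 1
    high_stakes: bool

def needs_human_approval(d: Decision, confidence_floor: float = 0.9) -> bool:
    # High-stakes calls always go to a person; low-stakes calls
    # escalate only when the model itself is unsure.
    return d.high_stakes or d.confidence < confidence_floor

def execute(d: Decision, approve) -> str:
    if needs_human_approval(d) and not approve(d):
        return f"held for review: {d.action}"
    return f"executed: {d.action}"

# Usage: a loan approval is high stakes, so it waits for sign-off
# no matter how confident the model is.
loan = Decision(action="approve_loan", confidence=0.97, high_stakes=True)
print(execute(loan, approve=lambda d: False))  # -> held for review: approve_loan
```

Whether the gate triggers on stakes, on model confidence or on random sampling for audit is a design choice; the point is that the escalation rule lives in reviewable code rather than inside the model.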

Whatever approach enterprises choose, however, the core message remains the same: When it comes to AI-driven decisions, there’s no hard-and-fast line. It’s a moving target, one defined by possible risk, potential reward and probable outcomes.
