April 24, 2018 By Kelly Ryver 3 min read

Humanity has been fascinated with artificial intelligence (AI) for the better part of a century — from Aldous Huxley’s “Brave New World” and Gene Roddenberry’s “Star Trek” to the “Matrix” trilogy and the most recent season of “The X-Files.”

AI-based algorithms, specifically machine learning algorithms, enable news-curating apps such as Flipboard to deliver content that matches users’ individual tastes. Reuters uses AI to review social media posts, news stories and readers’ habits to generate opinion editorials. The city of Atlanta is installing smart traffic lights based on AI algorithms to help alleviate traffic congestion. AI is also being used to control street lights, automate certain elements of office buildings, automate customer service chat features and perform concierge services at hotels and offices around the world.

Japan has been experimenting with AI and robots since the late 1960s, due largely to the country’s exceptionally long life expectancy and low birth rate. In fact, Japanese researchers have been working on artificially intelligent robots that mimic human thought and behavior so closely that they can serve as companions and assistants for the elderly and infirm. These machines are also being put to work in a wide variety of industries.

What else could AI be used for? It could, in theory, learn to write its own code, construct its own algorithms, correct its own mathematical proofs and write better programs than its human designers. At the 2017 Black Hat conference, during a 25-minute briefing titled “Bot vs. Bot for Evading Machine Learning Malware Detection,” the presenter demonstrated how an AI agent can compete against a malware detector by proactively probing it for blind spots that can be exploited. Reduced to its simplest form, this approach to building better malware detection engines is game theory: a two-player game between machines in which the attacker probes and the defender adapts.
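To make that two-player game concrete, here is a minimal sketch of the bot-vs.-bot idea. It is a toy illustration, not the briefing’s actual code: the “detector” is a simple logistic regression, and the binary file features and the “payload” constraint are invented for this example. The attacker is a greedy black-box agent that queries the detector and keeps only the mutations that lower the detection score.

```python
# Toy bot-vs.-bot sketch: an attacker agent probes a malware detector
# for blind spots. Feature semantics are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_FEATURES = 20  # e.g., imported APIs, section flags, packer markers

# Synthetic training data: malware tends to set the first 10 features.
X_mal = (rng.random((500, N_FEATURES)) < 0.7).astype(float)
X_mal[:, 10:] = rng.random((500, 10)) < 0.2
X_ben = (rng.random((500, N_FEATURES)) < 0.2).astype(float)
X = np.vstack([X_mal, X_ben])
y = np.array([1] * 500 + [0] * 500)

detector = LogisticRegression(max_iter=1000).fit(X, y)

def attacker_evade(sample, budget=50):
    """Greedy black-box agent: query the detector and keep any random
    single-feature flip that lowers P(malware). Features 0-4 are
    treated as required for the payload and are never touched."""
    best = sample.copy()
    best_score = detector.predict_proba(best[None])[0, 1]
    for _ in range(budget):
        candidate = best.copy()
        i = rng.integers(5, N_FEATURES)  # mutable features only
        candidate[i] = 1.0 - candidate[i]
        score = detector.predict_proba(candidate[None])[0, 1]
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

original = X_mal[0]
evaded, score = attacker_evade(original)
print("detector score before:", detector.predict_proba(original[None])[0, 1])
print("detector score after :", score)
```

The defender’s counter-move in this game is to retrain on the evasive samples the agent discovers, which is precisely the loop that produces a more robust detection engine.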

The Current State of AI: All Science, No Philosophy

There are dozens of examples of beneficial uses for AI in the world today, and possibly hundreds more to come. But what happens when AI demonstrates capabilities that it is not supposed to have?

A major technology company proved through a chatbot experiment that AI could be taught to recognize human speech patterns on a social media site and, after learning those natural language patterns, develop patterns of its own. This was a forward-thinking and quite exciting experiment, but it proved that humans are extremely poor teachers and that the social world is an even poorer classroom. It was not long before the bot began picking up speech patterns based on the sentiments that were most prevalent on social media: hatred, envy, jealousy, rage and so forth. The experiment was ultimately canceled.

The unexpected result of this novel experiment is something everyone working in the field of AI and machine learning should have paid attention to — because it will happen again, albeit in a different fashion. AI and machine learning are so new that very few people truly understand the technology, and even fewer understand it well enough to work with it every day or employ it to find creative solutions to difficult problems.

As both AI and machine learning grow in popularity, the number of AI-based products is growing in parallel. Today, AI is used as a marketing tactic everywhere: It is integrated into everything, leveraged to gain competitive advantage over imaginary competitors, mentioned in every ad for commercial off-the-shelf (COTS) security products, sold as a magic bullet for every defensive security problem and employed with impunity without an ounce of philosophical oversight.

AI Versus AI: A Malware Arms Race

It is only a matter of time before threat actors of all calibers employ AI to break down defensive barriers faster than any security product or antivirus detection engine can stop them, let alone a team of humans accustomed to reacting to threats after the fact. AI-based malware could wreak havoc on an unprecedented scale in many different areas and sectors, including:

  • National power grids and modernized industrial control systems;
  • Aerospace and defense;
  • Nuclear programs, particularly those that utilize war game scenarios;
  • Satellite and telecommunications networks;
  • Commodities, foreign exchanges and futures trading; and
  • Artificial neural networks, especially when used within large constructs such as the Internet of Things (IoT) and autonomous vehicles.

If companies around the world are interested in employing AI and machine learning to solve difficult problems, they must incorporate both offensive and defensive capabilities into the technology. It is also necessary to test any artificially intelligent product by pitting it against both humans and other machines, which some imaginative companies are already beginning to do.

Product development and solution design teams should consider moving away from a purely defensive security posture to one that is more offensive in nature. An artificially intelligent system will likely only get one chance to defend itself against an aggressive and more advanced adversary.
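One concrete way to pit machine against machine is to run evasion attacks against your own model before an adversary does. The sketch below uses the open-source Adversarial Robustness Toolbox (ART) linked at the end of this article; it assumes a recent ART release (pip install adversarial-robustness-toolbox), and the random forest “detector” and synthetic data are stand-ins for illustration. Parameter names can differ between ART versions, so treat this as a starting point rather than a recipe.

```python
# Hedged sketch: decision-based evasion testing with ART.
# The model and data are stand-ins, not a real malware detector.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# Train a stand-in "detector" on synthetic data scaled to [0, 1].
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Wrap the model so ART can attack it as a black box.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# HopSkipJump is a decision-based attack: it only needs the model's
# predicted labels, mirroring an attacker probing a deployed product.
attack = HopSkipJump(classifier=classifier, targeted=False,
                     max_iter=10, max_eval=1000, init_eval=10)
x_adv = attack.generate(x=X[:5])

print("original predictions:   ", model.predict(X[:5]))
print("adversarial predictions:", model.predict(x_adv))
```

Any sample whose prediction flips is a blind spot the product shipped with; folding such samples back into training, or gating releases on this kind of red-team pass, is one practical way to build the offensive posture described above into the development cycle.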

Visit the Adversarial Robustness Toolbox and contribute to IBM’s ongoing research into adversarial AI attacks.
