Cyberattacks are on the rise as ransomware continues to plague companies across all industries and malicious actors look to nab bitcoin payouts and steal personal data. The first quarter of 2018 also saw a spike in both distributed denial-of-service (DDoS) attack volume and duration.

But despite the prevalence of these external threats, a February 2018 report found that more than one in four attacks start inside corporate networks. These insider threats can be devastating, especially if employees have privileged accounts, and they may go undetected for months if companies aren’t looking inward.

Enterprises need a new way to break bad behavior, one that takes the guesswork out of identifying accidental (or malicious) employee incidents. With that in mind, artificial intelligence (AI) may offer the next iteration of insider attack security.

Cyberattacks: Insider Threats by the Numbers

According to the report, the number of insider attacks varies significantly by sector. In manufacturing, just 13 percent of threats stem from insiders. In the public sector, 34 percent of all incidents start with authorized users. Health care tops the insider threats list with 56 percent of incidents tied to human error or intentional misuse.

In 17 percent of insider breaches, mistakes — rather than malice — were the underlying cause. Employees might send emails to the wrong recipient, improperly delete classified information or misconfigure privacy settings. While intention matters when it comes to discipline and long-term staffing decisions, it has no bearing on the impact of a data breach. Employees who mistakenly click on malicious links or open infected email attachments can subject organizations to the same types of IT disasters that stem from targeted outsider attacks.

The worst-case scenario when it comes to insider threats, according to ITWeb, is a hybrid attack that involves both internal and external actors. Described as a “toxic cocktail,” this type of incident is incredibly difficult to detect and mitigate.

IT Security: Need for Speed

The Department of Energy saw a 23 percent boost in cybersecurity spending in 2018, while the Nuclear Regulatory Commission received a 33 percent increase, according to GCN. But no matter how much money organizations invest in cybersecurity, humans remain the weak link in the chain. GCN suggests moving IT security “from human to machine speed” to both detect and resolve potential issues.

Insider threats also took center stage at the 2018 RSA Conference. Juniper Networks’ CEO, Rami Rahim, spoke about the “unfair advantage” criminals enjoy on the internet, which eliminates the typical constraints of time, distance and identity.

So, it’s no surprise industry experts like Randy Trzeciak of the CERT Insider Threat Center see a role for AI in defending corporate networks against insider threats. Trzeciak noted in a 2018 RSA Conference interview with BankInfoSecurity that “insiders who defraud organizations exhibit consistent potential risk indicators.”

AI offers a way to detect these potential risk patterns more quickly without the inherent bias of human observers — which is critical given the nature of insider attacks. Since these attacks stem from authorized access, organizations may not realize they’ve been breached until the damage is done.

Teaching AI Technology

AI assisting security professionals makes sense in theory, but what does this look like in practice? According to VentureBeat, training is an essential part of the equation. For cybersecurity controls, this means teaching AI to effectively recognize typical patterns of insider threat behavior. These might include regular file transfers off corporate networks onto physical media or private email accounts — or strange account activity that doesn’t coincide with regular work shifts. Individually, these signs could be outliers. But when detected in concert by AI tools, they’re a cause for concern.
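To make that concrete, here is a minimal sketch of how such pattern recognition might be trained, using scikit-learn’s IsolationForest as one common unsupervised anomaly detector. The features, values and thresholds are illustrative assumptions, not details from the report:

```python
# Minimal sketch: unsupervised detection of anomalous insider activity.
# All feature names and values are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Per user-day features: [MB copied to removable media,
#                         emails sent to personal accounts,
#                         logins outside the user's usual shift]
baseline = rng.normal(loc=[50, 1, 0], scale=[20, 1, 0.5], size=(500, 3))

# Train only on historical baseline activity; no labels are needed.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Signals that are mild in isolation but suspicious in concert:
new_events = np.array([
    [55, 1, 0],    # an ordinary day
    [400, 6, 3],   # bulk copy + personal-email spike + off-hours logins
])
scores = model.decision_function(new_events)  # lower means more anomalous
flags = model.predict(new_events)             # -1 = anomaly, 1 = normal

for event, score, flag in zip(new_events, scores, flags):
    status = "flag for review" if flag == -1 else "ok"
    print(f"features={event} score={score:.3f} -> {status}")
```

Training on an unlabeled baseline matters here: insider incidents are rare, so labeled examples are scarce, and an unsupervised detector can still flag activity that deviates from each population’s norm.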

Also concerning is the double-edged nature of intelligence tools. As noted by Health IT Security, AI could be used to both bolster and undermine health data security. There’s also an emerging category of adversarial AI tools designed to automatically infiltrate networks and custom-design attack vectors that can compromise security.
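That double edge can be made concrete. Continuing the illustrative detector above, a hypothetical attacker who knows roughly what “normal” looks like can spread an exfiltration across many near-normal days — a “low and slow” pattern that may slip under the anomaly threshold. The numbers below are assumptions for demonstration:

```python
# Sketch: "low and slow" evasion of the illustrative detector above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
baseline = rng.normal(loc=[50, 1, 0], scale=[20, 1, 0.5], size=(500, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

bulk = np.array([[400, 6, 3]])      # one-day bulk exfiltration, far outside baseline
slow = np.array([[60, 2, 0]] * 14)  # same goal spread over two weeks of near-normal days

print("bulk transfer flagged: ", (model.predict(bulk) == -1).any())
print("low-and-slow flagged:  ", (model.predict(slow) == -1).any())
```

Defenses against this kind of evasion typically add longer-horizon features, such as cumulative transfer volume per user per month.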

The philosophy of AI development also matters. As shown by recent experiments that released AI-enabled bots into the world of social media, artificial intelligence tools can learn the wrong lessons just as easily as the right ones.

What does this mean for AI as insider defense?

Applied Learning

Insider threats are now a top priority for organizations. Despite good intentions, employees may unwittingly expose critical systems to malware, ransomware or other emerging threats. Given the sheer number of mobile- and cloud-based endpoints, it’s impossible for human security experts to keep pace with both internal and external threats, especially when inside actors may go undetected.

AI offers a way to detect common patterns of compromise and network abuse, restrict access as applicable and report actions taken to IT professionals. The next step toward breaking bad behavior is to implement AI and train it to recognize key patterns, disregard signal noise and accelerate security from human to machine speed.
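What that restrict-and-report step might look like in code is sketched below; the session-revocation and ticketing functions are hypothetical stubs standing in for whatever IAM and SIEM integrations an organization actually runs:

```python
# Sketch: turning anomaly detections into "restrict and report" actions.
# The revocation and ticketing calls are stubs, not real product APIs.

def suspend_privileged_sessions(user_id: str) -> None:
    print(f"[IAM stub] revoking active sessions for {user_id}")

def open_security_ticket(user_id: str, score: float) -> None:
    print(f"[SIEM stub] ticket opened for {user_id} (anomaly score {score:.3f})")

def handle_event(user_id: str, score: float, threshold: float = -0.1) -> None:
    """Restrict access and alert IT only when the score crosses the threshold."""
    if score >= threshold:
        return  # within the normal range; take no action
    suspend_privileged_sessions(user_id)
    open_security_ticket(user_id, score)

handle_event("jdoe", score=-0.42)   # anomalous: restrict first, then report
handle_event("asmith", score=0.05)  # normal: no action, no alert noise
```

Keeping a human in the loop on the reporting side fits the framing above: AI handles detection and containment at machine speed, while IT professionals make the final discipline and staffing calls.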

Learn more about adversarial AI and the IBM Adversarial Robustness Toolbox (ART)
