Today’s security operations centers (SOCs) have to manage data, tools and teams dispersed across the organization, making threat detection and teamwork difficult. Several factors drive this complexity: remote work with coworkers in far-flung locations, the cost and maintenance of legacy tools, migration to the cloud, hybrid environments and the multitude of tools and vendors in use. Taken together, these factors have made the average analyst’s job more difficult than ever. Often, tracking down a single incident requires hours or even days of collecting evidence. That’s where artificial intelligence (AI) in cybersecurity comes in.

Analysts can spend much of their time gathering data, sifting through gigabytes of events and logs to locate the relevant pieces. While they struggle to cope with the sheer volume of alerts, attackers are free to devise ever more inventive ways of conducting attacks and hiding their trails.

What AI in Cybersecurity Can Do

AI makes the SOC more effective by automating manual analysis, evidence gathering and threat intelligence correlation — driving faster, more consistent and more accurate responses.

Some AI models can determine what type of evidence to collect from which data sources. They can also locate the relevant signals among the noise, spot patterns common to many incidents and correlate them with the latest threat intelligence. AI in cybersecurity can then generate a timeline and attack chain for the incident. All of this paves the way for rapid response and remediation.
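As a toy illustration of the correlation step described above, the sketch below groups raw security events by affected host and orders them chronologically into a simple incident timeline. The event data, field names and the `build_timeline` helper are all hypothetical — real SOC tooling would draw on far richer telemetry and models.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw events, as they might arrive from different log sources.
events = [
    {"ts": "2024-05-01T10:04:00", "host": "web-01", "action": "suspicious_login"},
    {"ts": "2024-05-01T10:01:00", "host": "web-01", "action": "port_scan"},
    {"ts": "2024-05-01T10:09:00", "host": "web-01", "action": "privilege_escalation"},
    {"ts": "2024-05-01T10:02:00", "host": "db-02", "action": "failed_login"},
]

def build_timeline(events):
    """Group events per host and sort them chronologically into an attack chain."""
    by_host = defaultdict(list)
    for event in events:
        by_host[event["host"]].append(event)
    return {
        host: sorted(evts, key=lambda e: datetime.fromisoformat(e["ts"]))
        for host, evts in by_host.items()
    }

timeline = build_timeline(events)
for host, chain in timeline.items():
    print(host, "->", [e["action"] for e in chain])
```

Even this trivial grouping turns a jumble of logs into an ordered story per asset — the same idea, at scale and with learned correlation rules, is what lets AI surface an attack chain in minutes rather than days.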

AI security tools are very effective at identifying false positives, since most false positives follow common patterns. X-Force Red Hacking Chief Technology Officer Steve Ocepek reports that his team sees analysts spending up to 30% of their time investigating false positives. If an AI handles those alerts first, humans gain more time, and suffer less alert fatigue, when tackling the most important tasks.
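The idea of filtering out pattern-following false positives before a human ever sees them can be sketched minimally as below. The patterns, alert fields and `triage` helper are invented for illustration; a production system would use learned models rather than two hard-coded regexes.

```python
import re

# Hypothetical patterns that, in this sketch, mark an alert as a likely false positive.
BENIGN_PATTERNS = [
    re.compile(r"scheduled backup", re.I),
    re.compile(r"vulnerability scanner", re.I),
]

def triage(alerts):
    """Split alerts into likely false positives and those needing human review."""
    likely_fp, needs_review = [], []
    for alert in alerts:
        if any(p.search(alert["description"]) for p in BENIGN_PATTERNS):
            likely_fp.append(alert)
        else:
            needs_review.append(alert)
    return likely_fp, needs_review

alerts = [
    {"id": 1, "description": "Port sweep from internal vulnerability scanner"},
    {"id": 2, "description": "Outbound traffic to known C2 domain"},
]
fp, review = triage(alerts)
print("likely false positives:", [a["id"] for a in fp])
print("needs review:", [a["id"] for a in review])
```

The payoff is in the split: the noisy-but-benign alert is set aside automatically, while the genuinely suspicious one goes straight to an analyst.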

The Human Element of AI Security

While the demand for skilled SOC analysts is increasing, it is getting harder for employers to find and retain them. Should you instead aim to completely automate the SOC and not hire people at all?

The answer is no. AI in cybersecurity is here to augment analyst output, not replace it. Forrester analyst Allie Mellen recently shared a great take on this issue.

In “Stop Trying To Take Humans Out Of Security Operations,” Mellen argues that detecting new types of attacks and handling complex incidents require human intelligence: critical and creative thinking and teamwork. Often, talking directly to users, employees and stakeholders can surface insights where data is lacking. When used alongside automation, AI removes the most tedious elements of the job, freeing analysts to think, research and learn — and giving them a chance to keep up with the attackers.

AI helps SOC teams build intelligent workflows, connect and correlate data from different systems, streamline their processes and generate insights they can act on. Effective AI relies on consistent, accurate and streamlined data. The workflows created with the help of AI in turn generate better quality data needed to retrain the models. The SOC teams and AI in cybersecurity grow and improve together as they augment and support each other.

Is it time to put AI to work in your SOC? Ask yourself these questions first.

Register for the webinar: SOAR
