AI won’t boost cybersecurity if we use it to cut corners.

First published in Cybernews.

The artificial intelligence (AI) cybersecurity market is projected to be worth well over a hundred billion dollars by the end of the decade. But an expert tells Cybernews that the current lovefest with AI is actually dangerous.

This week, a new report by Techopedia said that the global AI cybersecurity market is expected to surpass $133 billion by 2030.

The numbers are huge, but they surprise no one. Fully deployed AI security solutions are cheaper in the long run, and organizations using AI in cybersecurity identify and contain data breaches about 100 days faster than those without these tools.

On the other hand, cybercrime is expected to cost the world’s internet users $9.22 trillion in 2024. By 2028, that figure is projected to reach almost $14 trillion.

That’s exactly the point, cybersecurity expert Patrick Hayes, a former CISO and current chief strategy and product officer for Third Wave Innovations, tells Cybernews in an interview. He says that AI is not a cybersecurity panacea and believes that IT and security professionals should use caution with generative AI.

“I’ve been in this business almost 28 years, and I’ve seen these cycles spin through. Artificial intelligence is actually the most exciting cycle, but it’s also the one we could get wrong the most, quite frankly,” said Hayes.

He explains that attackers have been using AI for years in phishing and vishing attacks that overcome language barriers and successfully reach their intended targets. The bad guys have also successfully trained and corrupted large language models (LLMs) to return false information and to feed misinformation into data sources.

For example, Microsoft said just this week that state-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets.

“We need human critical thinking to use AI to solve and prevent problems,” he says. “We’re adopting AI far faster than we have the ability to understand how to adopt it properly.”

Bad actors many steps ahead

According to Hayes, reports that the AI cybersecurity market will expand massively over the coming years are a bit misleading.

“The investor market has been going quite strong in supporting AI-related technologies and solutions, but the buyer’s market has been lukewarm with the exception of developers, neophytes, or folks that don’t completely understand the application of AI,” Hayes told Cybernews.

“I think that we’re a little early in market adoption to truly understand what the benefit of AI is going to be other than transcribing notes from a Zoom call or prompting you for a news article or your LinkedIn posts.”

Hayes admits that, yes, the use of AI in cybersecurity can be truly beneficial – depending, of course, on how much data you have at your disposal and how you’re utilizing it. The risk is that humans are “inherently lazy” and will stop validating outcomes once they assume those outcomes are valid – “as opposed to validating the source of information.”

But the problem is, he says, that the information source isn’t always valid. It can be manipulated. Hayes stresses that criminals and bad actors have been using AI for a long time and distorting the perception of what is real and what isn’t.

“Phishing emails are starting to get better written. When was the last time you had a terribly written phishing email, right? The grammar was terrible, the translations were terrible, the spelling was off,” says Hayes.

“Criminal actors now actually have AI to use to their advantage, and humans are still the highest attack vector. Nation-states know this, criminals know this, and bad actors know this. They’re targeting folks like you and me when we’re looking at well-written emails, deepfake videos, or fake photos.”

The underground world of AI is also creative. Indeed, you can get a package that allows you to run a totally fake OnlyFans business, generating income from a deepfake, AI-generated OnlyFans persona.

“An OnlyFans virtual model is an AI-generated simulation of a person used to create content for an OnlyFans account. The AI allows the creator to iteratively generate images and videos based on custom prompts and parameters,” explains one website, offering to “add superpowers” to OnlyFans account owners.

More than that, criminals operating on the dark web use malware builders and organize ransomware attacks – all of it driven by AI. As Hayes puts it, you just put in your prompts, and they’ll spit out a ransomware attack on the other side, packaged up with everything you need to execute it.

“We’re still many steps behind the criminals. There’s no motivation for the average person to fully understand how to use AI and how to respond to it, whereas criminals absolutely have it – it’s a financial benefit,” says Hayes.

“If I’m a threat hunter or a security analyst, AI helps me to be faster, it creates more productivity. But the motivation of creating more productivity isn’t the same as a financial gain or an attack on an organization or a successful spread of false information.”

The cybersecurity veteran fears that the industry is going to run right into the fire. That will happen, he says, if organizations keep cutting corners by using AI to take over the work – the “gray matter” – previously done by humans.

Human analysts still outshine machines

Just as, say, Super Bowl commentators can use data created in real time by generative AI tools, bad actors can use AI to gather real-time information about a targeted organization and its vulnerabilities during an ongoing cyberattack.

“Most of these firms don’t change the way they treat incoming attacks based on whether their systems have been patched or not because they’re big and slow. But the attackers can hammer away at these companies in real-time,” says Hayes.

“They’re breaking in, and they’re able to do it unrecognized. So, of course, for an organization, the best way to combat that is to have the same use of generative AI.”

Unfortunately, this is not what vendors are training customers to do, Hayes adds. They should be telling the market that firms need to build reactive programs using generative AI tools in order to understand how criminals would gain access to their systems – but they’re not.

“Organizations don’t think like criminals, right? Attackers will always be ahead of us because we don’t think like attackers, and we don’t use technology like attackers. We get lazy because we use it for productivity, and we don’t use it for offensive outcomes,” explains Hayes.

“Organizations say, wow, the use of generative AI is going to save us so many headcounts this year. That’s the thing I fear the most – that we’re going to forget that humans are the best at detecting abnormal things because they’re humans. We sense when something doesn’t seem right. As far as I know, AI has not developed any emotional equivalent to humans.”

Hayes admits he “fundamentally agrees” with Elon Musk, the billionaire founder of SpaceX and owner of X, formerly known as Twitter: we’re adopting AI far faster than we have the ability to understand how to do it properly.

“I’m not saying we need to stop using AI. But we’re using it to cut corners. I’ve had technologists come to me and say, hey, we don’t even need first-line support anymore. We can just have AI running chatbots and taking phone calls that would query the data and provide responses back to the human on the other side of the phone,” Hayes recalls.

“That may work when you have a problem with your credit card or when you want to book tickets to an event. How will getting a response from an automated bot work when it comes to cybersecurity when we’re already dealing with an industry that’s truly hard to understand for most business and IT users?”

Quite obviously, the human on the other side – when their organization is under attack – will not be calm, cool, and collected. On the contrary, the individual will surely be under great stress.

“The person needs to be able to communicate back to their organization why their company is under attack and what they should do. I’ve gone through multiple incident response scenarios as well as living them live, and the emotional equivalent that an analyst brings to those situations far outshines some of the pure raw data that you bring to those situations as well. Humans are not binary,” says Hayes.

China and Russia hoarding talent

The expert is not brimming with hope that this grave situation will improve – technology companies kicked off 2024 with more layoffs, and people in the cybersecurity industry have suffered as well. Still, he retains some optimism.

“I project a magnificent breach. I certainly hope it’s not going to be a breach of a government organization or anything in critical infrastructure. But I think that’s probably going to drive an outcome that’s different than the path we’re headed down right now,” says Hayes.

What could be the outcome? The expert envisions moderate adoption of AI in combination with individuals who understand how to operate the new tools. Of course, small companies won’t be able to afford the tech – and that, he believes, could prove pivotal.

“A small company is going to be the leaping off point into a large breach, and it’ll get a lot of notoriety. And, heaven forbid, politicians get involved because they’ve proven that they don’t understand technology – regulating AI is not necessarily the solution,” claims Hayes.

“But there might be a moment of pause in this industry just to say, wait a second. If we’re going to do this, we need to do this with the right mix of people and the implementation of this technology.”

For the West, and the United States in particular, the current trajectory is also dangerous because adversarial countries like China and Russia are hoarding talent, funneling thousands of gifted students into government AI programs.

In the US, meanwhile, the tech industry is watching thousands of people being pushed back into the job market as a result of massive layoffs and companies chasing their bottom line – with AI helping that along. Besides, says Hayes, “In America, ambition and laziness are conflicting with each other.”

“I just don’t believe we’re developing those people at the same speed and rate. Those countries just have a far vaster amount of talent that are focused on AI development,” adds Hayes.

“In the US, would you go and work for Google, which pays you $300,000 or $400,000 a year, or would you go and work for the US government, which might pay you $70,000 a year? And in countries like China or Russia, there’s no motivation to go make more money at a private company.”

“In a capitalist market, firms compete for talent and pay them more. But then the company might implement AI and lay off 4,000 people.”
