Human Vigilance Is Required Amid AI-Generated Cybersecurity Threats

Originally posted on Security Boulevard

From casual users to international corporations, people are flocking to artificial intelligence tools to boost their productivity. But they’re not the only ones leaning on AI to make their day-to-day easier. Cyber attackers are also deploying AI to fine-tune phishing emails, hunt for vulnerabilities in a target’s security systems and unleash attacks in real time.

And fraudsters often have the upper hand. That’s because cyber attackers are chasing a lure far more potent than fixing grammar in an email or streamlining daily processes: they’re using AI to reap massive financial rewards.

For the rest of us, it’s easy to get sidetracked by the frills and features of AI, and too many organizations are deploying it with little understanding of how it works or what its repercussions may be. And while we let our guard down, hackers are using the same technology to churn out increasingly sophisticated attacks.

While many organizations are adopting AI at an alarming pace to gain efficiencies and lower operating costs through technology and headcount reductions, they may also be sacrificing their security. With staff reductions becoming commonplace, those left behind struggle to keep up with the latest threats and properly maintain systems. That gap gives bad actors a window to jump through and launch a cyberattack. Now, and into the future, caution, along with the proper cybersecurity tools and strategies, is more important than ever.

Alarms Raised

With AI, cyberattacks are increasingly successful and lucrative. Cybercrime is forecast to cost the world a whopping $9.5 trillion this year, according to Cybersecurity Ventures. And those damages are expected to grow by 15% per year, reaching $10.5 trillion annually by 2025.

According to a 2023 cybersecurity report from Sapio Research and Deep Instinct, 75% of senior cybersecurity professionals surveyed had witnessed an increase in attacks over the past 12 months, and 85% of those attributed the increase to generative AI.

Attackers have long used AI to refine their work: adjusting writing style, tone and translations, producing malicious code and training large language models (LLMs) on disinformation. They are now packaging AI into attacks-as-a-service on the dark web, lowering the barrier to entry for criminals looking to exploit people and organizations for profit. Cybersecurity experts have been raising alarms for some time now.

In March 2023, a report from Europol, the European Union’s law enforcement agency, said that with ChatGPT, it is now possible for attackers to “impersonate an organization or individual in a highly realistic manner even with only a basic grasp of the English language.” What’s more, by using LLMs, online fraud can be “created faster, much more authentically and at a significantly increased scale,” the report says.

And in February 2024, Microsoft detailed how attackers from China, Iran and Russia were using its AI tools to write code that evades detection or to probe potential victims’ technology for vulnerabilities.

In most attacks, cybercriminals are still taking advantage of human error. They’re betting on us clicking a bad link, handing sensitive information to a spear-phishing campaign or failing to update software with the latest patches.

AI tools certainly can prevent some attacks. IBM’s Cost of a Data Breach 2023 global survey found that security AI and automation shortened the time to identify and contain a breach by 100 days, on average.

But as threats escalate, many organizations are taking their eyes off cybersecurity in favor of other AI uses, even replacing humans with AI tools and bots. When it comes to cybersecurity, however, the move to AI must be paired with strong, proactive defense strategies. Immediately removing humans from the equation is not the answer.

Human Problem, Human Solution

What I know from nearly 30 years in cybersecurity is that bad actors, nation-states and other criminals are always a step ahead. Rapidly replacing humans with AI tools and bots will only make organizations more vulnerable to cyberattacks.

To truly secure an organization’s data, intellectual property and other sensitive information, humans must be part of the equation for the foreseeable future. We have something that AI tools don’t: empathy, intuition and critical-thinking skills. And when we sense that something is wrong and an attack is underway, we can be relentless in pursuing a resolution, especially when working in collaboration with other humans.

Early Days for AI

Leaders and technologists have rightfully hailed AI as transformative, and it’s already reshaping how organizations operate. But these are still early days for the technology. Don’t get distracted by its bells and whistles.

These escalating threats make it vital for businesses to move forward with AI cautiously. Organizations must ensure that the right human resources remain in place, and that they’re using AI properly, whether for cybersecurity or other activities.

Bottom line, AI can’t be our only defense against AI-generated threats. Humans flagging a potential breach and working with other humans to address it remains a critical piece of the puzzle. I fear that a massive breach is on the horizon if we don’t move forward with smart, human-powered strategies. The best defense still requires people, process and technology.
