How these emerging AI trends impact cybersecurity

The advancing scope of AI and additional challenges it creates


AI and autonomous learning machines are playing a growing role in cybersecurity. At the same time, AI is presenting a new set of problems the world will have to reckon with in the coming years. We’ve already discussed the use of language models like ChatGPT in a malware context. However, other developments in AI will also revolutionize how threat actors carry out phishing and misinformation attacks.

Not too long ago, AI models trained on images of people could generate fake photos (as with This Person Does Not Exist), or likenesses eerily similar to the original subject. Today, AI has entered a new phase. Both visual and audio models are vastly more capable than ever, and that capability presents a cybersecurity conundrum.

We’re still at square one

Before diving into trending AI technology and learning models, it’s important to take stock of the current state of cybersecurity. As it stands, the picture is not pretty. Outside of organizations with enough capital, resources, and staff to mitigate malware campaigns, the average person is left to fend for themselves in a breach scenario. That hasn’t changed much since the inception of phishing and scam emails.

Phishing remains the most powerful tool in the hacker’s toolbox because it exploits human error. Deception via falsified messages, links, emails, and even voice calls is all part of the same phishing web. If you can trick the recipient, you can deliver a malware payload, embed it into networks, or steal credentials for future attacks. Though phishing has been around for decades, it’s still a problem.
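To make the link-based deception concrete, here is a minimal Python sketch of one classic phishing tell: a link whose visible text names one site while the underlying href points somewhere else entirely. The domains below are hypothetical and the heuristic is deliberately simplified; a real email filter would weigh far more signals.

```python
# Minimal sketch: flag links whose visible text names a different
# host than the actual href target (a common phishing pattern).
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkMismatchChecker(HTMLParser):
    """Collects <a> tags whose visible text claims a different host than the href."""

    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag currently open, if any
        self._text = []        # visible text collected inside that tag
        self.suspicious = []   # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            # If the visible text looks like a URL, its host should match the href's.
            if text.startswith("http") and (
                urlparse(text).netloc.lower() != urlparse(self._href).netloc.lower()
            ):
                self.suspicious.append((text, self._href))
            self._href = None


# Hypothetical example: the text claims a bank's site, the href goes elsewhere.
checker = LinkMismatchChecker()
checker.feed('<a href="http://login-update.example/account">https://www.yourbank.com</a>')
print(checker.suspicious)
# [('https://www.yourbank.com', 'http://login-update.example/account')]
```

Simple checks like this catch only the crudest lures, which is exactly why social engineering remains so effective: the attack targets the reader, not the parser.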

Thus, phishing remains the most effective and popular way for attackers to breach systems. Social engineering and manipulation circumvent even the best defenses, and any tool that empowers phishing is a dangerous one. Enter the current state of AI and its developing trends.

Voice and Text Models

We’ve touched on some of the dangers presented by language models like ChatGPT, namely that they can be prompted to develop malicious code. While the model is designed to reject malicious prompts, there are workarounds. And even if, by the time of this article’s publishing, ChatGPT can completely reject all prompts for intrusive code, the premise is the same: AI models can theoretically produce rapid-fire code to be used for malicious purposes.

Even if this generated code or malicious message isn’t immediately effective, it furthers the idea of an autonomous web of attackers. Overwhelming cybersecurity defenses with a deluge of attacks is already a strategy used by some malicious third parties. Using AI models to constantly build and rebuild malicious code is a method current cybersecurity will struggle to defend against, as the sketch below illustrates.
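To illustrate why constantly regenerated code strains signature-based defenses, here is a minimal Python sketch using two hypothetical, functionally identical snippets: a fingerprint taken from one variant never matches the next, so hash-based detection has to start over with every rebuild.

```python
# Minimal sketch: two snippets that behave identically produce
# completely unrelated hashes, so a signature of one never matches the other.
import hashlib

# Hypothetical toy snippets (both print 45), standing in for
# machine-regenerated variants of the same logic.
variant_a = "total = 0\nfor n in range(10):\n    total += n\nprint(total)\n"
variant_b = "print(sum(range(10)))\n"

sig_a = hashlib.sha256(variant_a.encode()).hexdigest()
sig_b = hashlib.sha256(variant_b.encode()).hexdigest()

print(sig_a == sig_b)  # False: same behavior, unrelated signatures
```

This is why defenders increasingly lean on behavioral detection rather than static fingerprints alone; a model that rewrites its output on every pass sidesteps the fingerprint entirely.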

But it’s the social engineering aspect that has the greatest potential to spread misinformation and deceive recipients, even when the red flags are flying.

Voice is now a real concern. While AI-generated voice, such as text-to-speech, has existed for years, the audio clearly signaled a non-human speaker. Now, AI models trained on thousands of hours of audio from virtually any available source can reproduce not just a near-identical sound, but also the cadence and rhythm of the original speaker. The concept of the deepfake isn’t new; the technology has existed for several years, and to the observant eye there are obvious tells. But with advancements in how models replicate and mimic a voice, fakes are becoming much harder to identify.

This opens a Pandora’s box of misinformation, falsified news, and cybersecurity concerns. Imagine, for example, an important political figure or business CEO “giving a statement” via deepfake footage combined with an AI voice that mimics them perfectly, or closely enough that tens of thousands of people would believe it. Corrections would follow once the falsified source was discovered, but the immediate damage would already be done.

Combined with paid tools like ElevenLabs, Midjourney, and DALL-E, the potential for deadly misinformation is boundless. If checks and balances are not introduced, we’re entering an entirely new era of misleading information.

Given how much people already struggle to identify and counter phishing scams, the deception that AI-generated images and audio make possible will have seriously damaging impacts.

The future

As it stands, current AI models are trained on sources with limited vetting, meaning they work with a near-limitless supply of information. Without examining the resulting threats and how they will impact people, industries, and infrastructure, the picture is a dangerous one indeed.

AI tools do have a place in the modern world and can help protect people and information. But as with every new technological development, they can also be exploited for malicious purposes.

Getting ahead of the game is more important than ever. For IT services, contact Bytagig today.
