The rapid evolution of AI-based images presents a serious cybersecurity threat

AI-generated content and growing concerns in the IT and cybersecurity sectors: what to know


It seems like only yesterday that randomly generated AI images were a distant dot on the horizon of potential cybersecurity problems. Well, the future is here, and it’s scarier than ever.

First, let’s establish some context. The first quarter of 2023 (and much of 2022) saw the rapid development of AI image generators (DALL-E, Stable Diffusion, DeepAI) – essentially glorified digital blenders – which compiled images sourced from the internet to generate visual content based on a prompt. So, if you typed the prompt “man working at a computer,” the AI platform would create that image to the best of its abilities, drawing on learned models and similar media to make a “unique” representation of the prompt. That’s a simplification of the process, but it’s the overall idea.
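To make that flow concrete, here is a minimal sketch of prompt-to-image generation using the open-source diffusers library. The checkpoint name and settings are illustrative assumptions, not the internals of any particular commercial platform:

```python
# A minimal sketch of prompt-to-image generation with the open-source
# diffusers library. The checkpoint and settings are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # a GPU is effectively required for usable speed

# The same kind of prompt described above
image = pipe("man working at a computer").images[0]
image.save("man_at_computer.png")
```

Under the hood, the pipeline iteratively denoises random pixels toward an image that matches the text prompt, which is why outputs are “unique” yet stylistically derivative of the training data.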

Since this evolution, AI has found itself at the forefront of expanding technology, for better or worse. There’s a serious discussion to be had about the ethics of sourcing images from digital artists (or other media formats) without consent, but that’s a discussion for a different article. Unfortunately, the same technology promises a bleak future for the cybersecurity industry.

Social engineering at its most dangerous

The problem is that AI image generators have become convincingly good. Early models and iterations typically had obvious tells. Images of people, for example, came out as warped impressions of the subject, with anatomical problems and visual discrepancies – so even at a glance, it was easy to realize an image was AI-generated. Even when it wasn’t obvious, a closer inspection would catch those discrepancies.

Now, however, AI image generation has reached the point where a prompt can produce a near-photorealistic result. No doubt, months from when this article is published, it will be even better. So why does this create a serious cybersecurity dilemma? Social engineering comes to mind.

If you don’t see the massive security dilemma a prompt-based image generator poses, consider a reasonable hypothetical. From duping companies to planting falsified news or media, the dangers are boundless.

An image can be generated and tagged with fake news or misinformation on popular social media platforms. Anything from politics to an emergency event can be paired with an AI-generated image to quickly dupe readers into believing whatever the headline says. The record is (usually) corrected, but how much damage is done in the short term? Social engineering is already a serious fault line in the cybersecurity realm, with advanced techniques constantly used to dupe recipients. With near-flawless images, the problem expands.

There are no immediate, reliable checks to identify whether an image is AI-generated. From fake social media accounts used for phishing campaigns to falsified news and political espionage, prompt-generated images at the fingertips of malicious parties are a volatile prospect.
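To illustrate how thin today’s checks are, here is a rough heuristic sketch – not a reliable detector – that scans an image’s metadata for traces of known generator software. The tag list is an illustrative assumption, and such tags are trivially stripped or forged:

```python
# A rough heuristic, NOT a reliable detector: scan an image's metadata
# for traces of known generator software. Many AI images carry no such
# tags, and tags are trivially stripped or forged.
from PIL import Image
from PIL.ExifTags import TAGS

GENERATOR_HINTS = {"dall-e", "stable diffusion", "midjourney", "deepai"}  # illustrative

def hints_at_ai_generation(path: str) -> bool:
    img = Image.open(path)
    # Format-level info (e.g., PNG text chunks) plus EXIF fields
    fields = [str(v) for v in img.info.values()]
    for tag_id, value in img.getexif().items():
        fields.append(f"{TAGS.get(tag_id, tag_id)}={value}")
    blob = " ".join(fields).lower()
    return any(hint in blob for hint in GENERATOR_HINTS)

if hints_at_ai_generation("incoming.png"):
    print("Metadata hints at AI generation; verify before trusting.")
```

The weakness of this check is precisely the point: metadata is optional and attacker-controlled, which is why provenance standards and human verification still matter.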

Prompt engineering combined with AI

The problems mentioned coincide with the advancement of other AI tools such as ChatGPT, which malicious actors prompt to automatically generate code for threat campaigns. While filters typically catch such attempts, it underscores a reality: hackers and threat actors have these AI tools at their disposal. Even with filters and defense mechanisms in place to thwart code generation, there will be a tipping point where hackers have the means to easily create code for attacks (or otherwise).

Combine this with AI image generation, and you can see how the issue worsens. Even with hypothetical methods to check and block these developments, the speed and versatility of hacker tools present a serious challenge. As with everything in cybersecurity, any technological advance can be used for malicious purposes too.

AI image generators also allow hackers to develop fraud campaigns, from misleading “advertisements” to falsified charities. Any visual resource that lets malicious actors deceive readers with artificial images can and will be used. There’s no sign of development slowing on these AI-based programs either, visual or otherwise, so the problem is fast approaching reality for all cybersecurity and IT sectors.

What can we do?

Until there are direct restrictions on what AI models can source (and we are long past those days), identifying AI-generated content will grow increasingly difficult.

However, when considering business IT and networks, a “zero trust” policy is still an effective way to avoid AI-based security mishaps. Zero trust means “never trust, always verify”: if content appears suspicious, it should be ignored until properly verified. Additionally, awareness training on the potential hazards of AI-generated images is worth considering, and something CISOs and cybersecurity staff should keep front of mind.
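As a toy illustration of that default-deny stance (the sender allowlist and decision strings here are hypothetical, not any product’s API):

```python
# A toy "never trust, always verify" gate for inbound content. The
# allowlist and names below are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class InboundItem:
    sender: str
    subject: str
    attachments: list = field(default_factory=list)

TRUSTED_SENDERS = {"alerts@corp.example.com"}  # hypothetical allowlist

def triage(item: InboundItem) -> str:
    # Default stance: untrusted until explicitly verified.
    if item.sender not in TRUSTED_SENDERS:
        return "quarantine: sender not verified"
    if item.attachments:
        # Even verified senders' images wait for provenance/human review.
        return "hold: attachments pending verification"
    return "deliver"

print(triage(InboundItem("ceo@lookalike-corp.com", "Urgent wire transfer", ["invoice.png"])))
# -> quarantine: sender not verified
```

Real deployments layer sender authentication (SPF/DKIM/DMARC), content provenance checks, and human review on top of this default.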

Other considerations

The volume of cyber threats we encounter grows daily, and unfortunately, AI has introduced a brand-new conundrum. However, that does not mean you’re on your own. Expert MSP resources can help protect your business data while keeping you up to date with the latest IT and cybersecurity changes.

For more information, you can contact Bytagig today.
