ChatGPT, AI, and the Future of Cybersecurity

How AI learning potentially impacts IT and cybersecurity


If the artificial intelligence apocalypse comes, it might not be through killer robots from the future. AI is fast becoming a major player in the IT space thanks to its rapidly expanding range of applications. That hasn’t come without controversy, however, as demonstrated by ChatGPT.

Learning machines aren’t particularly new; they’ve advanced over the years through different learning models. What remains controversial in creative media spaces, for instance, is how AI models are trained – on written and visual content from uncredited sources, without the consent of the original creators. But these are not the only use cases for AI. ChatGPT, a free-to-use language-generation chatbot from OpenAI, has been used for a variety of things, and malicious code is one of them.

First, it’s important to note that working malicious code has not been successfully generated with ChatGPT. The chatbot is designed to decline requests to create malicious code. But threat actors are seeking ways to manipulate it into generating code usable for malicious purposes, a practice referred to as prompt engineering: crafting prompts that coax the chatbot into producing output its filters would normally refuse.

Does that mean, right now, hackers are capable of easily engineering dangerous code en masse? No. Could they in the future? Absolutely.

ChatGPT has now entered the cybersecurity arena, where every innovation and advancement is equally usable for good and bad. Because ChatGPT learns, its filters can also learn to detect attempts to generate threat code. In the same vein, threat actors will keep probing ChatGPT for loopholes to abuse. When one loophole is resolved, another opens, as is the nature of cybersecurity.
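To make the idea of layered filtering concrete, here is a minimal Python sketch of what a guardrail can look like on the defender’s side of an AI integration: pre-screening user prompts with OpenAI’s moderation endpoint before forwarding them to a chat model. This is a hypothetical application-level pattern, not a description of ChatGPT’s internal safeguards, and it assumes the official `openai` Python client with an `OPENAI_API_KEY` set in the environment; the model name is likewise an assumption.

```python
# Hypothetical application-level guardrail: screen prompts with OpenAI's
# moderation endpoint before sending them to a chat model. This is NOT
# how ChatGPT's internal filters work; it only sketches the layered idea.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def answer_if_safe(prompt: str) -> str:
    """Refuse flagged prompts; otherwise forward them to the model."""
    moderation = client.moderations.create(input=prompt)
    if moderation.results[0].flagged:
        return "Request declined by the moderation layer."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any available chat model works here
        messages=[
            # A defensive system message is itself just another filter that
            # prompt engineering can probe, hence the extra layer above.
            {"role": "system", "content": "Decline requests for malicious code."},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content
```

Note the caveat in the comments: both the moderation check and the system message are filters an attacker can probe, which is exactly the loophole cycle described above.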

We’re far from seeing mass adoption of ChatGPT by attackers, and farther still from it generating deployable malicious code on demand. But this highlights a growing likelihood that threat actors will fold AI resources into their attack campaigns. Automated attacks are already part of malicious arsenals. Using AI language generation to create – or aid in the creation of – hostile code looks like an incoming reality rather than a speculative one.

However, the development of dangerous code isn’t the sole issue.

ChatGPT for Harassment and Phishing

There’s more than one way to bypass security barriers, and phishing remains one of a hacker’s best tools. Social engineering relies on human error and can upend even the best IT security setups. There’s little to stop a threat actor from manipulating ChatGPT into drafting phishing messages. Since phishing attempts take advantage of trusted communications and contacts, a language-generation tool can “smooth over” the common tells that give phishing away.

A phishing email, for example, only needs to “alert” its recipient and convince them to click a link or visit a domain. In theory, a message generated by ChatGPT could appear convincing enough to deceive anyone not paying close attention.
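To make those “tells” concrete, here is a small, hypothetical Python heuristic that flags two classic phishing signals in an email body: urgency phrasing and link text that names a different host than the link’s real destination. Real mail filters rely on trained models and sender-reputation data; the keyword list and regex below are illustrative assumptions only.

```python
import re
from urllib.parse import urlparse

# Illustrative only: real filters use trained models and reputation data,
# not a hand-picked keyword list.
URGENCY_PHRASES = [
    "verify your account",
    "act now",
    "account suspended",
    "unusual activity",
]

LINK_RE = re.compile(r'<a\s+href="([^"]+)"[^>]*>([^<]+)</a>', re.IGNORECASE)


def phishing_tells(email_html: str) -> list[str]:
    """Return a list of classic phishing tells found in an email body."""
    tells = []
    lowered = email_html.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            tells.append(f"urgency phrase: {phrase!r}")
    # Flag links whose visible text names a different host than the href.
    for href, text in LINK_RE.findall(email_html):
        real_host = urlparse(href).netloc
        if text.strip().startswith("http") and real_host not in text:
            tells.append(f"link text {text!r} hides target host {real_host!r}")
    return tells


if __name__ == "__main__":
    sample = ('Unusual activity detected. Act now: '
              '<a href="http://evil.example.net/login">http://bank.example.com</a>')
    print(phishing_tells(sample))
```

The worry the article raises is the inverse of this sketch: a language model that writes fluent, calm, brand-appropriate copy never trips the urgency heuristic at all, removing exactly the tells defenders have long relied on.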

There is also potential for misinformation. ChatGPT has shown it’s capable of creating templates for social media posts (or similar) that mimic official-sounding sources. It could even imitate trusted social media accounts or news outlets to spread incorrect information. Even if that misinformation is eventually corrected, the short-term damage is easy to achieve. Prompt manipulation only needs input text from the internet: ChatGPT can mimic the style of virtually any sample it’s given, and since there’s no protection or regulatory wall around copy-pasted text, it can emulate writing styles freely. Combined with phishing and the intent to spread misinformation, it’s a deadly combination.

Furthermore, while ChatGPT’s learning model and development aim to mitigate malicious use, it’s the premise itself that is most worrying. ChatGPT is one AI prompt tool; how many will there be in five years? In ten? What we’re looking at is a future of fast, agile, AI-driven attacks, where tools generate the code and messages used to bypass network defenses. Bigger networks and organizations will no doubt have the resources to fend off these threats, but what about SMBs, smaller enterprises, and, most importantly, regular people?

Conclusion

The implications of AI-based technology, text generation, and written prompts paint an uncertain road. All technological advancements have a learning curve, but AI prompt tools are the first of their kind to emulate human behavior – in many cases, very convincingly. The legal questions are complex as well: what limitations are needed to ensure that malicious use of resources like ChatGPT stays limited?

While we’re still far from a web of threatening AI, the danger is real: a nonstop, agile, endless stream of attacks from learning machines using automatically generated code, messages, and phishing scams, directed at an endless supply of targets. Staying ahead of that curve will make all the difference.

Still concerned about IT security? You can always reach out for help. Contact Bytagig today for more information.
