The Cybersecurity Implications of ChatGPT (AI Chat)

ChatGPT has been everywhere since its release in November 2022. The AI chatbot has made headlines around the world for its ability to respond to questions in a human-like manner. You can ask the bot about the weather, converse with it in other languages and even have it write an essay for you.

As a company offering cybersecurity in Miami, we have also noticed the bot's more nefarious uses, including its ability to assist with writing code, drafting emails and more.

In fact, there are some cybersecurity implications of ChatGPT that you need to know about.

Security Implications of ChatGPT

Malware Creation

Creating malware has long required coding skills. However, ChatGPT can write malicious code for malware that can do a few things:

  • Listen for user input
  • Send the captured data to a specific URL

For example, a script can be used to “listen” for credit card information and, when that information is entered into a form, send it to a URL controlled by a hacker. Because this code can be generated by AI, anyone can create malicious code with just a few prompts sent to the chatbot.

Hackers can also, in theory, have the bot create other forms of malicious software, or at least components of it. For example, new viruses could be built with even more advanced features written by AI.

Additionally, it may become possible to create ransomware and other cybersecurity threats without knowing how to code at all.

There will certainly be errors and limitations in the output that require manual tweaks, but some of the code the AI produces works flawlessly with little editing required. Cybersecurity professionals will need stricter controls and greater vigilance against the rise in threats over the coming months.

However, the threat of an AI bot of this magnitude goes well beyond just creating malicious software.

Well-written Phishing Emails

Nearly 25% of all data breaches involve phishing emails and stolen credentials. Thankfully, many phishing emails are poorly written, which makes it easier to identify them as fakes.

ChatGPT can be used to write phishing emails that are:

  • Well-written
  • In the proper tone

We do want to mention that the bot isn't designed for tasks such as writing phishing emails. If you ask it directly to write one of these emails, it will tell you that it cannot create harmful or malicious content.

However, small tweaks to your request can bypass these restrictions.

Due to the high volume of phishing emails and their success in helping hackers infiltrate email accounts and even entire businesses, we expect a major uptick in attacks powered by ChatGPT's AI.

In the future, we hope the developers of ChatGPT will add much tighter restrictions so that the bot won't write phishing emails or produce malicious code.