What are the security implications of using ChatGPT?

Countries across the globe are wary of the potential cybersecurity implications of the latest tech phenomenon that is ChatGPT.

In light of recent ChatGPT concerns in the news, Richard Forrest, Legal Director at UK law firm Hayes Connor, has expressed major apprehension that a considerable proportion of the population lacks a proper understanding of how generative AI, such as ChatGPT, operates. This situation, he fears, could lead to the inadvertent disclosure of private information and, therefore, a breach of GDPR.

As such, he urges businesses to implement compliance measures to ensure employees in all sectors, including healthcare and education, remain compliant.

This comes after a recent investigation by Cyberhaven revealed that sensitive data makes up 11% of what employees copy and paste into ChatGPT. In one instance, the investigation described a medical practitioner who entered private patient details into the chatbot, the repercussions of which are still unknown. Forrest says this raises serious GDPR compliance and confidentiality concerns.

Following recent praise for the chatbot’s ability to assist business growth and efficiency, there has been an increase in users across many sectors. However, concerns have arisen after a number of employees were found to be negligently submitting sensitive corporate data to the chatbot, as well as sensitive patient and client information.

As a result of these ongoing privacy fears, several large-scale companies, including JP Morgan, Amazon and Accenture, have since restricted the use of ChatGPT by employees.

Forrest weighs in on the matter: “ChatGPT and other similar Large Language Models (LLMs) are still very much in their infancy. This means we are in uncharted territory in terms of business compliance and regulations surrounding their usage.

“The nature of LLMs, like ChatGPT, has sparked ongoing discussions about the integration and retrieval of data within these systems. If these services do not have appropriate data protection and security measures in place, then sensitive data could become unintentionally compromised.

“The issue at hand is that a significant proportion of the population lacks a clear understanding of how LLMs function, which can result in the inadvertent submission of private information. What’s more, the interfaces themselves may not necessarily be GDPR-compliant. If company or client data becomes compromised due to its usage, current laws are blurred in terms of which party may be liable.

“Businesses that use chatbots like ChatGPT without proper training and caution may unknowingly expose themselves to GDPR data breaches, resulting in significant fines, reputational damage and legal action. As such, usage as a workplace tool without proper training and regulatory measures is ill-advised.

“It is the onus of businesses to take action to ensure regulations are drawn up within their business and to educate employees on how AI chatbots integrate and retrieve data. It is also imperative that the UK engages in discussions for the development of a pro-innovation approach to AI regulation.”

I was intrigued to hear more about what industry professionals in the cybersecurity space thought about this new technology.

Darren Guccione, CEO, Keeper Security

Following a data leak of the ChatGPT app that compromised stored user conversations and payment information, an Italian watchdog immediately – but temporarily – banned the service in its country. The watchdog expressed concerns about whether ChatGPT had the right to store the data and, additionally, what it was doing to protect children from accidentally accessing age-inappropriate results.

A primary security implication of using ChatGPT, other AI chatbots or any application that asks for personal information involves your privacy. The concern is that malicious actors will exploit a vulnerability in the software or gain access to your data in another way. After the ChatGPT breach, the Italian watchdog saw the data that was exposed and worried the company was not doing enough to educate users about what happens with the data they provide. The makers of ChatGPT were given the opportunity to have the ban lifted by including information on their website about how and why contributed data is used, gaining consent to use the data and placing age restrictions on the website.

Users sharing their information online – whether with an AI tool or on any other website – need to take their own precautions to protect their information by considering what they’re sharing and where. Anytime you’re asked for information – whether by a website, a chatbot, friends, family, co-workers or your doctor – consider whether the recipient needs that information, whether this is the safest way to share it and how it will be stored. Just as you wouldn’t hand out your personal information to a stranger over the Internet, you should also consider what information you’re giving to an AI chatbot, especially a nosey one.

As more websites and apps utilise chatbots, there’s also the potential for abuse from bad actors posing as chatbots, who can manipulate you into revealing information you wouldn’t normally share. The more you educate yourself about spotting phishing attempts and malicious websites, the less likely you are to become a victim.

While specific concerns with AI will continue to be addressed as they arise, bad actors will attempt to use chatbots for social engineering and phishing. Some of the most common signs of a phishing email are poor grammar and spelling. By asking AI to write the email, bad actors can not only rectify these mistakes, but also use prompts to make the language more persuasive, as if an actual marketing professional had written it. The same technique can be used when creating written copy or user testimonials on a malicious website to make it appear more realistic.

Pål Aaserudseter, Security Engineer at Check Point Software

In my opinion, blocking services like ChatGPT will have limited impact. Sure, companies, schools and other organisations still block social media platforms like YouTube and Facebook from their offices due to bandwidth limitations and the potential for distractions (among several other things), but this does not limit users at home on their private devices. Even though Italy has blocked ChatGPT, it is easy to circumvent the block using a VPN solution, for instance.

The issue at hand is that most users are not aware of the risk involved in using the likes of ChatGPT. Sure, it’s a great tool, but uploading information about yourself or others may have a huge and negative impact if this information suddenly becomes available to anyone – think medical data and the like. The potential for data misuse is by far one of my greatest concerns regarding ChatGPT – not just for creating havoc, but for the possibility that personal data could be used for targeted marketing purposes (wait, isn’t this already happening?) to an extent that we have never seen before. We’re talking social manipulation on a scale where the buyer finds it natural to spend money on whatever the criminals put in front of them.

The positive effect of this is that hopefully the EU AI Act, which was supposed to be finalised in March this year, will come into effect much faster. As it stands, there are no rules or regulations in place; it is up to the ethical compass of users and developers to use ChatGPT in a responsible manner. We need regulations in place to make sure that OpenAI, Microsoft, Google and others developing AI technologies ensure that access, usage, privacy and data are protected and controlled in a secure fashion.

Julia O’Toole, CEO of MyCena Security Solutions

It’s not surprising that Europol has issued a new report warning organisations and consumers about the risks associated with ChatGPT, as the tool has the potential to completely reform the phishing world, in favour of the bad guys.

When criminals use ChatGPT, there are no language or culture barriers. They can prompt the application to gather information about organisations, the events they take part in and the companies they work with at phenomenal speed. They can then prompt ChatGPT to use this information to write highly credible scam emails. When the target receives an email from their ‘apparent’ bank, CEO or supplier, there are no tell-tale language signs that the email is bogus. The tone, context and reason to carry out the bank transfer give no evidence to suggest the email is a scam. This makes ChatGPT-generated phishing emails very difficult to spot and dangerous.

When it comes to protecting against ChatGPT phishing scams, users must be wary of links received in emails. If an email arrives with a link, never click on the link; as a habit, verify its authenticity first. For example, if your bank calls you asking for personal information, hang up and call the bank back using the phone number found on its website.

Furthermore, never use just one root password, even with variations, such as JohnSmith1, John$mith1!, Johnsmith2, to protect all your online accounts. If one password is phished, criminals can find its variations and access everything. The same threat applies when using password managers. Because all your passwords are saved behind one single password, the risk of losing everything is even higher. If that master password is phished, criminals can open all your accounts at once.

Instead, users should think of passwords the same way as keys to their house, office or car. They don’t need to know the grooves or make their own keys; they just need to find the right key or password to use it. The easiest way is to use tools to generate strong, unique passwords like ‘7D£bShX*#Wbqj-2-CiQS’ or ‘kkQO_5*Qy*h89D@h’, but not to centralise them behind a master key or identity. That way, passwords can be generated, made practically impossible to break and changed at will, without the risk of a single point of failure, so that if one password is phished because of a ChatGPT-generated email, it will only impact one online account.
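To illustrate the kind of password generation O’Toole describes, here is a minimal Python sketch that builds a random, high-entropy password using the standard library’s cryptographically secure secrets module. The 20-character length, the character pool and the account names are illustrative assumptions, not values specified in the article.

```python
import secrets
import string


def generate_password(length: int = 20) -> str:
    """Return a random, high-entropy password.

    Each character is drawn independently from letters, digits and
    punctuation using the cryptographically secure secrets module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


if __name__ == "__main__":
    # Hypothetical example: one unique password per account, never derived
    # from a shared "root" password or stored behind a single master key.
    for account in ("bank", "email", "work-vpn"):
        print(f"{account}: {generate_password()}")
```

Generating a fresh, unrelated password like this for every account means a single phished credential cannot be varied or reused to reach any other service.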
