With the recent rise of AI programs, and advancements that seem to be coming thick and fast, the question of whether AI can be harmful to your business crops up again and again.
As security teams grapple with the new threats facing their companies, programs like ChatGPT can present them with challenges they may never have been confronted with before.
ChatGPT is used by employees around the world. It gives almost-instant answers to their questions, breaks topics down, and presents information in different formats for their benefit.
Its AI functionality allows it to write emails, suggest fresh ideas, create strategy documents and more, based on the information provided to it.
While it’s caused controversy around data protection in Italy (which has temporarily banned it) and plagiarism concerns, particularly in academic contexts, ChatGPT has become a useful tool for teams who might sometimes need an extra pair of hands.
The risk comes predominantly from the information fed into it. Employees inputting sensitive data into ChatGPT may not be thinking of the consequences when they’re looking for a quick fix to a nagging problem.
According to recent research, sensitive data makes up 11% of what employees put into the system. That could include company financial data, personally identifiable information (PII), and protected health information (PHI).
Copying and pasting sensitive company documents into ChatGPT has quickly become a bad habit among employees who aren’t thinking through the potential confidentiality issues and GDPR risks.
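To make the problem concrete, the sketch below shows the kind of lightweight guardrail a security team might run over a draft prompt before it leaves the company. The function name, the patterns, and the warning wording are all illustrative assumptions rather than a recommendation of any specific tool; a production deployment would lean on a proper DLP solution with far broader coverage.

```python
import re

# Illustrative patterns only -- a real DLP policy would cover far more
# data types. PATTERNS and check_for_sensitive_data are hypothetical names.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "UK National Insurance number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def check_for_sensitive_data(prompt: str) -> list[str]:
    """Return a warning for each pattern that looks like PII in the prompt."""
    return [
        f"Possible {label} detected - remove it before submitting."
        for label, pattern in PATTERNS.items()
        if pattern.search(prompt)
    ]

if __name__ == "__main__":
    draft = "Summarise this invoice for jane.doe@example.com, card 4111 1111 1111 1111"
    for warning in check_for_sensitive_data(draft):
        print(warning)
```

Even a simple check like this makes the point to employees: if a script can spot the PII in a prompt, so can anyone else who ends up with access to it.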
It’s not entirely clear how confidential information is handled on ChatGPT’s side. However, OpenAI’s FAQs state: ‘we are not able to delete specific prompts from your history. Please don’t share any sensitive information in your conversations.’
Whether ChatGPT itself poses a security risk right now remains to be seen, but there are always dangers associated with sharing sensitive data in a non-secure environment. You’ll always run the risk of data breaches and leaks, reputational damage, and financial losses.
There could well be a security risk in the future, too. The National Cyber Security Centre has said that AI and large language models (LLMs) could eventually help write malware, and that their natural language capabilities could also help hackers produce realistic phishing attacks.
For businesses, the amount of sensitive data that’s being put into ChatGPT could cause problems if hackers managed to infiltrate ChatGPT itself.
Chief Technology Officer at Metomic, Ben Van Enckevort, says, "Whilst AI progress (ChatGPT et al.) is an extremely exciting advancement in tech, managers should be aware of how their teams are using them - particularly with regard to the data that's been shared with these services.
It's another factor security teams will need to take into consideration when they're thinking about their data security strategy. The rapid pace of change also means security professionals will need to be on the ball when it comes to keeping up with the latest threats."
It's important to remember that ChatGPT and other AI tools are currently hosted by third parties. In the future, companies may create their own AI systems, but for now, any AI tool should be treated the same way any other third party would be.
Educating your employees on the risks associated with ChatGPT is the best thing you can do to prevent sensitive information from being shared with AI programs.
While some companies, like JPMorgan Chase and Amazon, have reportedly banned employees from using ChatGPT and similar programs, you don’t need to go to those lengths to ensure your teams use them responsibly.
Make sure everyone is aware that ChatGPT isn’t a secure environment and that, despite its conversational nature, it shouldn’t be trusted with company secrets or customer information.
Setting up regular training sessions to update staff on the latest developments in AI is also a good idea. Your employees are one of your best defences against cyberattacks, so building up your human firewall is essential.