Generative AI tools like ChatGPT are transforming business operations. But can you trust AI with your sensitive information?
AI is built on data. Models like ChatGPT absorb, process, and remember the data you share. Prompts and user inputs are often stored, temporarily or permanently, so they can be used to enhance the tools. This can lead to privacy risks.
Samsung’s ChatGPT data breach
Companies using generative AI have leaked proprietary data without knowing it. For example, in 2023, Samsung banned its employees from using ChatGPT after a leak of their product source code.
AI privacy concerns in business: Addressing the risks
According to Cisco, 64% of cybersecurity professionals are concerned about AI tools leaking data.
If your organization handles personally identifiable information (PII) about your customers or partners, you need to address this risk. Staying safe requires a combination of policies, procedures, training, and technology.
AI prompt injection attacks
Generative AI can also be used in cyber attacks. Cybercriminals are using AI tools to create more convincing phishing messages, while deepfake technology is enabling sinister new types of scams.
In prompt injection attacks, cybercriminals craft questions that trick AI into revealing sensitive information. The OWASP Foundation lists prompt injection among its top 10 risks for large language model applications.
Generative AI privacy: ChatGPT no retention mode and more
OpenAI and other providers of generative AI tools are aware of these risks. Some tools include features and settings that may help to protect your privacy. For example:
-
ChatGPT no retention mode: You can opt out of allowing your data to be used to train and enhance the tools.
-
Data anonymization: OpenAI claims that user data is stripped of identifiers and regular privacy audits are performed.
-
Compliance updates: Companies update their privacy policies to align with regulations like GDPR or CCPA.
However:
-
It’s unclear how long conversations are stored, particularly for business accounts.
-
Businesses still lack tools for auditing, monitoring, or deleting large volumes of chatbot interactions.
-
AI providers face a constant trade-off between making their models smarter and protecting their users’ privacy.
In summary, using generative AI tools such as ChatGPT can put your privacy at risk, especially when sensitive credentials or PII are involved.
How to protect your privacy when using ChatGPT or other generative AI tools
Avoid AI credential leakage: Never share sensitive data
When you write prompts, never input passwords, social security numbers, financial details, or any other sensitive information.
Make this a policy in your organization and provide regular training so that everyone is aware.
Enable privacy features
Whenever possible, use generative AI tools in no training or incognito mode.
Limit access to generative AI tools for roles with access to high-value information.
Use secure endpoints and verify business accounts
Always prefer business-licensed versions of AI tools with clearly documented privacy protections.
Deploy B2B credential and PII monitoring solutions
If your data does fall into the wrong hands, proactive monitoring can help you avert disaster. Credential and PII monitoring platforms such as Cybercheck can:
-
Continuously scan the web and dark web for your company’s leaked data.
-
Instantly alert you if users’ login details or other sensitive information are found in risky places such as the dark web, including after potential AI leaks.
Privacy-first AI is the future of cybersecurity
Generative AI might help to create a better world, but its benefits come with risks. To innovate and remain safe, your organization needs a proactive security strategy that encompasses user education, usage policies, and credential monitoring.