In our last blog about the risks of generative AI, we discussed this form of artificial intelligence (AI) and how it’s changing the way we work. In this blog, we’ll explore how Proofpoint can help you protect your data in generative AI tools like ChatGPT.
Generative AI is revolutionising how we work. At Proofpoint, there’s a lot of discussion about the future state of generative AI apps like ChatGPT. The use of AI and machine learning (ML) is not new at Proofpoint. These capabilities are built into our core platforms to protect people and defend data. With generative AI, or gen AI, we see multiple opportunities. Not only can gen AI drive even greater insights and efficiency for your teams, but it also presents Proofpoint with an opportunity to scale our products and drive the business forward.
But there are also risks to using these advanced tools. The approach to public gen AI tools varies by company and can range from acceptable use policies to mitigating controls.
Here are some questions we hear from our customers, and how Proofpoint can help:
1. Who is using ChatGPT in my company?
Proofpoint helps you understand who is accessing ChatGPT and how often—whether through proxy traffic or at the endpoint. ChatGPT is one of hundreds of AI apps on the market. Thanks to the built-in gen AI app category Proofpoint created, you can use our platform to efficiently gain visibility into this category of shadow IT and apply controls to more than 600 URLs. You can do this by user, group or department.
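To make the idea of visibility concrete, here is a minimal sketch of how gen AI app usage could be surfaced from proxy logs. This is an illustration only, not Proofpoint's implementation: the domain list stands in for the vendor-maintained gen AI app category, and the log format is invented for the example.

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list standing in for a vendor-maintained gen AI app category
# (in practice such a category covers hundreds of URLs).
GEN_AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "bard.google.com", "claude.ai"}

def gen_ai_usage(proxy_log):
    """Count gen AI site visits per user from (user, url) proxy log entries."""
    usage = Counter()
    for user, url in proxy_log:
        host = urlparse(url).hostname or ""
        if host in GEN_AI_DOMAINS:
            usage[user] += 1
    return usage

log = [
    ("alice", "https://chat.openai.com/c/123"),
    ("alice", "https://example.com/news"),
    ("bob", "https://claude.ai/chats"),
]
print(gen_ai_usage(log))  # Counter({'alice': 1, 'bob': 1})
```

A real deployment would group these counts by department or user group, as described above, rather than by individual log lines.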
2. How can I block access to ChatGPT?
With Proofpoint, you can block users from accessing specific sites such as ChatGPT, or the entire gen AI app category. People-centric policies allow you to apply dynamic access controls based on user risk. That means you can dynamically block access for a user based on their:
- Vulnerability (such as failing security awareness training or exhibiting careless behaviour, like clicking on phishing links)
- Privilege (access to sensitive data)
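The people-centric policy logic above can be sketched as a simple decision rule. This is a hypothetical illustration of combining vulnerability and privilege signals, not Proofpoint's actual policy engine; the field names and the rule itself are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class UserRisk:
    failed_awareness_training: bool   # vulnerability signal
    clicked_phishing_link: bool       # vulnerability signal
    has_sensitive_data_access: bool   # privilege signal

def allow_gen_ai_access(risk: UserRisk) -> bool:
    """Hypothetical rule: deny gen AI access only when a user is
    both vulnerable (careless behaviour) and privileged (sensitive access)."""
    vulnerable = risk.failed_awareness_training or risk.clicked_phishing_link
    privileged = risk.has_sensitive_data_access
    return not (vulnerable and privileged)

# A privileged but careful user keeps access; a careless privileged user does not.
print(allow_gen_ai_access(UserRisk(False, False, True)))  # True
print(allow_gen_ai_access(UserRisk(True, False, True)))   # False
```

The point of the sketch is that the decision is per-user and dynamic: as a user's risk signals change, the same policy yields a different outcome.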
3. How can I allow users to use ChatGPT while also preventing the loss of sensitive data?
There are very real concerns about the risk of data loss related to generative AI. Users can copy and paste almost six pages of 12-point font into the ChatGPT prompt. Prompt splitters can split even larger pieces of text into separate chunks.
Another way to mitigate risk is to limit copy/paste into the ChatGPT prompt. You can block pasting entirely or cap the number of pasted characters allowed. This mitigates critical data leak scenarios, such as users submitting large amounts of confidential source code into the chat to optimise it or pasting full meeting transcripts to summarise them.
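The character-cap idea can be sketched in a few lines. This is an illustrative check, not Proofpoint's endpoint control; the threshold and the notification text are assumptions for the example.

```python
MAX_PASTE_CHARS = 500  # hypothetical policy threshold

def check_paste(text: str):
    """Return (allowed, message). The message would drive a real-time
    pop-up explaining why the paste was blocked."""
    if len(text) > MAX_PASTE_CHARS:
        return (False, f"Paste blocked: {len(text)} characters exceeds the "
                       f"{MAX_PASTE_CHARS}-character limit. See the acceptable use policy.")
    return (True, "")

allowed, message = check_paste("x" * 600)
print(allowed)  # False
```

Pairing the block with an explanatory message, as here, mirrors the real-time user notifications described in the next paragraph.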
You can also block all files or files with sensitive data from being uploaded into ChatGPT via browser extensions for analysis. Real-time notifications (pop-ups) to the user can let them know why their action was blocked. You can also include links to company policy on acceptable use.
4. How can I allow users to use ChatGPT while also monitoring its use?
Some businesses don’t want to limit ChatGPT or generative AI website use. But monitoring for safe use, or capturing the context of use for investigations, is important. As users interact with the ChatGPT prompt, you can capture metadata and screenshots for visibility. Layer in visibility into the files in use and the source of those files, and you quickly gain a picture of the potential risks.
You can also set up alerts on visits to gen AI sites to signal that further investigation is needed. Alerts can be triggered for any or a subset of your users, such as high-risk users.
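The alerting approach above could be sketched as a simple filter over visit events. This is a hypothetical illustration, not Proofpoint's alerting engine; the watchlist, event tuple, and alert fields are invented for the example.

```python
def gen_ai_alerts(visits, watchlist=None):
    """Yield alert records for gen AI site visits.
    If a watchlist (e.g. high-risk users) is given, alert only on those users."""
    for user, site, timestamp in visits:
        if watchlist is None or user in watchlist:
            yield {"user": user, "site": site, "time": timestamp, "action": "review"}

visits = [
    ("bob", "chatgpt.com", "2024-01-01T10:00"),
    ("carol", "claude.ai", "2024-01-01T11:00"),
]
for alert in gen_ai_alerts(visits, watchlist={"bob"}):
    print(alert["user"], alert["site"])  # bob chatgpt.com
```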
5. Where do I start? And how can I educate users?
You can start your journey by first communicating an acceptable use policy with employees. An explicit policy enables employee awareness and fosters accountability. Here are some steps to consider:
- Publish a list of pre-approved use cases that specifies which types of data inputs are acceptable for public versus company-hosted (or third-party hosted internal) generative AI tools. For example, call out that only public data can be input in public tools, while customer data is out of bounds.
- Make sure users review and revise the output from generative AI tools rather than simply copying and pasting it.
- Create a process to review and approve new generative AI tools and use cases.
- Inform users that you reserve the right to monitor and record the use of these tools if that’s part of your plan.
Proofpoint can help you educate users on the safe use of generative AI with our security awareness kit and training module. With Proofpoint, you get tailored cybersecurity education online that’s targeted to the vulnerabilities, roles and competencies of your users. We provide education in bite-sized chunks, so it creates sustainable habits. And you get all the metrics that your CISO needs.
Learn more
We invite you to reach out to your account team to find out more about how Proofpoint can help you safeguard your data in generative AI tools like ChatGPT. To learn more about defending against data loss and insider threats, check out our Getting Started with Information Protection webinar series. You can also learn about a Proofpoint solution that can help by downloading the Proofpoint Sigma information protection platform brief.
If you want to learn about AI and ML (AI/ML) in information security, join our Cybersecurity Leadership certification programme. This programme will teach you about the risks posed by AI/ML to cybersecurity as well as best practices for adopting AI/ML to protect your people and data more effectively.