The use of generative AI (GenAI) has surged over the past year, and the shift in news headlines from 2023 to 2024 is remarkable. Last year, Forbes reported that JPMorgan Chase, Amazon, and several US universities were banning or limiting the use of ChatGPT. What’s more, Amazon and Samsung were reported to have found employees sharing code and other confidential data with OpenAI’s chatbot.
Compare that to headlines in 2024. Now, the focus is on how corporations everywhere are adopting AI assistants. JPMorgan Chase is rolling out ChatGPT to 60,000 employees to help them work more efficiently. And Amazon recently announced that it had saved the equivalent of 4,500 developer years, as well as $260 million, by using GenAI to migrate 30,000 applications onto a new platform.
The 2024 McKinsey Global Survey on AI also shows how much things have changed: 65% of respondents say their organisations now use GenAI regularly, nearly double the share from just 10 months earlier.
This trend makes one thing clear: organisations feel competitive pressure to embrace GenAI or risk falling behind. So, how can they mitigate the risks? That’s what we’re here to discuss.
Generative AI: A new insider risk
As a productivity tool, GenAI opens the door to insider risk from careless, compromised or malicious users.
- Careless insiders. These users may input sensitive data – like customer information, proprietary algorithms or internal strategies – into GenAI tools. Or they may use these tools to create content that does not align with a company’s legal or regulatory standards, such as documents with discriminatory language or images with inappropriate visuals, which creates legal risk. Some users may also turn to GenAI tools that are not authorised, which introduces security vulnerabilities and compliance issues.
- Compromised insiders. Threat actors can compromise accounts that have access to GenAI tools, then use that access to extract, generate or share sensitive data with external parties.
- Malicious insiders. Some insiders actively want to cause harm. They might intentionally leak sensitive information into public GenAI tools. If they have access to proprietary models or datasets, they might use these tools to build competing products. They could also use GenAI to create or alter records, making it difficult for auditors to identify discrepancies or non-compliance.
To mitigate these risks, organisations need a mix of human-centric technical controls, internal policies and strategies. They need to monitor AI usage and data access, and they need supporting measures – like employee training – underpinned by a solid ethical framework.
Human-centric security for GenAI
Safe adoption of this technology is top of mind for most CISOs. Proofpoint offers an adaptive, human-centric information protection solution that can help. It gives you visibility into and control over GenAI use in your organisation, and that visibility extends across endpoints, the cloud and the web. Here’s how:
Gain visibility into shadow GenAI tools (a minimal sketch of the idea follows this list):
- Track the use of over 600 GenAI sites by user, group or department
- Monitor GenAI app usage with context based on user risk
- Identify third-party AI app authorisations connected to your identity store
- Receive alerts when corporate credentials are used for GenAI services
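None of the sketches in this post use Proofpoint’s actual interfaces. As a rough illustration of what shadow-GenAI discovery involves, the minimal Python sketch below scans a hypothetical web proxy log export for known GenAI domains and summarises usage by user and department. The file name, column names and domain list are all assumptions for the example.

```python
# Illustrative sketch only: summarise GenAI site usage from a proxy log export.
# Assumes a CSV with columns: timestamp, user, department, domain (hypothetical).
import csv
from collections import Counter, defaultdict

# Small sample of GenAI domains; a real catalogue tracks hundreds of sites.
GENAI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def summarise_genai_usage(log_path: str):
    """Count GenAI site visits per user and per department."""
    by_user = Counter()
    by_department = defaultdict(Counter)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in GENAI_DOMAINS:
                by_user[row["user"]] += 1
                by_department[row["department"]][domain] += 1
    return by_user, by_department

if __name__ == "__main__":
    users, departments = summarise_genai_usage("proxy_log.csv")
    for user, visits in users.most_common(10):
        print(f"{user}: {visits} GenAI site visits")
```

Aggregating by department as well as by user makes it easier to spot teams where unsanctioned GenAI use is concentrated.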
Enforce acceptable use policies for GenAI tools and prevent data loss (see the sketch after this list):
- Block web uploads and the pasting of sensitive data to GenAI sites
- Prevent typing of sensitive data into tools like ChatGPT, Gemini, Claude, Copilot, and more
- Revoke access authorisations for third-party GenAI apps
- Monitor the use of Copilot for Microsoft 365 and alert when sensitive files are accessed via emails, files, and Teams messages
- Apply Microsoft Information Protection (MIP) labels to sensitive files, ensuring Copilot uses the same labels for new content generated from these files
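To make the upload- and paste-blocking idea concrete, here is a minimal, hypothetical sketch of the kind of check an endpoint DLP agent might run before content reaches a GenAI site. The patterns, domain list and function names are assumptions for illustration, not Proofpoint’s implementation; a real agent hooks browser or operating-system paste events rather than being called directly like this.

```python
# Illustrative sketch only: decide whether a paste into a GenAI site should be blocked.
import re

# Hypothetical sensitive-data detectors; real DLP classifiers are far richer.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

GENAI_DOMAINS = {"chatgpt.com", "gemini.google.com", "claude.ai", "copilot.microsoft.com"}

def should_block_paste(destination_domain: str, text: str) -> tuple[bool, list[str]]:
    """Return (block?, names of matched patterns) for a paste into a GenAI site."""
    if destination_domain.lower() not in GENAI_DOMAINS:
        return False, []
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    return bool(matches), matches

# Example: pasting an API key into chatgpt.com is blocked and the reason recorded.
blocked, reasons = should_block_paste("chatgpt.com", "our key is sk-abcdefghijklmnopqrstuvwx")
print(blocked, reasons)  # True ['api_key']
```

The same check generalises to web uploads and typed input; the decision is made at the point where data would leave the organisation’s control.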
Monitor for insider threats with dynamic GenAI policies (illustrated below):
- Capture metadata and screen captures before and after users access GenAI tools
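The “dynamic” part is that the depth of monitoring adapts to user risk. The sketch below illustrates that idea only; the risk scores, threshold and capture labels are invented for the example and do not describe Proofpoint’s policy engine.

```python
# Illustrative sketch only: scale evidence capture with user risk.
from dataclasses import dataclass

@dataclass
class GenAIAccessEvent:
    user: str
    risk_score: int  # hypothetical 0-100 score built from prior alerts
    domain: str

SCREENSHOT_THRESHOLD = 70  # assumed cutoff: high-risk users get screen capture

def handle_access(event: GenAIAccessEvent) -> dict:
    """Decide what to record before and after a GenAI tool is used."""
    record = {"user": event.user, "domain": event.domain, "capture": ["metadata"]}
    if event.risk_score >= SCREENSHOT_THRESHOLD:
        # For high-risk users, also capture screens around the GenAI session.
        record["capture"].append("screenshots_before_and_after")
    return record

print(handle_access(GenAIAccessEvent("jdoe", 85, "claude.ai")))
# {'user': 'jdoe', 'domain': 'claude.ai', 'capture': ['metadata', 'screenshots_before_and_after']}
```

Capturing richer evidence only for high-risk users keeps storage and privacy impact proportionate to the threat.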
Train employees on acceptable use of GenAI tools:
- Educate users on the safe use of GenAI with videos, posters, interactive modules, and newsletters
- Automate customised training for your highest-risk users
Additionally, you can draw on Proofpoint managed services expertise to optimise your information protection programme for GenAI adoption and apply best practices.
Learn more
Reach out to your account team to learn how Proofpoint can help you implement acceptable use policies for GenAI tools and test drive our solution.
To see a demo of our information protection solution for GenAI, watch our on-demand webinar, “Top Data Loss Use Cases on Endpoints”.