Proofpoint's Commitment to Responsible Development
and Use of AI and ML

At Proofpoint, our core mission is the protection of people and the defense of data. We take this responsibility seriously and extend it to the responsible use of Artificial Intelligence (AI) and Machine Learning (ML).

In line with our commitment to customer data security, we adhere to the following principles:

Secure Development and Use of AI and ML

  • Our framework for the development and use of AI and ML models includes comprehensive guidelines and policies that detail the inputs, components, design, and intended use of our models, ensuring our framework is understandable and transparent.
  • Proofpoint has a Responsible AI Council that oversees our efforts across the company. The Responsible AI Council convenes regularly and brings together representatives of our core research, policy, and engineering teams dedicated to responsible AI, including legal, engineering, security, and product management, as well as the executive business partners who are accountable for implementation.
  • Security approval is required for the deployment of new products and features that include AI functionality.

Model Usage & Processing

  • Proofpoint’s federated approach to AI utilizes various models and techniques.
  • Proofpoint AI model training data is curated and audited to avoid bias and to maximize model efficacy.
  • Proofpoint AI model training data is safeguarded against adversarial or user-driven manipulation.
  • Proofpoint’s AI models are developed with appropriate guardrails and safety measures to protect against the injection of potentially malicious content. Additionally, Proofpoint’s AI models do not have a general prompt interface, further minimizing the potential for an adversary to input malicious prompts (see the illustrative sketch after this list).
  • Proofpoint does not disclose the AI models used in our cyber security and protection services. These details are trade secrets, and maintaining their trade secret status is vital for Proofpoint’s ability to innovate and continue to detect and stop the threats that target our customers.
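
For illustration only, the sketch below shows what a task-specific model interface without a general prompt surface might look like: inputs are validated, typed, and bounded before inference, and outputs are constrained to a closed label set. All names here (EmailFeatures, classify_email, model.score) are hypothetical assumptions for this sketch, not Proofpoint APIs.

```python
# Hypothetical sketch only: EmailFeatures, classify_email, and model.score
# are illustrative names, not Proofpoint APIs.
from dataclasses import dataclass

MAX_BODY_BYTES = 1_000_000  # bound input size before inference

@dataclass(frozen=True)
class EmailFeatures:
    """Structured input: the model never receives free-form prompts."""
    sender_domain: str
    subject: str
    body_text: str

def classify_email(model, features: EmailFeatures) -> str:
    # Guardrail 1: validate and bound the input before it reaches the model.
    if len(features.body_text.encode("utf-8")) > MAX_BODY_BYTES:
        raise ValueError("input exceeds size limit")
    # Guardrail 2: the model is invoked with fixed, typed fields only,
    # leaving no channel for free-form adversarial instructions.
    score = model.score(
        sender_domain=features.sender_domain,
        subject=features.subject,
        body_text=features.body_text,
    )
    # Guardrail 3: output is constrained to a closed label set.
    return "malicious" if score >= 0.5 else "benign"
```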

AI Model Data

  • AI model data (training data, production data, prompts and model output) is confidential.
  • AI model training and production data are stored separately from each other.
  • Any AI models created are treated with the same protection and confidentiality as the data they were derived from.
  • Access to Proofpoint AI models and data is limited to the Proofpoint personnel and services that require it.

Customer Data

  • Proofpoint’s access to customer data and content is role-based and restricted under the principle of least privilege, in accordance with Proofpoint’s access control policies and standards (a minimal sketch of this access model follows this list).
  • Proofpoint systems and AI models do not send customer data outside of Proofpoint’s secure computing environment.
  • Proofpoint’s AI models only process data relevant to the particular service.
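
For illustration, the following minimal sketch shows a deny-by-default, role-based permission check of the kind described above. The roles and permissions shown are assumptions for this sketch, not Proofpoint’s actual policy.

```python
# Hypothetical sketch of deny-by-default, role-based access control.
# Roles and permissions are illustrative assumptions only.
ROLE_PERMISSIONS = {
    "detection-service": {"read:production-data"},
    "ml-engineer": {"read:training-data", "write:model-artifacts"},
    "support-analyst": set(),  # no default access to customer content
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the permission is explicitly assigned to the role."""
    return permission in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("ml-engineer", "read:training-data")
assert not is_allowed("support-analyst", "read:production-data")
```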

Outputs of Proofpoint AI Models

  • Proofpoint’s systems and AI models are designed to keep each customer’s data segregated: each customer can see only its own data and no other customer’s data.
  • Outputs of Proofpoint AI models avoid the disclosure of customers’ confidential information and customers’ intellectual property.
  • Proofpoint monitors model performance, efficacy, and efficiency on an ongoing basis and retrains models on pre-existing data as a guardrail so that AI models do not generate unintended outputs (see the sketch after this list).
  • Product dashboards and logs include forensic outputs, the results of our detection stack, providing enhanced transparency into how Proofpoint’s systems and AI models arrived at a particular detection decision.
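
For illustration, the sketch below shows one way an efficacy-monitoring guardrail could flag a model for retraining against held-out, pre-existing labeled data. The threshold and function names are assumptions for this sketch, not Proofpoint production values.

```python
# Hypothetical sketch of an efficacy-monitoring guardrail: score the model
# against held-out, pre-existing labeled data and flag it for retraining
# when efficacy falls below a floor. Threshold and names are assumptions.
from typing import Callable, Sequence, Tuple

EFFICACY_FLOOR = 0.95  # assumed minimum acceptable accuracy

def needs_retraining(
    predict: Callable[[str], str],
    holdout: Sequence[Tuple[str, str]],  # (sample, expected_label) pairs
) -> bool:
    correct = sum(1 for sample, label in holdout if predict(sample) == label)
    return correct / len(holdout) < EFFICACY_FLOOR
```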

Security and Privacy

  • Security and privacy are at the core of Proofpoint’s services, and this includes our AI models. Proofpoint’s services and AI models undergo rigorous security testing, with stringent physical, electronic, and procedural safeguards to ensure system integrity.
  • Proofpoint’s AI model development life cycle includes controls for vulnerable dependencies and supply chain attacks.
  • AI models hosted by Proofpoint undergo security reviews that assess threats specific to AI models.
  • Access to Proofpoint’s AI models and data is logged and audited.
  • Proofpoint’s security incident response plan includes AI models.

We understand the importance of trust in the digital age and assure you that Proofpoint is dedicated to upholding the highest standards of data security and model integrity. Thank you for your continued trust in us.

Legislative Landscape: The Artificial Intelligence Act

In March 2024, the European Parliament adopted the Artificial Intelligence Act (the "AI Act"), the first comprehensive legal framework regulating AI. The AI Act regulates AI technologies according to a four-tier risk scale ("unacceptable," "high," "limited," and "minimal") based on the potential risk of harm and the risk to fundamental rights.

The requirements range from increased transparency obligations for limited-risk applications, to new requirements for data quality, documentation, and oversight for high-risk systems, to an outright ban on systems considered unacceptably risky. The AI Act applies to any AI system used, or providing outputs, within the EU. According to the European Commission, the vast majority of AI systems are minimal risk, and consequently there are no restrictions on minimal-risk AI systems.

Proofpoint’s use of AI falls under minimal risk.

© 2024. All rights reserved. The content on this site is intended for informational purposes only.
Last updated May 30, 2024.