Proofpoint Introduces 6 Responsible AI Principles

Artificial intelligence (AI) has become omnipresent in our lives in just a few years. It powers everything from movie recommendations to self-driving cars. And while AI advancements have enhanced cybersecurity, the rapid development and increased use of AI introduce inherent risks.

AI systems enable services to operate at greater scale and speed and with higher quality. To do so, they must process sensitive data and make decisions rapidly, often without direct human oversight. The crucial question then becomes how to ensure the proper protection, effective governance, and responsible use of AI systems.

These considerations are top of mind for Proofpoint as we develop and deploy AI systems. We want to enhance user protection against cybersecurity risks and mitigate potential AI harms. Thus, we have embraced responsible AI, with an aim to position our company as a leader in this evolving field.

We want to be transparent in our approach and provide a model for others. In this post, we outline our six principles and describe how we uphold them. But first, we’ll provide some background and explain why responsible AI matters.

Background

Proofpoint is committed to preserving our customers’ trust. This is reflected in our six foundational principles for responsible AI.

  1. Accountability
  2. Transparency
  3. Explainability
  4. Privacy and Security
  5. Safety
  6. Fairness

These principles align with our core values and underscore our dedication to maintaining our people-centric focus. To uphold them, we implemented an assessment process to determine the readiness of our AI systems for responsible production deployment.

We have also created an oversight body – the Proofpoint Responsible AI Council (RAIC) – to support our AI teams in adhering to these principles. These initiatives are designed to help ensure that people both inside and outside our organisation are aligned with our responsible approach to AI.

The importance of responsible AI

AI systems can learn and make predictions without human involvement, but they lack human-like judgment, such as critical thinking, ethical understanding, and intuition. These systems can only reflect their architecture and the data they were trained on.

These limitations are one reason why it’s so important to incorporate AI ethics into AI systems. Typically, these ethics encompass the interrelated concepts of responsible and ethical AI:

  • Responsible AI pertains to the design, development, and deployment of AI with appropriate guardrails
  • Ethical AI relates to the societal and moral implications of AI, such as “good” and “bad” use cases for AI systems

With AI responsibility and ethics in mind, we must address flaws in the training datasets that underpin AI systems. Such flaws can lead to the replication or amplification of biases, which in turn can result in collective and individual harm. We have seen the headlines about this side of AI, including Amazon’s sexist recruitment tool, Google’s racist facial recognition software and Facebook’s polarising news bubbles.

As the use of AI systems evolves, so will the associated risks. That is why it is critical to consider the role of AI systems in all stages of product development and deployment.

Our responsible AI principles

Proofpoint used the concepts of responsible AI to produce the six principles outlined below. They are integral to every action that we take related to our use of AI. They uphold our people-centric commitments and company values. And they are intended to resonate with people both inside and outside our company.

We are excited to share these principles with you. We hope that they can inform your efforts to develop a responsible approach to AI systems.

1: Accountability

“We ensure human oversight and responsibility over AI systems.”

What does this entail?

Accountability is the idea that those involved in the development, deployment, and use of AI systems should take responsibility for the outcomes and impacts of those systems. This includes explaining and justifying the key decisions and actions taken during the model’s life cycle.

How do we ensure this?

Through established governance, Proofpoint has enforcement mechanisms to help ensure accountability (e.g., AI system assessments), as well as opportunities for redress in case of negative consequences.

2: Transparency

“We are clear and honest about our use of AI systems.”

What does this entail?

Transparency refers to openness and clarity in the design, development, and deployment of AI systems. That means ensuring that every AI decision-making process is understandable and accessible to the people who are using the systems.

How do we ensure this?

Proofpoint ensures transparency by communicating clearly about our AI systems to relevant stakeholders throughout the systems’ life cycle. This includes clearly explaining our decision-making processes and providing algorithmic details.

3: Explainability

“We aim to explain how and why our AI systems make their decisions.”

What does this entail?

Explainability refers to the ability of AI systems to clearly explain their decision-making processes in a way that’s easily understandable.

How do we ensure this?

Proofpoint works to uphold this principle by making the inner workings of our AI systems as transparent as possible. When required, we provide supporting data that helps users, stakeholders, and regulatory bodies understand the factors that influence a system’s outputs.
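
To make this more concrete, here is a minimal sketch of one basic explainability technique: attributing a linear classifier’s decision to its input features. The feature names and data are hypothetical, and this is not a description of Proofpoint’s actual models:

```python
# Minimal sketch: per-feature contributions for a linear classifier.
# Feature names and data are hypothetical, not Proofpoint's models.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["sender_reputation", "url_count", "attachment_entropy"]

# Toy data standing in for real email telemetry.
X = rng.normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# For a linear model, coefficient * feature value is a direct,
# human-readable contribution of each input to the decision score.
sample = X[0]
contributions = model.coef_[0] * sample
for name, value in sorted(zip(feature_names, contributions),
                          key=lambda kv: -abs(kv[1])):
    print(f"{name:>20}: {value:+.3f}")
print(f"predicted P(threat): {model.predict_proba(sample.reshape(1, -1))[0, 1]:.2f}")
```

For linear models, such contributions are directly readable; more complex models typically require dedicated attribution techniques (for example, SHAP) to produce comparable explanations.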

4: Privacy and security

“We build and deploy AI systems with privacy and security in mind.”

What does this entail?

The privacy and security principle demands that personal data and confidential organisational information are kept safe. It also requires that the design and deployment architecture of AI systems help to protect those systems from cybersecurity threats.

How do we ensure this?

Proofpoint protects privacy and upholds the security of AI systems through an established “privacy by design” programme and an information security programme. Such programmes reflect our company’s commitment to privacy and security across all aspects of the business. (See our privacy and trust commitments.)

5: Safety

“We evaluate the potential risks of the use of AI systems and mitigate them appropriately.”

What does this entail?

Safety refers to designing and deploying AI systems in a way that minimises harm, risks, and unintended consequences.

How do we ensure this?

Proofpoint upholds this principle by implementing rigorous testing and risk assessment protocols. These measures help us to identify and mitigate potential harm associated with AI systems. Our commitment to safety includes deploying AI systems with fail-safes and guardrails that allow for rapid human intervention.
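
To illustrate what a fail-safe with rapid human intervention can look like, here is a minimal sketch of a confidence guardrail that escalates uncertain verdicts to a human reviewer and supports a manual kill switch. The threshold, names, and labels are illustrative assumptions, not Proofpoint’s production logic:

```python
# Minimal sketch: a confidence guardrail with a human-in-the-loop fallback.
# Threshold and names are illustrative, not Proofpoint's production logic.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.90  # below this, a human reviews the verdict


@dataclass
class Verdict:
    label: str         # e.g. "malicious" or "benign"
    confidence: float  # model-reported probability
    needs_review: bool


def apply_guardrail(label: str, confidence: float,
                    kill_switch: bool = False) -> Verdict:
    """Route uncertain verdicts, or all verdicts when the kill switch
    is on, to human review instead of automatic action."""
    escalate = kill_switch or confidence < CONFIDENCE_THRESHOLD
    return Verdict(label, confidence, needs_review=escalate)


print(apply_guardrail("malicious", 0.97))                 # auto-actioned
print(apply_guardrail("malicious", 0.62))                 # escalated
print(apply_guardrail("benign", 0.99, kill_switch=True))  # human override
```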

6: Fairness

“We mitigate unwanted bias in our AI systems to the extent practical.”

What does this entail?

Fairness is a commitment to ensure that AI systems are designed and implemented in a way that avoids unintentional biases and discrimination.

How do we ensure this?

Proofpoint upholds this AI principle by actively addressing and mitigating biases in data and algorithms. We recognise that some biases are inherent in the data because of the nature of the cyber threat landscape. In those instances, we analyse the data and the implications of those biases so that we can mitigate them as much as possible.
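
As a simple illustration of how such biases can be surfaced, the sketch below compares false-positive rates across groups on toy data. The group attribute and the numbers are hypothetical:

```python
# Minimal sketch: comparing false-positive rates across groups to surface
# unwanted bias before mitigation. Groups and data are hypothetical.
import numpy as np


def false_positive_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    negatives = y_true == 0
    return float((y_pred[negatives] == 1).mean())


# Toy labels and predictions, tagged by a hypothetical group attribute
# (e.g., sender language or region in an email corpus).
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 1, 1, 1, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

for g in np.unique(groups):
    mask = groups == g
    fpr = false_positive_rate(y_true[mask], y_pred[mask])
    print(f"group {g}: false-positive rate = {fpr:.2f}")
# A large gap between groups flags a bias worth analysing and mitigating.
```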

Our responsible AI commitments

Proofpoint is committed to applying these responsible AI principles across all our products and AI systems. We have translated the six principles into practical guidelines that shape every stage of product development and deployment, and we have established an oversight committee.

Responsible AI assessment

To help operationalise our AI principles, Proofpoint created a responsible AI assessment and a model card to ensure that all AI systems align with our standards and best practices. The assessment’s questions cover all six principles. They help to foster accountability within development teams and give the oversight committee a comprehensive understanding of our AI systems.

Questions in the assessment include these examples:

  • What impact could false positives and false negatives have on the system’s purpose, its performance, and the targets affected by its analysis?
  • How do you continuously ensure the safety of the AI system in production?
  • To what extent is the data on which the AI system is trained representative of the intended use case?

The responses, along with a subsequent review of the assessment, enable us to identify necessary action items before we deploy an AI system in our products. We link a model card to each AI system to summarise key information and explain the system’s capabilities and limits.
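
For readers unfamiliar with model cards, the sketch below shows one possible structure, loosely following the model card pattern described by Mitchell et al. (2019). The fields and values are illustrative and do not represent an actual Proofpoint model card:

```python
# Minimal sketch: a model card as structured data. Fields and values are
# illustrative, not an actual Proofpoint model card.
import json
from dataclasses import asdict, dataclass, field


@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    training_data: str
    limitations: list[str] = field(default_factory=list)
    fairness_notes: str = ""


card = ModelCard(
    name="example-phish-classifier",
    version="1.0.0",
    intended_use="Flag likely phishing emails for analyst review.",
    training_data="Labelled email corpus; see the assessment for provenance.",
    limitations=["Lower accuracy on under-represented languages."],
    fairness_notes="False-positive-rate gaps across groups reviewed regularly.",
)
print(json.dumps(asdict(card), indent=2))
```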

Proofpoint Responsible AI Council (RAIC)

RAIC is a central, cross-disciplinary committee that fosters and supports a culture of responsible AI at Proofpoint. It provides guidance, support, and oversight regarding our AI systems and their responsible design, development, and deployment.

Through our commitment to our six principles of responsible AI, Proofpoint strives to be an active contributor to shaping a positive future for AI use in the world.

Join the team

At Proofpoint, our people – and the diversity of their lived experiences and backgrounds – are the driving force behind our success. We have a passion for protecting people, data, and brands from today’s advanced threats and compliance risks.

We hire the best people in the business to:

  • Build and enhance our proven security platform
  • Blend innovation and speed in a constantly evolving cloud architecture
  • Analyse new threats and offer deep insight through data-driven intelligence
  • Collaborate with our customers to help solve their toughest cybersecurity challenges

If you’re interested in learning more about career opportunities at Proofpoint, visit the careers page.

About the authors

Othman Benchekroun, Technical Project Manager

Othman Benchekroun is a Technical Project Manager at Proofpoint. In his role, he enables cross-team collaboration and fosters innovation. Othman has a background in AI and machine learning and holds a Master of Science (MSc) in data science from the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland.

Tetiana Kodliuk, AI Engineering

Tetiana Kodliuk leads the AI Engineering team for Information Protection and Cloud Security at Proofpoint. Together with her talented team, she drives innovation and AI strategy across multiple Proofpoint Sigma products – DLP, Data Classification, ITM and CASB – work that is reflected in nine patents, various publications, and participation in major AI conferences. Tetiana is deeply passionate about using AI technology responsibly and driving ethical AI in the cybersecurity industry. She holds a Ph.D. in mathematics from the National Academy of Sciences of Ukraine.

Kimberly Pavelich, PSAT Senior Product Manager

Kimberly Pavelich spearheads the delivery of compelling, threat-driven security awareness and training content as the Proofpoint Security Awareness and Training (PSAT) Senior Product Manager. In crafting her approach to security awareness training, she draws on a wealth of expertise cultivated over a decade of working within Canada’s intelligence sector and various international post-conflict communities. Kimberly holds a master’s degree in strategic studies from the University of Calgary and is a political science Ph.D. candidate at Carleton University.