A Guide for Insider Risk Teams: 10 Tips for Monitoring User Activity While Protecting Privacy

As security teams, we often face a tough dilemma: how can we monitor users for risky activity without compromising their privacy? It’s a delicate balance, and there’s a fine line between ensuring security and respecting the confidentiality of sensitive employee data. However, achieving this balance isn’t only possible, it’s essential. It must be part of any insider risk programme for that programme to be both trustworthy and effective.

In honour of Data Privacy Week, in this blog post I’ll walk you through 10 best practices to help you build a robust insider risk programme that meets both your data privacy and security needs.

1: Involve privacy and legal stakeholders early

From the very beginning, you should invite the right people to the table. During the programme’s design phase, reach out to privacy counsel or works councils to get them involved. This will ensure that important privacy aspects are addressed right from the start. These groups can offer invaluable insights into the ethical and legal considerations that must be taken into account.

Once involved, keep these stakeholders close throughout the journey. Regular updates about the programme’s goals, scope and processes will help foster trust between security teams and privacy advocates. In fact, by demonstrating that privacy has been top of mind all along, the insider risk team can become a privacy advocate in its own right.

Example

If you’re rolling out an insider risk management programme, involve your privacy officer in the planning stages. This will ensure your programme is compliant with GDPR and other data protection regulations. When privacy concerns are addressed proactively, personally identifiable information (PII) won’t be used in ways that could lead to violations.

2: Define programme scope and reporting thresholds

One of the most critical components of any insider risk programme is clear boundaries. Define what constitutes risky activity, which is typically aligned with existing conduct, compliance or security policies. It is also important to clearly define which behaviours map to a given risk level, as well as the point at which risky behaviour warrants a deeper inquiry or investigation. Not only does this help reduce the likelihood of overreach, but it also ensures monitoring stays proportionate to the risk. Keep in mind that although organisational policies are defined and shared broadly, thresholds and detection capabilities should only be shared with those with a need to know.

Example

Let’s say your programme detects users downloading large amounts of sensitive data. Set a threshold that only triggers an alert when someone downloads far more than their typical number of files. Other thresholds might apply when a user is considered a flight risk or is circumventing a security control. This keeps the scope narrow and greatly reduces the chance that an analyst will review innocuous behaviour. There’s never a complete guarantee that this won’t happen in the world of risk mitigation. However, that is why implementing the next eight best practices is so critical.
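To make the per-user baseline idea concrete, here is a minimal sketch in Python. It is purely illustrative, not a description of any product’s detection logic; the function name, the three-sigma rule and the minimum floor are all assumptions your own programme would tune with stakeholders:

```python
from statistics import mean, stdev

def exceeds_baseline(user_history, todays_count, sigma=3.0, min_floor=25):
    """Alert only when today's download count is well above the user's
    own historical baseline (mean + sigma * standard deviation).
    A minimum floor avoids alerting on trivially small volumes."""
    baseline = mean(user_history)
    spread = stdev(user_history) if len(user_history) > 1 else 0.0
    threshold = max(baseline + sigma * spread, min_floor)
    return todays_count > threshold

# A user who normally downloads around 10 files a day:
history = [8, 12, 9, 11, 10, 9, 13]
exceeds_baseline(history, 14)   # -> False: within normal variation
exceeds_baseline(history, 400)  # -> True: far above baseline, alert
```

Because the threshold is relative to each user’s own behaviour rather than a single global number, analysts spend far less time on routine activity.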

3: Be transparent, but guide the message thoughtfully

In many organisations, the insider risk programme is often shrouded in mystery. Unfortunately, this can breed rumours and distrust. Avoid this issue by communicating transparently and proactively where you can. Doing so sends a clear message that the programme aligns with your organisation’s goals and its core values.

It’s also crucial to share stories about your programme’s positive impact. And make sure to remind everyone about privacy mechanisms that are in place as well as the overall purpose of your programme. Transparency helps demystify the process and reassures employees that their privacy is being respected.

While transparency is important, so is discretion. Details about triggered alerts and investigations shouldn’t be shared beyond designated groups. This will ensure your programme isn’t undermined and prevent people from circumventing controls.

Example

When your programme starts to show positive results, share a case study internally with any sensitive details redacted. Include details about how the programme helped detect a potential security breach before it escalated, and discuss the likely financial or reputational damage that was avoided. This will help highlight the value of the programme and build employee trust. After all, apart from the occasional malicious insider, most employees dedicate years of hard work to their companies and want to protect them.

4: Implement clear sharing policies

When it comes to details about your insider risk programme, you should have clear policies about what’s acceptable to share. Ensure your information sharing policies consider:

  • Training and awareness initiatives
  • Programme scope and purpose
  • Which technology is used to support the programme
  • Any results of your analyses
  • Details on detection and thresholds, the subject(s) of inquiries and bystanders
  • The outcome of formal investigations

It’s extremely important not to over- or under-classify data. Doing so can shroud your programme in mystery, which can lead to misunderstandings and mistrust. It can also create inefficiencies and prevent important data sharing with other key partners, which helps mitigate risk from various angles.

The goal is to strike a balance that bolsters the purpose of the programme, protects the reputation of those under inquiry, avoids compromising investigations and fosters the right information sharing with the right people at the right time. By carefully defining who has access to specific data and enforcing those policies, you reduce the risk of data leaks or misuse.

Example

If your organisation uses a case management system to store confidential investigation data, make sure the right people have access to it, so they can fulfil their role in the investigation with a complete picture. At the same time, avoid giving everyone on those teams access to all cases. Another example involves sharing benchmarks or metrics in larger organisational meetings: ensure that confidential details about triggers aren’t exposed to other teams, where an active threat could learn how to evade detection.

5: Use technology that meets privacy standards

Your technology stack plays a huge role in safeguarding privacy. So, make sure your tools align with your organisation’s privacy requirements. Here are some capabilities to consider:

  • User anonymisation
  • Pseudonymisation
  • Visual capture using window-only focus
  • Encryption
  • Customisable role-based access controls
  • Enhanced monitoring based on high-risk technical behaviours

Your technology should dynamically help you detect risky activity. At the same time, it should give you enough flexibility so that you can adjust the depth of data that’s collected for investigations. What’s more, it should also streamline how data is secured and stored. This includes enabling you to store it in certain geographical regions based on your organisation’s privacy standards.

Example

If your team uses a security monitoring tool that logs user activity, make sure it has a built-in feature to anonymise the user data it collects. In other words, it shouldn’t store the full name or employee ID of a user who triggers an alert. Instead, it should log a unique anonymised identifier that allows analysts to investigate the behaviour without exposing sensitive personal information.
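One common way to produce such an anonymised identifier is a keyed hash (HMAC) of the employee ID. The sketch below is a hypothetical illustration of the concept, not any vendor’s implementation; the key handling and token format are assumptions:

```python
import hmac
import hashlib

def pseudonymise(employee_id: str, secret_key: bytes) -> str:
    """Derive a stable pseudonymous identifier from an employee ID.
    The same input always yields the same token, so analysts can
    correlate alerts over time, but the real identity is only
    recoverable by whoever holds the key (e.g. the programme lead)."""
    digest = hmac.new(secret_key, employee_id.encode(), hashlib.sha256)
    return "user-" + digest.hexdigest()[:12]

# Hypothetical key; in practice it would live in a secrets vault
# and be rotated under your key-management policy.
key = b"rotate-me-and-store-in-a-vault"
token = pseudonymise("jdoe@example.com", key)  # e.g. "user-..." with no PII
```

Because the mapping back to a real identity requires the key, alert dashboards and exported reports can show only the token.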

6: Ensure data flow and ABAC controls across tools

Insider risk teams often rely on shared technologies, such as security information and event management (SIEM) systems and case management tools, to help identify and respond to potential threats. However, integrating these tools without careful attention to data flow and access controls can lead to unintended exposure of sensitive data. That’s why it’s crucial to have a plan for protecting data as it flows across the various tools used in your security stack.

To maintain the privacy of your organisation’s data, it’s important to ensure that attribute-based access control (ABAC) is enforced across all tools. This means defining clear access levels for different teams based on their role and the data they require.

Example

Suppose your organisation uses a SIEM to aggregate and analyse security logs. Sensitive metadata—such as PII or privileged HR data—shouldn’t be ingested into the SIEM at all. Instead, the insider risk team should have separate access to it. This enables them to investigate risk without exposing this data to security operations analysts. Security operations teams should only have access to event data, logs and alerts that are necessary for their role. It is vitally important that there be no visibility into sensitive user data outside the designated insider risk team that is trusted with this data.
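A minimal, hypothetical sketch of such an attribute-based check follows. The team names, attribute tags and clearance model are illustrative assumptions, not a specific product’s API; real ABAC would typically be enforced in a policy engine rather than application code:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Subject:
    team: str
    clearances: frozenset = field(default_factory=frozenset)

def can_view(subject: Subject, record_attrs: set) -> bool:
    """Attribute-based check: records tagged 'pii' or 'hr_privileged'
    are visible only to insider risk team members holding a matching
    clearance; plain event and log data is visible to secops too."""
    sensitive = {"pii", "hr_privileged"} & record_attrs
    if not sensitive:
        return subject.team in {"secops", "insider_risk"}
    return subject.team == "insider_risk" and sensitive <= subject.clearances

soc_analyst = Subject(team="secops")
irm_analyst = Subject(team="insider_risk", clearances=frozenset({"pii"}))
can_view(soc_analyst, {"auth_log"})         # -> True
can_view(soc_analyst, {"pii", "auth_log"})  # -> False
can_view(irm_analyst, {"pii", "auth_log"})  # -> True
```

The key design choice is that access depends on attributes of both the subject and the record, so adding a new sensitive category only requires tagging data, not rewriting role definitions.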

7: Establish strong oversight and accountability

Insider risk teams need oversight too. This is especially true when they’re entrusted with technology that monitors user activity. To prevent misuse, remove cognitive bias from analysis and maintain trust, implement an oversight or ‘watch-the-watcher’ programme. This should include formal audits of anyone with access to the technology, such as those with admin-level access who can change detection policy and those performing triage and analysis.

To ensure audits are impartial, they might be done by a separate compliance team or by someone assigned to that role within your internal insider risk team. This type of oversight shows there’s a commitment to both privacy and security, and it helps maintain the team’s ethical standards. No one should be exempt from oversight; even the highest levels of access should be monitored for potential misuse. Remember that any triaged incident can become a case presented in court, and it must withstand scrutiny under user privacy laws.

Example

Suppose a change is made that surfaces a risky behaviour which wouldn’t have been visible before, ultimately leading to the discovery of an infringement. In this case, there needs to be detailed justification for the change, including when it was made and who made it. This helps show that it was an approved change and not one created to target one particular individual.

Implement strong layered oversight as part of the case review process. This will ensure there is no unconscious bias, and that the data reported is factual.

It’s also important to audit workflow-related items, such as alerts that are remediated. This ensures analysts aren’t abusing their access or approving their own generated alerts, which would effectively make them an exception to the rule.
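One way to make such an audit trail trustworthy is to keep it append-only and tamper-evident, for example via hash chaining. The sketch below is an illustrative assumption, not a description of any particular tool; real deployments would store this in a write-once system with its own access controls:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained record of detection-policy changes.
    Each entry embeds the hash of the previous one, so a silently
    edited or deleted entry breaks the chain and is detectable."""
    def __init__(self):
        self.entries = []

    def record(self, actor, change, approved_by):
        prev = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {"actor": actor, "change": change,
                "approved_by": approved_by, "ts": time.time(), "prev": prev}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append(body)

    def verify(self):
        prev = "genesis"
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != recomputed:
                return False
            prev = e["hash"]
        return True

trail = AuditTrail()
trail.record("admin.a", "lowered download threshold", "oversight-board")
trail.verify()  # -> True; returns False if any entry is later altered
```

Recording who made the change and who approved it directly supports the justification requirement described in the example above.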

8: Handle conflicts of interest with clear procedures

Conflicts of interest are inevitable in insider risk programmes. So, it’s essential to have written procedures in place for handling them. If an analyst is working on a case that presents a potential conflict, there should be a clear handoff process. This will ensure someone else can take over the investigation, and objectivity or privacy won’t be compromised. By clearly defining these procedures, you reduce the risk of bias and ensure that investigations remain fair and transparent.

Example

Suppose an analyst is investigating the suspicious behaviour of a colleague they know personally. To eliminate any bias, there should be a policy requiring the analyst to immediately hand off the investigation to another analyst or to management. This prevents any potential conflict of interest from clouding the investigation.

9: Incorporate privacy training into team development

Privacy isn’t just something that applies to the organisation. Instead, it’s something that should be fundamental to the workings of your security team. Regular privacy training related to user activity monitoring (UAM) is vital for this perspective. It ensures analysts understand the boundaries of their access as well as their responsibilities. And it ensures they’re trained to be aware of cognitive biases. Just like an acceptable use policy, these rules should be strictly enforced with clear and serious consequences for violations.

Example

As part of your team’s annual training, conduct a simulation where an analyst must investigate a potential insider threat while adhering to strict privacy policies. This reinforces the idea that the team mustn’t access data unnecessarily and must stay within the limits of their role.

10: Continuously tune data collection to avoid overreach

At the end of the day, insider risk analysts only want access to the data they need to do their jobs, to reduce risk and protect the organisation. By nature, this does involve internal employee data. However, it’s important to continuously review and refine your monitoring practices to ensure that unnecessary data isn’t being collected. For instance, sensitive categories like medical or legal documents should be explicitly excluded from monitoring to avoid any unintended privacy breaches.

Example

If your technology starts logging excessive amounts of data from employees’ personal calendars or meeting notes due to an approved change, review your monitoring rules and adjust the scope to be as minimally invasive as possible while still mitigating risk to the organisation effectively. It may be a good idea to get input from the core stakeholders referenced earlier.
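As a simple illustration of excluding sensitive categories at collection time, a filter might look like the sketch below. The patterns, sources and function name are hypothetical assumptions; the real list would be agreed with privacy and legal stakeholders and reviewed regularly:

```python
import re

# Hypothetical exclusion rules: categories explicitly excluded from
# collection before events reach analyst-visible storage.
EXCLUDED_PATTERNS = [
    re.compile(r"(?i)\b(medical|diagnosis|prescription)\b"),
    re.compile(r"(?i)\b(attorney|legal[- ]privileged)\b"),
    re.compile(r"(?i)personal[- ]calendar"),
]

def should_collect(event_source: str, content_tags: list) -> bool:
    """Drop events from excluded sources or carrying excluded tags,
    so they are never stored or surfaced to analysts."""
    text = " ".join([event_source, *content_tags])
    return not any(p.search(text) for p in EXCLUDED_PATTERNS)

should_collect("file_share", ["quarterly report"])  # -> True
should_collect("mailbox", ["medical leave form"])   # -> False
should_collect("personal-calendar", [])             # -> False
```

Filtering at ingestion, rather than at display time, means the excluded data never enters the programme’s storage at all, which is the stronger privacy guarantee.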

A balanced approach is possible

Taking user privacy into account while monitoring for risky activity doesn’t have to be an either/or situation. After all, insider risk programmes also aim to identify misuse or exposure of user data by internal employees, and they partner with privacy and HR teams for remediation. This is an integral part of defending employees.

By following these best practices, you can feel confident that you’re building a programme that balances the needs of data security and privacy effectively. The unified goal is to create a safer, more secure environment where employee privacy is respected, and insider risks are mitigated.

Learn more

Read more about working cross-functionally by downloading our ebook Best Practices for Human-Centric Human Risk Management.

Learn about the privacy and access control features for Proofpoint ITM.