- Studies consistently show that insider threats cause higher average damages per incident than external attacks, because insiders already possess legitimate access and trust.
- The three types are malicious insiders (deliberate harm), negligent insiders (mistakes and ignorance), and compromised insiders (hijacked accounts).
- Technical detection through DLP, UEBA, and centralized logging must be combined with organizational measures like least privilege, separation of duties, and clear offboarding processes.
- In Germany, the GDPR (DSGVO), the Federal Data Protection Act (BDSG), and the Works Constitution Act set strict limits on employee monitoring that must be observed when implementing detection measures.
- The most effective protection is a combination of technical controls, a healthy corporate culture, and processes that make misconduct difficult without placing employees under general suspicion.
The Underestimated Danger
When organizations think about cyber threats, most picture hackers in hoodies breaking into systems from the outside. The reality looks different. A significant portion of all security incidents originates not from external attackers but from individuals who already have legitimate access to systems and data: employees, former employees, contractors, and business partners.
The Ponemon Institute puts the average annualized cost of insider threats at over 15 million US dollars for large enterprises. For SMEs, the absolute numbers are smaller, but relative to revenue, they can be existentially threatening. An employee who systematically copies customer data and engineering drawings before switching to a competitor can hit a mid-sized company harder than any ransomware attack.
The topic is sensitive because it touches the tension between security and trust. No organization wants to place its employees under general suspicion. At the same time, it would be negligent to ignore the risk. The art lies in implementing measures that provide effective protection without poisoning the corporate culture. And that is precisely what this article is about.
The Three Types of Insider Threats
Insider threats can be divided into three fundamentally different categories, each with its own motivations, patterns, and countermeasures.
Malicious Insiders
Malicious insiders act deliberately to harm the organization. The motivations are varied: financial gain, revenge after conflicts or termination, ideological conviction, or recruitment by competitors or foreign intelligence services.
Typical scenarios include theft of intellectual property (engineering drawings, source code, customer lists, formulas), sabotage of systems (deleting data, introducing malware, manipulating production systems), selling credentials or confidential information to third parties, and fraud (manipulating financial data, redirecting payments, billing fraud).
Malicious insiders are hard to detect because they access systems within the scope of their normal work that they are legitimately authorized to use. The difference between normal usage and data theft often lies only in the intent and context, not in the technical access patterns themselves.
An anonymized example: a developer at a software company had decided to leave after a performance review perceived as unfair. In the weeks before his departure, he systematically copied the source code of the core products to a private cloud storage. The access didn't raise flags because he worked with the source code daily as a developer. It was only when the employee appeared at a direct competitor and a conspicuously similar product emerged there that suspicion was raised. The forensic investigation showed that the data exfiltration had occurred over weeks.
Negligent Insiders
Negligent insiders cause security incidents not deliberately but through ignorance, carelessness, or circumventing security policies for convenience. This type is quantitatively the most common and causes the most damage in aggregate, even though individual incidents are usually less severe than malicious actions.
Typical scenarios include sending confidential documents to the wrong recipients, using insecure passwords or sharing credentials, bypassing security policies (for example, transferring work data to private USB drives because the VPN connection is too slow), clicking on phishing links or opening malicious attachments, improper disposal of confidential documents, and misconfiguring cloud services leading to data exposure.
The critical difference from the malicious insider: negligent insiders don't want to cause harm. They cut corners because security measures are perceived as obstructive, or they make mistakes because they lack awareness of the consequences. This also means the countermeasures are different: training, user-friendly security solutions, and a culture that treats security as self-evident rather than a tedious obligation.
Compromised Insiders
Compromised insiders are employees whose credentials or devices have been taken over by external attackers. The employee themselves is not the attacker but the unwitting tool. From a detection perspective, however, it appears as though the employee is performing the harmful actions.
Compromise can occur through successful phishing attacks (credentials captured), malware on the work device, insecure personal devices used for work purposes, or compromised passwords from data breaches at other services.
This type is particularly insidious because the actions technically originate from a legitimate user account. An attacker who logs in with stolen administrator credentials has the same rights as the real administrator. Classic perimeter security is ineffective because the attacker is already "inside."
Recognizing Warning Signs
Insider threats often announce themselves through warning signs that are identifiable with attentive observation. These warning signs alone prove nothing, but their accumulation or combination should prompt closer examination.
Technical Warning Signs
- Unusual access times: Access to sensitive systems at unusual times (nights, weekends) that don't fit the normal work pattern.
- Mass data access: An employee suddenly downloads significantly more data than usual in their normal workflow.
- Access to unusual resources: A marketing department employee accesses financial data they don't need for their work.
- Use of unauthorized storage media: USB drives, private cloud storage, or email forwarding to personal addresses.
- Circumvention of security controls: Disabling endpoint protection, using VPN tunnels or Tor browser, encrypting data before transfer.
- Unusual network activity: Large volumes of data leaving the network, particularly to unknown destinations.
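Several of these technical signals can be checked mechanically against log data. As an illustration, the following sketch flags off-hours access from authentication log entries; the user names, timestamps, and the working-hours window are purely illustrative assumptions, not taken from any specific product:

```python
from datetime import datetime

# Hypothetical authentication log entries: (user, ISO timestamp)
events = [
    ("a.mueller", "2024-03-04T09:15:00"),  # Monday morning: normal
    ("a.mueller", "2024-03-09T02:40:00"),  # Saturday night: unusual
]

WORK_HOURS = range(7, 20)  # assume 07:00-19:59 is the normal working window

def is_off_hours(ts: str) -> bool:
    """Flag events on weekends or outside the working-hours window."""
    t = datetime.fromisoformat(ts)
    return t.weekday() >= 5 or t.hour not in WORK_HOURS

flagged = [(u, ts) for u, ts in events if is_off_hours(ts)]
print(flagged)  # only the Saturday-night login is flagged
```

In practice such a check only becomes meaningful per user and per role, since "unusual" depends on the individual work pattern, as the developer-on-release-weekend example below shows.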
Behavioral Warning Signs
- Dissatisfaction: Open or concealed frustration with the organization, supervisors, or colleagues, especially after conflicts, passed-over promotions, or salary negotiations.
- Financial pressure: Recognizable financial difficulties that could motivate data theft or fraud.
- Resignation or pending departure: The period between resignation and actual departure is statistically the most critical for data theft.
- Policy violations: Repeated, deliberate violations of security policies indicate a general disregard for rules.
- Excessive overtime: Working at unusual hours when less oversight occurs can be a warning sign, but doesn't have to be.
It is critical to evaluate these warning signs in context. A developer working on the weekend because a release is due shows no suspicious behavior. The same developer who downloads source code en masse on the weekend after giving notice, however, does. The context makes the difference.
Technical Detection Measures
Various technologies support the detection of insider threats. None is sufficient on its own, but in combination they form an effective detection system.
Data Loss Prevention (DLP)
DLP systems monitor data flow and prevent unauthorized transfer of sensitive data. They can operate on three levels:
Network DLP monitors network traffic and detects when classified data leaves the corporate network via email, web upload, or other channels. Example: an email with a file classified as "Confidential" to an external address is blocked and the security officer is notified.
Endpoint DLP monitors activities on endpoints, including copying to USB drives, printing, screenshots, and file transfers. Especially during the period between resignation and departure, endpoint DLP can provide valuable indicators.
Cloud DLP controls data flow in cloud applications and prevents sensitive data from being uploaded to unauthorized cloud storage.
For SMEs, a full DLP implementation is often too complex and costly. A pragmatic approach focuses on the most critical data (intellectual property, customer data, financial data) and the most likely exfiltration channels (email, cloud storage, USB).
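The pragmatic approach can start very simply: pattern matching on outbound email for classification labels and obviously sensitive data formats. A minimal sketch, assuming a hypothetical internal mail domain and illustrative patterns (a real DLP product uses far more robust detection):

```python
import re

# Illustrative patterns for a pattern-based check on outbound mail
PATTERNS = {
    "iban": re.compile(r"\b[A-Z]{2}\d{2}(?:\s?\w{4}){3,8}\b"),
    "classification": re.compile(r"\bConfidential\b", re.IGNORECASE),
}

INTERNAL_DOMAIN = "example.com"  # assumption: the company's own mail domain

def check_outbound(recipient: str, body: str) -> list[str]:
    """Return names of matched sensitive-data patterns for mail leaving
    the internal domain; an empty list means no finding."""
    if recipient.endswith("@" + INTERNAL_DOMAIN):
        return []  # internal mail is not checked in this sketch
    return [name for name, pat in PATTERNS.items() if pat.search(body)]

hits = check_outbound(
    "partner@other.org",
    "Attached: Confidential price list, IBAN DE89 3704 0044 0532 0130 00",
)
print(hits)
```

Even a crude filter like this covers the "email to external address" channel mentioned above; USB and cloud-storage channels need endpoint or cloud tooling.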
User and Entity Behavior Analytics (UEBA)
UEBA systems create behavioral profiles for users and systems and detect deviations from normal patterns. When an employee who normally accesses ten files per day between 8 AM and 6 PM suddenly downloads hundreds of files at night, UEBA triggers an alert.
The advantage of UEBA over rule-based systems: UEBA also detects novel patterns not covered by predefined rules. The disadvantage: UEBA systems need a learning phase, generate false positives, and require qualified personnel to triage alerts.
For smaller organizations that cannot justify a dedicated UEBA tool, many SIEM solutions offer basic behavior analysis capabilities. The audit logs of Microsoft 365 and Azure Active Directory also contain information that can be used for manual anomaly detection.
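The core idea behind such a behavioral baseline can be sketched in a few lines: learn a user's normal activity level, then flag strong deviations. A minimal illustration using a z-score over daily file-access counts (the numbers and the threshold are made up; real UEBA models are far more sophisticated):

```python
from statistics import mean, stdev

# Hypothetical per-day file-access counts for one user (the learning phase)
history = [8, 12, 10, 9, 11, 10, 13, 9, 10, 11]
today = 240  # a mass download

def is_anomalous(history: list[int], value: int, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations above
    the user's historical mean - a crude behavioral baseline."""
    mu, sigma = mean(history), stdev(history)
    return value > mu + threshold * sigma

print(is_anomalous(history, today))  # True: 240 is far above the baseline
```

This also illustrates the false-positive problem: a legitimate one-off task (a migration, a release weekend) exceeds the same threshold, which is why alerts need human evaluation in context.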
Centralized Logging and SIEM
Centralized logging of all security-relevant events is the fundamental prerequisite for any insider threat detection. If you don't know who accessed which data when, you can neither detect suspicious behavior nor reconstruct what happened after the fact.
Relevant log sources for insider threat detection include authentication logs (successful and failed logins, MFA events), file access logs (who opened, copied, or deleted which files), email logs (recipients, attachments, external forwarding), VPN and network access logs, cloud service logs (SharePoint, OneDrive, Azure, AWS), print logs, and USB activity logs on endpoints.
Logs alone are useless if they aren't analyzed. Define alerting rules for obvious warning signs (for example: mass file downloads by a user who has resigned) and plan regular log data reviews for subtler patterns.
Privileged Access Monitoring
Administrative and privileged access poses a special risk due to far-reaching permissions. Privileged Access Management (PAM) solutions log and control all activities of privileged users, can create session recordings, and provide just-in-time access that is only active for the duration of a specific task.
For SMEs, a full PAM implementation is often oversized, but individual elements like logging of administrative access, separation of admin and user accounts, and restriction of permanent admin rights are pragmatic steps that significantly reduce risk.
Organizational Measures
Technical measures are necessary but not sufficient. Organizational measures address the root causes and create conditions that make insider threats harder to execute.
Least Privilege Principle
The principle of least privilege states that every user should receive only the permissions they actually need for their current task: nothing more and nothing less. A properly implemented authorization concept is the foundation for this. It sounds self-evident but is frequently violated in practice.
Typical violations: employees retain permissions from former roles (permission accumulation). Administrators permanently work with admin accounts, even for everyday tasks. Trainees receive the same permissions as experienced employees. And projects are generously granted access that is not revoked after the project ends.
A regular access review (at least semi-annually for critical systems) ensures that actual permissions match current needs. Automated tools for permission analysis can help identify excessive rights.
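The core of such a permission analysis is a simple set comparison: actual permissions versus the baseline defined for the role. A sketch with hypothetical role names and permission identifiers:

```python
# Access review helper: compare each user's actual permissions against
# the baseline for their role and report the excess (e.g. leftovers
# from a former role). All names are illustrative.
ROLE_BASELINE = {
    "marketing": {"crm_read", "web_cms"},
    "finance": {"erp_finance", "banking"},
}

actual = {
    # b.weber moved from finance to marketing; one right was never revoked
    "b.weber": ("marketing", {"crm_read", "web_cms", "erp_finance"}),
}

def excess_permissions(role: str, perms: set[str]) -> set[str]:
    """Permissions a user holds beyond their role's baseline."""
    return perms - ROLE_BASELINE.get(role, set())

for user, (role, perms) in actual.items():
    extra = excess_permissions(role, perms)
    if extra:
        print(f"{user}: review needed, excess rights: {sorted(extra)}")
```

This catches exactly the permission accumulation described above; the hard part in practice is maintaining the role baselines, not the comparison itself.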
Separation of Duties
Separation of duties ensures that critical processes are not controlled by a single individual. When the person who creates purchase orders is not the same person who approves payments, fraud becomes significantly harder.
Typical applications include separation of ordering and payment approval, separation of development and production deployment, separation of user management and system administration, and separation of data backup and data deletion.
For small organizations where full separation of duties is not feasible due to staffing, the dual-authorization principle offers an alternative: two people must jointly approve critical actions.
Clean Onboarding and Offboarding Processes
The employee entry and exit phases are particularly critical for insider threat prevention.
During onboarding, permissions should be granted that correspond to the actual job profile. A standardized onboarding process with role-based permission packages prevents new employees from receiving too many rights.
During offboarding, all access must be deactivated promptly and completely. This includes not just the AD account but also VPN access, cloud services, API keys, physical access media, and all other systems the employee had access to. A defined offboarding process with a checklist is essential here.
Practice shows that access is regularly forgotten during offboarding, especially for cloud services and SaaS applications procured in a decentralized manner. A central overview of all access per employee (ideally from the identity management system) is the foundation for complete offboarding.
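Such a central overview also makes offboarding verifiable: compare what the checklist actually deactivated against everything on record for the user. A sketch, assuming a hypothetical inventory that would in practice come from the identity management system:

```python
# Verify offboarding completeness against a central access inventory.
# System names are illustrative.
ACCESS_INVENTORY = {
    "j.schmidt": ["AD account", "VPN", "Microsoft 365", "CRM SaaS", "building badge"],
}

def offboarding_gaps(user: str, deactivated: set[str]) -> list[str]:
    """Return access on record for the user that was not deactivated."""
    return [a for a in ACCESS_INVENTORY.get(user, []) if a not in deactivated]

# The checklist covered AD, VPN, and the badge - the decentrally
# procured SaaS accounts were missed, exactly the typical gap.
print(offboarding_gaps("j.schmidt", {"AD account", "VPN", "building badge"}))
```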
Corporate Culture and Open Communication
Employees who feel valued, treated fairly, and heard pose a lower insider threat risk than frustrated, marginalized, or pressured employees. This is not naive hope but a research-backed correlation.
This doesn't mean that a good corporate culture prevents insider threats. But it reduces the motivation for malicious behavior and fosters an atmosphere where employees are more likely to report suspicious behavior, comply with security policies, and admit mistakes before they escalate.
Concrete measures: open feedback channels, fair conflict resolution processes, transparent decisions regarding promotions and salary adjustments, and a security culture that rewards reporting security incidents rather than punishing it.
Legal Boundaries of Monitoring
Detecting insider threats requires monitoring of employee activities. In Germany, the GDPR (DSGVO), the Federal Data Protection Act (BDSG), and the Works Constitution Act set strict boundaries that must be observed. Violations of these boundaries can not only lead to significant fines but also make collected evidence inadmissible in court.
Data Protection Requirements
Processing of employee data is permissible under Section 26 of the German Federal Data Protection Act (BDSG) when it is necessary for the employment relationship. Security measures protecting the organization generally fall under this permission but must be proportionate.
Concretely, this means: you may not indiscriminately read all employees' entire email traffic. Mass surveillance without cause is impermissible. Targeted, proportionate investigations based on concrete, justified suspicion, however, are permissible.
Works Council and Co-determination
If your organization has a works council, many technical monitoring measures are subject to co-determination under Section 87(1)(6) of the Works Constitution Act. This applies to the introduction and use of technical devices intended to monitor employee behavior or performance.
DLP systems, UEBA tools, and comprehensive logging solutions typically fall under this co-determination requirement. This means: you must inform the works council before introduction and conclude a works agreement governing the scope of monitoring, the purpose limitation of collected data, access authorizations, and retention periods.
This is not an obstacle but can even be an advantage. A works agreement creates legal certainty for both sides and signals to employees that monitoring follows clear, transparent rules and is not arbitrary.
Proportionality in Practice
Proportionality requires that the monitoring measure is suitable (it must actually be capable of achieving the security objective), necessary (there is no less intrusive means that achieves the same purpose), and appropriate (the severity of the intrusion into employee rights is in reasonable proportion to the pursued security objective).
Practical example: automatic alerting on mass file downloads by a terminated employee is generally proportionate. Permanent recording of all screen content for all employees generally is not.
In cases of concrete suspicion of criminal offenses (data theft, fraud, sabotage), more extensive measures are permissible that would be disproportionate in normal operations. But even here: carefully document the suspicion, the balancing assessment, and the measure. When in doubt, legal counsel or an attorney specializing in employment law should be consulted.
Practical Examples and Lessons Learned
The following anonymized examples illustrate typical insider threat scenarios and the lessons from them.
The Frustrated Administrator
A system administrator at a mid-sized company was passed over for a promotion. In the following weeks, he began building backdoor access into various systems. When he was eventually terminated, he used these backdoors to access the systems from outside and delete data.
Lesson Learned: Administrator accounts must be subject to special monitoring. Separating personal accounts from admin accounts, regular reviews of admin activities, and a clean offboarding process that also identifies hidden access points would have prevented or limited the damage. Additionally, change detection on critical systems would have been helpful in identifying the creation of new access points.
The Accidental Data Exposure
An HR department employee had stored the salary overview of all employees in a SharePoint folder that was accidentally shared with the entire organization. The misconfiguration went unnoticed for three months until an employee discovered the data and shared it among colleagues.
Lesson Learned: Negligent insider threats can be reduced through automated permission checks, data classification, and regular access reviews. A DLP system would have detected the sensitive personnel data in the shared folder and raised an alert. And training on proper handling of confidential data in SharePoint might have prevented the error in the first place.
The Compromised Sales Account
A sales employee clicked on a phishing link and entered their credentials on a fake Microsoft 365 login page. The attacker used the account to send internal phishing emails to the accounting department that appeared to come from the sales employee. Since the emails came from an internal address, they were not flagged as suspicious.
Lesson Learned: MFA would have prevented the account takeover. A UEBA system would have detected the unusual activities (mass emails to internal addresses, access at unusual times, access from an unknown geographic region). And the phishing attack itself might have been prevented through better awareness.
Building an Insider Threat Program
A structured insider threat program connects technical and organizational measures into a coherent overall concept. For SMEs, we recommend a pragmatic, phased approach.
Stage 1: Build Foundations
- Implement the least privilege principle for all users and systems
- Establish clean onboarding and offboarding processes with checklists
- Set up centralized logging for authentication, file access, and administrative actions
- Introduce regular access reviews (semi-annually for critical systems)
- Activate MFA for all users, or at minimum for privileged accounts and remote access
Stage 2: Improve Detection
- Define alerting rules for obvious anomalies (mass downloads, access at unusual times, access to unusual resources)
- Introduce DLP for the most critical data and channels (email, cloud storage)
- Implement privileged access monitoring for administrator accounts
- Extend the offboarding process with technical review (check last activities of departing employees)
- Conclude a works agreement on technical monitoring (if a works council exists)
Stage 3: Mature the Program
- Activate UEBA capabilities or introduce a dedicated UEBA tool
- Include insider threat scenarios in the risk assessment and update regularly
- Extend the incident response plan with insider threat scenarios
- Conduct regular tabletop exercises for insider threat scenarios
- Formalize collaboration between IT security, HR, and legal departments
Insider Threats in the ISMS
ISO 27001 does not address insider threats as a standalone topic, but the relevant controls span multiple areas of Annex A.
A.5.9 and A.5.10 (Inventory and acceptable use) define which assets exist and how they may be used.
A.5.15 to A.5.18 (Access control) govern who may access which resources and how access is managed.
A.6.1 to A.6.6 (People security) address screening, employment terms, awareness, disciplinary procedures, and responsibilities after termination of employment.
A.8.10 to A.8.12 (Data security) cover information deletion, data masking, and data leakage prevention.
Integrating insider threat measures into your ISMS means viewing these controls not in isolation but as an interconnected system covering the entire employee lifecycle — from hiring through daily work to departure.
The risk assessment should include insider threat scenarios as a separate threat category. In ISMS Lite, insider threat risks can be created as a separate category and linked to the appropriate controls and measures. It is important to consider not only the malicious insider but also negligent behavior and compromised accounts, assessing and treating each as an independent risk.
Insider threats will never be fully eliminable as long as people have access to systems and data. But with a well-designed program that combines technical controls, organizational measures, and a healthy corporate culture, you can reduce the risk to an acceptable level. And that is precisely the goal of an ISMS.
Further Reading
- Berechtigungskonzept erstellen: Von der Planung bis zur Umsetzung
- User Lifecycle: Eintritt, Austritt und Rollenwechsel sauber abbilden
- Logging und Monitoring: Strategie für den Mittelstand
- Social Engineering im Unternehmen: Methoden, Beispiele und Gegenmaßnahmen
- Sicherheitsvorfall erkennen und richtig melden: Der komplette Leitfaden
