- AI dramatically lowers the barrier to entry for attackers: personalized phishing, convincing deepfakes, and automated vulnerability scanning are possible with freely available tools.
- On the defender side, AI improves anomaly detection, accelerates incident response, and reduces false positives in SIEM systems.
- The greatest immediate risk for organizations is not the AI-powered attack but the uncontrolled leakage of confidential data to AI services by their own employees.
- Every organization needs an AI policy that governs which data may be entered into which AI tools and which use cases are approved.
- AI replaces neither security experts nor fundamental security measures. It is a powerful tool that amplifies existing processes but requires solid foundations.
Between Hype and Reality
Hardly a week goes by without a new article proclaiming that AI is either revolutionizing cybersecurity or ushering in the apocalypse. The truth lies, unsurprisingly, somewhere in between. Artificial intelligence is indeed fundamentally changing the cybersecurity landscape, but not in the sensationalist way that headlines suggest.
What is actually happening: AI makes existing attack methods more effective, more accessible, and harder to detect. At the same time, it gives defenders more powerful tools to identify and respond to threats. And it creates an entirely new risk category because organizations are using AI tools without fully understanding their security implications.
For SMEs, a sober assessment matters more than alarmism. You don't need a multi-million-dollar budget for AI-powered security solutions. But you do need an understanding of how AI changes the threat landscape, what opportunities it offers, and what new risks AI usage within your organization entails.
AI as a Tool for Attackers
Available AI technology has significantly expanded the attacker's toolkit. This is less about science fiction scenarios and more about the improvement and scaling of existing attack methods.
AI-Generated Phishing
The most obvious application of AI on the attacker side is creating convincing phishing messages. Large language models can generate personalized emails in seconds that are linguistically perfect, imitate a company's communication style, and contain content tailored to the recipient.
Where phishing emails used to stand out through awkward language, generic greetings, and obvious errors, AI produces texts that are virtually indistinguishable from real business emails. An attacker can feed a language model with publicly available information about an organization (website, press releases, social media profiles) and generate highly personalized phishing campaigns for dozens of employees — each individually tailored.
This fundamentally changes the economics of phishing. Previously, spear phishing was time-consuming and therefore expensive. An attacker had to manually research and draft each email. AI reduces this effort to near zero. What was previously only worthwhile for high-value targets (CEOs, CFOs) now becomes economically viable for broader attacks on mid-market companies.
The practical consequence: the traditional indicators for phishing detection (language errors, generic greetings, conspicuous phrasing) are losing their reliability. Awareness training must adapt and place greater emphasis on contextual warning signs: is the request plausible? Did it come through the expected channel? Does it create unusual time pressure?
Deepfakes and Voice Cloning
Deepfake technology has reached a maturity level that makes it usable for practical attacks. Video deepfakes can overlay a person's face into a video call in real time. Audio deepfakes can convincingly mimic a person's voice with just a few seconds of training material.
The use cases in attack contexts are diverse. A deepfake video call from the supposed CEO ordering an urgent wire transfer — similar to classic CEO fraud. A voice clone calling IT support and requesting a password reset for the CEO's account. A forged voice message from a supervisor asking an employee to send confidential files to an external address.
In January 2024, a case became public in which employees of a multinational company transferred 25 million US dollars after being deceived in a video conference with multiple deepfake versions of their colleagues and superiors. This shows that deepfakes are no longer just a theoretical risk.
For SMEs, this means: verification through a second channel becomes even more critical for unusual instructions. When the CEO orders an immediate wire transfer via video call, verification must occur through a separate, pre-agreed channel — for example, a callback to the known mobile number.
Automated Vulnerability Scanning and Exploit Generation
AI models can analyze software code and identify potential vulnerabilities. What is a valuable tool for security researchers is also available to attackers. AI can accelerate the vulnerability discovery process and, in some cases, generate exploit code that takes advantage of a discovered vulnerability.
Until now, exploit development was the domain of highly specialized attackers. AI lowers the barrier here as well. A less experienced attacker can use AI tools to find vulnerabilities in web applications, APIs, or configurations that they would not have discovered manually.
At the same time, the threat should be realistically assessed: most attacks on SMEs don't require zero-day exploits. The majority of successful attacks exploit known vulnerabilities that weren't patched or misconfigurations. AI-powered exploit generation exacerbates an existing problem but doesn't create a fundamentally new one.
AI-Powered Malware and Evasion
Attackers are experimenting with AI to develop malware that better evades security solutions. AI can modify malicious code so that signature-based detection systems no longer recognize it, without changing its functionality. AI also enables the automated adaptation of attacks to the target environment.
Polymorphic malware that slightly changes its code with each execution has existed for decades. AI elevates this concept to a new level because the variations are smarter and therefore harder to detect. For the defense side, this means that purely signature-based detection is increasingly insufficient and behavior-based approaches are growing in importance.
AI as a Tool for Defenders
The same technology that improves attacks is also available on the defender side and offers considerable potential there.
Anomaly Detection
AI-based anomaly detection is likely the most impactful defensive application. Machine learning models create baselines for normal behavior of users, systems, and networks. Deviations from these baselines are flagged as potential threats.
The advantage over rule-based systems: AI also detects novel attack patterns not covered by predefined rules. A rule-based system triggers an alert when a user downloads more than 100 files in an hour. An AI-based system recognizes that the same user normally downloads 5 files per day and 50 downloads represent a significant deviation, even if the absolute number falls below the rule-based threshold.
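The download example above can be sketched in a few lines. This is a deliberately minimal illustration, not a production detector: the history data, the z-score approach, and the threshold of 3 standard deviations are all assumptions chosen for the example.

```python
# Minimal sketch of per-user baseline anomaly detection (illustrative only).
from statistics import mean, stdev

def build_baseline(history: list[int]) -> tuple[float, float]:
    """Baseline of a user's normal daily download counts."""
    return mean(history), stdev(history)

def is_anomalous(count: int, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a day whose count deviates strongly from the user's own baseline."""
    mu, sigma = baseline
    if sigma == 0:
        return count != mu
    return (count - mu) / sigma > z_threshold

# A user who normally downloads about 5 files a day:
history = [4, 5, 6, 5, 4, 5, 6, 5, 4, 6]
baseline = build_baseline(history)

# 50 downloads stays far below a rule-based "100 per hour" threshold,
# yet is a massive deviation from this user's own baseline:
print(is_anomalous(50, baseline))  # True
print(is_anomalous(7, baseline))   # False
```

Real EDR and SIEM products use far richer models than a z-score, but the principle is the same: the threshold is relative to the individual baseline, not an absolute number.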
In practice, AI-based anomaly detection is found in modern SIEM systems, Endpoint Detection and Response (EDR) solutions, Network Detection and Response (NDR) tools, and Cloud Security Posture Management (CSPM) platforms. Many of these solutions are available as SaaS and therefore accessible to SMEs as well.
Incident Response Automation
AI can automate repetitive tasks in incident response: the initial triage of alerts, enrichment of alerts with contextual information, automatic initiation of containment measures for clear-cut threats, and creation of incident reports.
Security Orchestration, Automation and Response (SOAR) platforms use AI to automatically execute playbooks. When an alert contains a known phishing URL, the system can automatically block the URL, identify affected users, reset their passwords, and create an incident report — all without manual intervention.
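The phishing-URL playbook described above can be sketched as a simple sequence of steps. All function bodies here are hypothetical placeholders; in a real SOAR platform, each step would call an integration (web proxy, identity provider, ticketing system) rather than return a string.

```python
# Sketch of a SOAR-style playbook for a phishing-URL alert.
# Step functions are placeholders for real integrations.

def block_url(url: str) -> str:
    return f"blocked {url} at the web proxy and mail gateway"

def reset_password(user: str) -> str:
    return f"reset password and revoked sessions for {user}"

def create_report(alert_id: str, actions: list[str]) -> str:
    return f"incident report for {alert_id}: {len(actions)} automated actions"

def run_phishing_playbook(alert: dict) -> list[str]:
    """Execute the containment steps without manual intervention."""
    actions = [block_url(alert["url"])]
    # Only users who actually clicked the link need a credential reset.
    for user in alert["recipients"]:
        if alert["clicked"].get(user):
            actions.append(reset_password(user))
    actions.append(create_report(alert["id"], actions))
    return actions

alert = {
    "id": "INC-1042",
    "url": "https://phish.example/login",
    "recipients": ["alice", "bob"],
    "clicked": {"alice": True, "bob": False},
}
for action in run_phishing_playbook(alert):
    print(action)
```

The value is not in any single step but in the fact that the whole chain runs in seconds, at any hour, regardless of whether an analyst is available.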
For organizations with small IT teams — and that applies to the majority of SMEs — this automation can mean the difference between a timely and a delayed response. When only two people are responsible for IT security, automated triage of hundreds of daily alerts is not a luxury but a necessity.
Reducing False Positives
One of the biggest operational problems in cybersecurity is false positives: alerts that turn out to be harmless. In typical enterprise environments, the false positive rate is 40 to 60 percent, meaning roughly half of all alerts are false alarms. This leads to alert fatigue: analysts become desensitized and miss real threats amid the noise.
AI can significantly reduce the false positive rate by evaluating alerts in context. An isolated failed login attempt is harmless. The same failed attempt, followed by a successful login from an unknown region, followed by the setup of an email forwarding rule, is highly suspicious. AI recognizes these connections and prioritizes alerts accordingly.
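The login example above can be illustrated with a toy scoring function. The event types, weights, and threshold below are invented for the example; real products learn these correlations from data rather than hard-coding them.

```python
# Toy illustration of contextual alert scoring: the same events score very
# differently in isolation than as a chain. Weights are invented.

SUSPICIOUS_THRESHOLD = 10

def score_events(events: list[dict]) -> int:
    failed = any(e["type"] == "failed_login" for e in events)
    foreign = any(e["type"] == "login_success" and e.get("region") == "unknown"
                  for e in events)
    forward = any(e["type"] == "forwarding_rule_created" for e in events)
    score = failed * 1 + foreign * 3 + forward * 4
    if failed and foreign and forward:
        score += 10  # the full chain is far more suspicious than its parts
    return score

# An isolated failed login stays well below the alert threshold ...
isolated = [{"type": "failed_login"}]
# ... but the full chain is prioritized as highly suspicious.
chain = [
    {"type": "failed_login"},
    {"type": "login_success", "region": "unknown"},
    {"type": "forwarding_rule_created"},
]
print(score_events(isolated))  # 1
print(score_events(chain))     # 18
```

The point of the sketch: context, not the individual event, determines priority, which is exactly what pushes harmless one-off events below the alerting threshold.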
Threat Intelligence and Prediction
AI models can analyze large volumes of threat intelligence data and identify patterns that are not apparent to human analysts. This includes correlations between different indicators of compromise (IoCs), detection of new attack campaigns in early phases, prediction of likely attack targets based on industry and profile, and automatic attribution of attacks to known threat actors.
For SMEs, these capabilities are primarily relevant when they are integrated into existing security products. Most modern endpoint protection and email security solutions already use AI-based threat intelligence without requiring you to train your own models.
Risks from Your Own AI Usage
Beyond the impacts of AI on attacks and defense, there is a third dimension that is currently the most acute for SMEs: the risks arising from AI usage within your own organization.
Data Leaks to AI Services
The greatest immediate risk: employees enter confidential company data into AI chatbots and tools. Source code is pasted into ChatGPT to find bugs. Contract texts are uploaded for summarization. Customer data is entered for analysis. Internal strategy documents are used for reformulation.
The problem: the data leaves the organization and is processed on the AI provider's servers. Depending on the provider and terms of use, it may be used to train future models, meaning confidential information could potentially flow into responses to other users.
Samsung experienced this firsthand in 2023 when engineers entered proprietary source code into ChatGPT. The incident led to a company-wide ban on external AI tools and brought the problem to public attention. Since then, numerous organizations have reported similar incidents.
For SMEs, the risk is particularly high because AI tool usage often occurs in an uncontrolled manner. Employees use free ChatGPT accounts to do their work more efficiently without considering the data protection implications. Without a clear AI policy and corresponding training, the leakage of confidential data to AI services is virtually inevitable.
Shadow AI
Shadow AI is the AI equivalent of shadow IT: employees use AI tools that have not been approved or vetted by the IT department. This can be ChatGPT, but also specialized AI tools for text generation, image editing, code assistance, data analysis, or presentation creation.
The challenge: unlike classic shadow IT where unauthorized software is installed, the use of web-based AI tools is difficult to detect and even harder to prevent. An employee who opens ChatGPT in the browser and enters a contract text leaves no traces in most environments that would trigger technical controls.
The solution lies not primarily in bans — which are hard to enforce in practice and counterproductive for productivity — but in providing secure alternatives and clear rules.
Hallucinations and Erroneous Outputs
AI models can generate plausible-sounding but factually incorrect answers (hallucinations). When employees uncritically adopt AI-generated content, this can lead to errors in contracts, false information in customer communications, faulty code in applications, or flawed decision-making foundations.
In the security context, this is relevant when AI is used for creating security policies, assessing risks, or configuring systems. An AI-generated firewall ruleset that looks correct at first glance but contains subtle errors can pose a serious security risk.
Data Protection Risks
Using AI tools to process personal data raises data protection questions. When employees enter customer data, applicant data, or employee data into AI tools, this may constitute a violation of the GDPR, particularly when the AI provider is located outside the EU and an adequate level of data protection is not ensured.
The EU AI Act adds another regulatory dimension. Depending on the purpose and risk category of the AI application, additional requirements may apply that must be considered during use.
The AI Policy for Your Organization
Given these risks, every organization needs an AI policy governing the use of AI tools. This policy must be practical, because a policy that imposes a blanket ban on AI will be ignored. And it must be specific, because abstract formulations like "AI may only be used in accordance with applicable regulations" help nobody.
Core Elements of an AI Policy
Data classification: Clearly define which data categories may be entered into external AI tools and which may not. A simple traffic light system works well:
- Green: Publicly available information, general technical questions, anonymized data.
- Yellow: Internal information without personal data or trade secrets. May be entered into approved, data-protection-compliant AI tools, but not into free consumer services.
- Red: Confidential business information, personal data, source code, trade secrets. May only be entered into self-hosted or contractually secured enterprise AI solutions.
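The traffic-light system lends itself to a simple lookup table, for example as part of a self-service check for employees. The category and tool-class names below are illustrative assumptions; adapt them to your own data classification.

```python
# Sketch of the traffic-light data classification as a lookup table.
# Category and tool names are illustrative, not prescriptive.

CLASSIFICATION = {
    "public": "green",
    "anonymized": "green",
    "internal": "yellow",
    "personal_data": "red",
    "source_code": "red",
    "trade_secret": "red",
}

ALLOWED_TOOLS = {
    "green":  {"consumer_ai", "enterprise_ai", "self_hosted_ai"},
    "yellow": {"enterprise_ai", "self_hosted_ai"},
    "red":    {"self_hosted_ai", "contractually_secured_enterprise_ai"},
}

def may_use(data_category: str, tool_class: str) -> bool:
    """Check whether a data category may be entered into a class of AI tool."""
    return tool_class in ALLOWED_TOOLS[CLASSIFICATION[data_category]]

print(may_use("public", "consumer_ai"))       # True
print(may_use("internal", "enterprise_ai"))   # True
print(may_use("source_code", "consumer_ai"))  # False
```

Encoding the policy this explicitly also makes it easy to review: every category and every tool class appears exactly once, and gaps are immediately visible.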
Approved tools: Create a list of approved AI tools that have been vetted regarding data protection, security, and terms of use. For most use cases, enterprise variants with contractual data protection guarantees exist (ChatGPT Enterprise, Azure OpenAI, Google Gemini for Workspace, etc.).
Usage guidelines: Define specific dos and don'ts for the most common use cases: text creation, code assistance, data analysis, research. Make clear that AI-generated results must always be reviewed by a human before they are used.
Responsibilities: The person using an AI tool remains responsible for the result. AI-generated code must go through the same review process as manually written code. AI-generated texts must be checked for accuracy.
Training requirement: All employees who use AI tools must be trained. The training covers the AI policy, data protection aspects, handling hallucinations, and the secure use of approved tools.
Implementation in Practice
Introducing an AI policy works best in three steps:
Step 1: Inventory. Find out which AI tools are already being used in the organization (both openly and as shadow AI). Ask departments directly and without judgment which tools they use and for what purpose. This inventory is the basis for a practical policy.
Step 2: Create and communicate the policy. Create the policy based on the inventory and data protection requirements. Communicate it actively and explain the reasoning. A policy whose purpose employees understand will be followed more readily than one perceived as an arbitrary restriction.
Step 3: Provide secure alternatives. When you restrict AI tools, you must simultaneously offer approved alternatives. An organization that bans ChatGPT but provides no alternative will find that usage continues anyway — just covertly.
AI Risks in the Risk Assessment
AI-related risks should be included as a separate category in your ISMS risk assessment. ISMS Lite supports you in systematically capturing AI-specific risk scenarios and linking them to the appropriate measures. Relevant risk scenarios include:
- Confidential data reaches external providers through employee use of AI tools
- AI-powered spear phishing bypasses existing awareness measures
- Deepfake-based CEO fraud leads to financial losses
- AI-generated erroneous configurations or policies are adopted uncritically
- Dependence on AI-powered security solutions that leave protection gaps upon failure or malfunction
For each risk, you assess probability and impact and define risk reduction measures. Most of the measures described above (AI policy, training, technical controls, verified approval processes) directly address these risks.
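The probability-and-impact assessment can be sketched as a classic risk matrix. The 1-to-4 scales, the level thresholds, and the example ratings below are assumptions for illustration; your own assessment will differ.

```python
# Minimal risk-matrix sketch: probability and impact each rated
# 1 (low) to 4 (high). Thresholds and example ratings are invented.

def risk_level(probability: int, impact: int) -> str:
    score = probability * impact
    if score >= 12:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Hypothetical ratings for three of the scenarios listed above:
scenarios = {
    "data leak via employee AI use": (4, 3),
    "AI-powered spear phishing":     (3, 3),
    "deepfake CEO fraud":            (2, 4),
}
for name, (p, i) in scenarios.items():
    print(f"{name}: {risk_level(p, i)}")
```

The ranking that falls out of such a matrix is what determines which measures you implement first.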
What AI Cannot Do and Why Fundamentals Remain Important
Despite all the enthusiasm about AI-powered security solutions, a sober assessment is important: AI is a tool, not a replacement for fundamental security measures.
An AI-based SIEM doesn't help if the log sources are incomplete. An AI-powered endpoint protection doesn't compensate for missing patches. And AI-based anomaly detection is of little use if there are no defined processes to respond to the detected anomalies.
The priorities remain the same: patch management, access management, network segmentation, backup, incident response. If these foundations aren't in place, even the most advanced AI solution won't significantly improve the security level. It can close gaps in detection and shorten response times, but it cannot replace missing fundamentals.
For SMEs, this means: invest in the foundations first. If your access management has gaps, your patch management takes weeks, and your incident response plan sits in a drawer, acquiring an AI-powered security platform is not the right next step.
Outlook: Where Is AI in Cybersecurity Heading?
Several developments are already clearly emerging; no speculation is required.
AI-powered attacks will become routine. What is considered advanced today (AI-generated phishing, simple deepfakes) will be a standard tool for less skilled attackers within one to two years. The defense must prepare for this.
Autonomous security systems will increase. AI systems will increasingly be capable of not just detecting threats but also automatically responding to them. This carries both opportunities (faster response) and risks (erroneous automatic responses that disrupt business operations).
Regulation is coming. The EU AI Act is just the beginning. Further regulatory requirements for the secure use of AI will follow. Organizations that build AI governance early will be better prepared.
The barrier to entry for attackers will continue to fall. AI democratizes access not only to productive tools but also to offensive capabilities. The defense strategy must assume that even poorly resourced attackers possess advanced capabilities.
What this means for your ISMS: AI is not a one-time project you complete and check off. The technology evolves rapidly, the threat landscape changes continuously, and your measures must keep pace. An annual review of AI-related risks and measures as part of the management review is the minimum.
Further Reading
- Social Engineering in the Organization: Methods, Examples, and Countermeasures
- Building a Security Awareness Program: What Employees Really Need to Know
- Creating an Information Security Policy: From Structure to Approval
- Risk Assessment in the ISMS: Methods and Practical Implementation
- Logging and Monitoring: A Strategy for the Mid-Market
