Jacobsen + Confurius

Digital Violence, Fabricated Identities, Underestimated Risks: Deep Fakes – A Legal Analysis

What the Fernandes/Ulmen Case Reveals About a Growing Gap in the Law, and What Individuals and Businesses Need to Know Now.

I. A Prominent Case With Far-Reaching Implications

One of the most sensational revelations in German media law in the spring of 2026 involved actress and television host Collien Fernandes, who levelled serious allegations against her former husband, actor Christian Ulmen. According to a report by Der Spiegel, Ulmen allegedly created AI-generated pornographic deepfakes of Fernandes over a period of years, created fake social media profiles in her name, and used those profiles to contact men – all without her knowledge or consent. Fernandes filed a criminal complaint with the District Court of Palma de Mallorca, where preliminary proceedings have been initiated. Ulmen is presumed innocent.

The case has triggered a broad societal and political debate. Federal Minister of Justice Stefanie Hubig (SPD) promptly announced a Digital Violence Protection Act, declaring that the creation and dissemination of sexualised deepfakes must be made a criminal offence. The case is therefore more than a celebrity scandal: it exposes a structural gap in German and European law – a gap that affects private individuals, employees, and businesses alike.

II. What Are Deep Fakes?

The term “deep fake” is a portmanteau of “deep learning” and “fake”. It refers to images, videos, or audio recordings that have been manipulated or synthetically generated using artificial intelligence to appear deceptively real. The underlying algorithms are trained on genuine visual and audio material of a person and can then simulate arbitrary situations – statements, actions, or visual scenes that never took place.

Technically, several forms can be distinguished: face-swap deepfakes, in which a person’s face is inserted into an existing video; voice cloning, in which a person’s voice is synthetically replicated; and fully AI-generated content that requires no real footage as a template. The technology is no longer the exclusive domain of Hollywood studios. Freely available apps and cloud services now enable users with little to no technical expertise to create convincingly realistic forgeries within minutes.

III. The Legal Position in Germany: Protection With Gaps

German law offers victims of deepfakes a comparatively broad framework of protection, drawn from several areas of law. Nevertheless, significant gaps remain that the legislature has yet to close.

1. Criminal Law Protection

To date, no specific criminal offence targeting deepfakes exists in Germany. In July 2025, the German Federal Council introduced a bill into the German Federal Parliament to create a new offence under Section 201b of the German Criminal Code (StGB), which would criminalise the dissemination of realistically manipulated AI-generated images or audio recordings, carrying imprisonment of up to two years or, in serious cases, up to five years. The legislative process has not yet been concluded.

Until such a provision enters into force, the following existing offences may apply – though each covers only part of the problem:

  • Section 33 KUG (German Act on the Protection of Copyright in Works of Fine Art and Photography): Distributing a likeness of a person without their consent is a criminal offence punishable by up to one year’s imprisonment. The KUG of 1907 is in principle applicable to AI-generated images, provided the person depicted is recognisable. In practice, however, it has little deterrent effect, as it is a relatively minor offence, and exceptions for images of public figures (Section 23 KUG) create interpretive difficulties in individual cases.
  • Section 201a StGB (Violation of the Highly Personal Sphere of Life by Taking Photographs): This provision is designed for genuine photographic images; its applicability to fully AI-generated content is disputed. Subsection 2 criminalises the unauthorised disclosure of images capable of significantly damaging a person’s reputation – which may in principle apply to deepfake pornography, but the statutory wording presents difficulties.
  • Sections 185 et seq. StGB (Insult, Defamation, Slander): Depicting a person in a pornographic deepfake may qualify as a defamatory statement of fact. Section 187 StGB (knowingly disseminating false facts) is available where the perpetrator has positive knowledge of the falsehood. In practice, evidentiary hurdles and the requirement to file a criminal complaint (Section 194 StGB) complicate prosecution.
  • Section 42 BDSG (Federal Data Protection Act): Unauthorised processing of personal data with intent to cause harm carries imprisonment of up to two years. Since facial images qualify as biometric data of a special category (Art. 9 GDPR), their use in deepfakes without consent is unlawful and may satisfy the elements of this offence.
  • Section 263 StGB; Section 263a StGB (Fraud; Computer Fraud): Where deepfakes are used to induce persons to make financial transfers – for example through cloned CEO voices in so-called “CEO fraud” schemes – criminal liability for fraud is engaged. This is of particular relevance in the corporate context.

2. Civil Law Protection

In civil law, the general right of personality (Allgemeines Persönlichkeitsrecht, Art. 2(1) German Basic Law in conjunction with Art. 1(1) German Basic Law) affords the most comprehensive protection. Victims generally have claims for injunctive relief and removal (Sections 823, 1004 BGB by analogy), as well as for damages and – in cases of particularly serious violations, especially in the intimate sphere – for non-material compensation under the principles developed by the Federal Court of Justice (BGH). Additional claims may arise from the right to one’s own image (Sections 22, 23 KUG) and from the right to data protection-related damages under Art. 82 GDPR.

In the field of unfair competition law, the Berlin Regional Court II (Case 2 O 202/24) held in August 2025 that notional licence fees may be claimed in civil proceedings even in respect of an AI-generated voice – a landmark signal for the assessment of commercially exploited deepfakes.

3. Platform Liability and the Digital Services Act (DSA)

Under the judgment of the Frankfurt Court of Appeal of 4 March 2025 (case 16 W 10/25), a platform operator is not merely obliged to remove reported deepfake content, but must – following a notification – proactively identify and remove equivalent content that has been technically modified to a minor degree. The court made clear that a globally operating technology company with its own AI infrastructure can be expected to detect and suppress equivalent content by technical means. This paradigm shift substantially strengthens legal protection for victims: a single notification triggers a systematic duty to review.

At European level, Art. 35(1)(k) DSA already requires very large online platforms to mark deepfakes prominently where users might mistakenly take them to be genuine. Art. 50 of the EU AI Act further imposes transparency and labelling obligations in respect of AI-generated content. However, the AI Act does not directly give rise to criminal consequences for the creators of deepfakes.

IV. Protection for Private Individuals and Consumers

Deep fakes do not affect celebrities alone. Anyone who publishes publicly accessible photographs on social media profiles – which is today virtually a matter of course – potentially provides sufficient source material for a deepfake. Victims are frequently women; the abuse is predominantly sexual in nature. Federal Minister of Justice Hubig has characterised this as “digital violence” and has emphasised that it can be “just as devastating” as physical abuse.

Anyone who becomes the victim of a deepfake should take the following steps without delay:

  • Preservation of Evidence: Take legally reliable screenshots, record the URL and metadata, and document timestamps.
  • Platform Notification: Report the content to the platform with a specific indication of the legal violation. Under the Frankfurt Court of Appeal case law, this gives rise to duties to review and remove equivalent content.
  • Criminal Complaint: File a complaint with the police or public prosecutor’s office; depending on the circumstances, a formal application for prosecution may also be required (Sections 185 et seq. StGB require such an application). Where appropriate, a complaint may additionally be lodged with the competent data protection supervisory authority.
  • Interim Relief: Claims for injunctive relief may be pursued on an urgent basis by way of an interim injunction; where concrete financial losses have been suffered, a claim for damages should be considered.
  • GDPR Claims: Against platforms – and, where identifiable, against the perpetrator – rights of erasure exist under Art. 17 GDPR, as well as claims for damages under Art. 82 GDPR. Infringements of the GDPR may attract fines of up to 20 million Euro (Art. 83(5) GDPR).
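The evidence-preservation step above can be supported technically. The following is a minimal sketch of how a preserved copy of offending content might be documented in a tamper-evident way – the function name, the JSON fields, and the example URL are illustrative assumptions, not a legal standard; what matters substantively is recording the URL, a UTC timestamp, and a cryptographic hash proving the copy has not been altered since retrieval.

```python
import hashlib
import json
from datetime import datetime, timezone

def preserve_evidence(url: str, content: bytes) -> dict:
    """Record the facts a later complaint will need: the source URL,
    the UTC retrieval time, and a SHA-256 hash of the preserved copy.
    The hash allows anyone to verify later that the stored file is
    byte-identical to what was retrieved."""
    return {
        "url": url,
        "retrieved_at_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "size_bytes": len(content),
    }

if __name__ == "__main__":
    # Placeholder for the actual downloaded page or media file.
    downloaded = b"<html>offending post</html>"
    record = preserve_evidence("https://example.com/post/123", downloaded)
    print(json.dumps(record, indent=2))
```

Such a record does not replace a notarised or forensically certified preservation, but it substantially strengthens the evidentiary position compared with a bare screenshot.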

V. Risks and Action Required for Businesses

1. CEO Fraud and Operational Security

For businesses, deepfakes are no longer an abstract future scenario. Particularly acute is so-called “CEO fraud”: perpetrators clone the voice or video image of a board member or managing director and use it to instruct employees – typically in the accounts payable department – to transfer funds to third-party accounts. Such attacks can result in substantial financial losses within minutes. The applicable criminal provision is Section 263 StGB (fraud); in civil law, the wrongdoer is liable in damages.

Businesses would be well advised to establish clear internal approval procedures for payments: a payment instruction transmitted by telephone or video alone should never be sufficient. Recommended measures include two-channel verification (confirmation via a separate, pre-agreed communication channel), technical safeguards, and regular staff training on social engineering and deepfake risks.
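The two-channel verification recommended above can be expressed as a simple rule in a payment workflow. The sketch below assumes illustrative names (`PaymentInstruction`, the channel labels) and is not a real treasury API; the point it demonstrates is that a voice or video instruction alone never releases a transfer – at least two distinct, pre-agreed channels must independently confirm it.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentInstruction:
    amount_eur: float
    beneficiary_iban: str
    # Channels on which the instruction was independently confirmed,
    # e.g. {"video_call", "callback_landline"}. A cloned voice or video
    # can compromise one channel, but rarely two pre-agreed ones.
    confirmed_via: set = field(default_factory=set)

def may_execute(instruction: PaymentInstruction) -> bool:
    """Release a payment only after confirmation on at least two
    distinct channels. A single channel – however convincing the
    caller sounds or looks – is never sufficient."""
    return len(instruction.confirmed_via) >= 2
```

In practice the second channel should be initiated by the employee (a callback to a number on file, not one supplied by the caller), so that an attacker cannot provide both "confirmations" themselves.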

2. Employment Law Dimension

Deepfakes may affect the employment relationship in several ways.

  • First, as an attack on employees: where an employee is depicted in a deepfake showing apparent disloyalty, disclosure of trade secrets, or sexual content, serious harm may result to reputation, workplace atmosphere, and – in extreme cases – the continued existence of the employment relationship. Affected employees may assert civil law claims against the creator; the employer in turn faces liability for any unjustified dismissal or formal warning issued in reliance on a deepfake mistakenly believed to be genuine.
  • Second, the deepfake problem arises in the context of workplace data protection: where a business deploys AI tools that process employee images or voices – for example for training purposes, virtual assistants, or monitoring – this is subject to co-determination rights under Section 87(1)(6) German Works Constitution Act (BetrVG) and requires a legal basis under data protection law. The creation of deepfakes using employee data without consent and without a legal basis constitutes a serious GDPR infringement.
  • Third, employers face liability where employees create or disseminate deepfakes in the course of their duties. Businesses should therefore embed clear rules of conduct regarding the use of AI-generated content in their IT policies and, where applicable, in a works agreement (Betriebsvereinbarung), and should specify employment law consequences for breaches.

3. Celebrities and Influencers as Advertising Figures: Contractual Safeguards

Businesses that collaborate with celebrities or influencers face a twofold deepfake risk. On the one hand, third parties may create unauthorised deepfakes of the brand ambassador, falsely implying an endorsement by the business – with adverse effects on brand image and consumer trust. On the other hand, the technology opens up the possibility of scaling the image of advertising figures at low cost, which – absent a clearly regulated legal basis – gives rise to serious violations of personality rights.

The right to one’s own image (Sections 22, 23 KUG) and the right to one’s own voice protect celebrities and influencers against the unauthorised use of their likeness for advertising purposes. A deepfake advertisement conveying the impression that a person endorses a product constitutes a serious infringement of personality rights, regardless of whether the material is AI-generated or of real origin. Businesses that tolerate advertising deepfakes created by third parties on their own platforms face, in addition to injunctive relief claims, significant reputational damage.

Cooperation agreements with influencers and brand ambassadors should therefore contain explicit provisions on deepfake use: Is the deployment of AI-generated likenesses or voices permitted? Under what conditions, for what purposes, and on which platforms? Who may use the AI model, and for how long? Of particular importance are deletion and deactivation clauses: without contractual provision, a trained AI model remains permanently available. The allocation of liability between agency, advertiser, and influencer must equally be addressed in the contract.

VI. Outlook: Tightening Regulation on the Horizon

The legal landscape on deepfakes is in flux as of March 2026. At national level, the proposed Section 201b StGB awaits its final parliamentary deliberations. At EU level, members of the European Parliament consider the deepfake provisions of the AI Act to require strengthening; votes are imminent. The Federal Minister of Justice has furthermore announced a Digital Violence Protection Act that would oblige platforms to identify perpetrators and block accounts, and would grant victims improved rights of access to information and erasure.

For businesses, this means: even though Section 201b StGB is not yet in force, the regulatory environment has already tightened considerably. The transparency obligations under the EU AI Act are already partly applicable. Case law – in particular the Frankfurt Court of Appeal’s judgment on platform obligations – is setting new standards. And societal sensitivity to the issue has reached a new high in the wake of the Fernandes/Ulmen case.

VII. Summary

Deep fakes are not a peripheral technical phenomenon but a serious legal risk – for private individuals, businesses, and public figures alike. The Fernandes/Ulmen case has illustrated this with a clarity that surpasses previous debates. Those affected are well advised to act swiftly and decisively: preserve evidence, notify platforms, and pursue criminal and civil remedies in parallel.

For businesses: deepfake risks are a matter for compliance, IT security, and – particularly in the context of influencer marketing – contract drafting. Those who engage influencers or celebrities as brand ambassadors should review existing agreements for deepfake clauses. Those who deploy AI tools within their organisation require clear internal policies and – where a works council exists – a carefully negotiated works agreement.

Contact:
Jens Borchardt
