EU Commission Fines X €120m: DSA Ad Transparency, Scam Ads

On 5 December 2025, the European Commission imposed a €120 million fine on X (formerly Twitter) for breaches of transparency obligations under the Digital Services Act (DSA), citing deceptive design related to “verified” blue checkmarks, deficiencies in the advertising repository, and failures regarding researchers’ access to public data.
For victims of online fraud, the advertising repository finding is particularly relevant because the Commission explicitly links accessible and searchable ad repositories to the ability of researchers and civil society to detect scams and fake advertisements. 
When scam campaigns are funded and distributed through paid ads, a functioning ad repository becomes the primary mechanism to reconstruct what was shown, by whom, to which audience segments, and over what period. 

Background

The enforcement action is based on the DSA as set out in Regulation (EU) 2022/2065, specifically Article 25 (deceptive interface design), Article 26 (ad transparency for recipients), Article 39 (advertising repositories for Very Large Online Platforms and Very Large Online Search Engines), and Article 40 (data access and vetted researcher access), as published in the Official Journal of the European Union. These provisions are part of the DSA’s systemic‑risk and transparency architecture, not just its content‑moderation rules.

X qualifies as a Very Large Online Platform (VLOP) under the DSA. This means it must meet stricter obligations on advertising transparency, public scrutiny, and data access to enable independent assessment of systemic risks, including those linked to scams, disinformation, and other harmful business models.

The DSA’s two-layer ad transparency regime

The DSA does not treat advertising transparency as a single obligation. It establishes a user‑level transparency layer and a public-scrutiny layer that are designed to work together.

At the user level, Article 26 requires that, for each specific advertisement shown to each recipient, the recipient can identify in real time that the content is an ad, who it is presented on behalf of, who paid for it (if different), and meaningful information on the main parameters used to determine to whom the ad is shown (and how to change those parameters).

This is directly relevant to scam ads because deception often starts at the point of exposure, where recipients cannot easily distinguish sponsored persuasion from organic content or understand who is behind it.
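To make the user‑level layer concrete, the following minimal sketch models the per‑impression disclosure Article 26(1) calls for as a simple data structure. The field names are our own illustration, not a format prescribed by the DSA or used by any platform.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AdDisclosure:
    """Illustrative per-impression disclosure under Article 26(1); field names are hypothetical."""
    is_advertisement: bool          # clear, real-time marking that the content is an ad
    presented_on_behalf_of: str     # the person or entity on whose behalf the ad is shown
    paid_by: Optional[str]          # the payer, where different from the advertiser
    main_targeting_parameters: str  # meaningful information on why this recipient sees the ad
    how_to_change_parameters: str   # where the recipient can adjust those parameters
```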

At the scrutiny level, Article 39 requires VLOPs/VLOSEs that present ads to compile and make publicly available an ad repository through a searchable, reliable tool allowing multi‑criteria queries and through APIs, covering the full period during which an ad is shown and up to one year after it was last shown.

Article 39 also requires the repository to contain the ad content and subject matter, the person on whose behalf the ad is presented, the payer (if different), the period shown, targeting parameters (including exclusions where applicable), and aggregate reach information (including breakdowns by Member State where applicable).
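To make the minimum repository dataset equally concrete, the sketch below models an Article 39(2)‑style record. Again, the field names are hypothetical and do not reflect an official schema or X’s actual repository format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AdRepositoryRecord:
    """Illustrative repository record mirroring the Article 39(2) minimum fields (hypothetical names)."""
    ad_content: str              # creative and subject matter (Art. 39(2)(a))
    presented_on_behalf_of: str  # advertiser identity (Art. 39(2)(b))
    paid_by: Optional[str]       # payer, if different from the advertiser (Art. 39(2)(c))
    shown_from: date             # start of the presentation period (Art. 39(2)(d))
    shown_until: date            # end of the presentation period (Art. 39(2)(d))
    targeting_parameters: dict = field(default_factory=dict)   # main parameters, incl. exclusions (Art. 39(2)(e))
    total_reach: int = 0                                        # aggregate recipients reached (Art. 39(2)(g))
    reach_by_member_state: dict = field(default_factory=dict)   # per-Member-State breakdown, where applicable
```

If fields such as the ad content or the payer are routinely empty, the kind of pattern queries discussed below stops working at scale, which is the practical core of the deficiency described later in this article.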

This architecture matters because effective scam‑ad monitoring is rarely about isolated ads. It is about repetition patterns, advertiser networks, brand impersonation tactics, and cross‑border distribution over time.

Why “repository usability” is the critical variable for scam and fake ads

A repository can exist on paper but fail in practice if it cannot be used to run real queries, or if it is missing the minimum information needed for attribution and pattern detection. The DSA expressly anticipates multi‑criteria search and API access, reflecting the reality that scam‑ad detection requires structured investigation rather than manual browsing.

If repository fields such as ad content/topic and payer identity are missing, then crucial investigative questions become unanswerable at scale. A victims’ protection perspective is not primarily concerned with whether an ad library looks complete, but whether it enables defensible documentation of who funded which messaging, how it was targeted, and how wide the reach was.
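As an illustration of the kind of multi‑criteria query Article 39 anticipates, the sketch below filters a dump of repository records (plain dictionaries using the hypothetical keys from the schema sketch above) by payer, Member State and period, and groups repeated creatives. This is not a real repository API; the point is that any record missing the payer or the ad content silently drops out of such an analysis.

```python
from collections import defaultdict
from datetime import date


def scam_pattern_query(records, payer, member_state, start, end):
    """Group ads by identical creative for one payer, Member State and period.

    `records` is a list of dicts using the hypothetical keys from the schema
    sketch above; this is an illustration, not a real repository API.
    """
    campaigns = defaultdict(list)
    for rec in records:
        # Records lacking payer identity or ad content cannot be attributed at all.
        if not rec.get("paid_by") or not rec.get("ad_content"):
            continue
        if rec["paid_by"] != payer:
            continue
        if member_state not in rec.get("reach_by_member_state", {}):
            continue
        # Keep only ads whose presentation period overlaps the investigated window.
        if rec["shown_until"] < start or rec["shown_from"] > end:
            continue
        campaigns[rec["ad_content"]].append(rec)
    return campaigns


# Hypothetical usage: reconstruct repeated re-uploads of one creative in Germany.
# hits = scam_pattern_query(repository_dump, "Example Entity Ltd", "DE",
#                           date(2025, 1, 1), date(2025, 6, 30))
# for creative, ads in hits.items():
#     print(creative[:60], sum(a.get("total_reach", 0) for a in ads), "recipients reached")
```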

What the Commission found deficient in X’s ad repository—and why it aligns with scam‑ad risk

The Commission states that X’s advertisement repository fails to meet the DSA’s transparency and accessibility requirements, and it explicitly links accessible and searchable repositories to the detection of scams and fake advertisements.

The Commission further states that the repository is affected by design features and access barriers, including “excessive delays in processing,” and that it lacks critical information such as the content and topic of the advertisement and the legal entity paying for it.

Those missing fields map directly onto Article 39’s minimum content requirements, which include both the content/subject matter of the ad and the identity of the payer (if different from the person on whose behalf the ad is presented).

A repository that is slow, incomplete, or missing payer identity is not merely less transparent; it materially reduces the possibility of independent verification of scam‑ad patterns, especially in cross‑border contexts where fraud campaigns may use multiple entities and repeated creative re‑uploads.

For scam‑ad victims, this means that even when a platform claims to have an ad library, the essential evidence needed to trace responsibility for harmful campaigns may simply not be available.

Researcher access to public data as a systemic control mechanism

The Commission also found that X fails to meet its DSA obligations to provide researchers access to public data, citing, among other points, that X’s terms of service prohibit eligible researchers from independently accessing public data, including through scraping, and that its processes impose barriers that undermine research into systemic risks.

This finding is consistent with Article 40, which requires VLOPs/VLOSEs to provide data access to regulators for compliance assessment and, upon reasoned request by the Digital Services Coordinator, to provide access to vetted researchers for research contributing to detection and understanding of systemic risks and assessment of mitigation measures.

Blocking or degrading researcher access effectively neutralises one of the DSA’s main systemic‑control mechanisms. Even a formally compliant ad repository becomes of limited value if independent experts cannot query it at scale, check it against other datasets, or investigate longitudinal patterns.

A compliance nuance that remains relevant for scam ads: “removed ads”

Article 39(3) provides that where a platform has removed or disabled access to a specific ad based on alleged illegality or incompatibility with its terms and conditions, the repository shall not include the information referred to in Article 39(2)(a)–(c) for that specific ad. In other words, where an ad is removed, the public repository will typically not show its actual content and subject matter, the identity of the person on whose behalf it was presented, or the identity of the payer.

This creates a structural question: how can transparency and traceability be preserved when harmful ads are removed, particularly when removal occurs after exposure and harm, and when post‑hoc documentation is essential to understand scam campaigns? The DSA addresses this by requiring certain other information in that case, and by creating a separate route for vetted researcher access under Article 40. The operational question is whether this remaining dataset, combined with researcher access, is sufficient for independent scrutiny and accountability in fraud contexts, or whether platforms in practice can use removal to erase key traces of scam campaigns from public view.

From a victims’ perspective, this is a critical gap: the moment when an ad is recognised as harmful is precisely when a maximum level of traceability should be ensured, not when transparency is reduced.
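As a purely illustrative sketch of that redaction mechanic, the function below takes a full repository record (same hypothetical keys as above) and returns what remains publicly visible once an ad has been removed for alleged illegality or incompatibility with the terms and conditions: the content, advertiser and payer fields disappear, leaving only the remaining metadata and a simplified placeholder for the substitute information the DSA requires.

```python
def redact_removed_ad(record, removal_info):
    """Model the public effect of Article 39(3) on a removed ad's repository record.

    Illustrative only: the keys are hypothetical and the substitute information
    the DSA requires in this case is simplified to a single `removal_info` value.
    """
    redacted_keys = {"ad_content", "presented_on_behalf_of", "paid_by"}  # Art. 39(2)(a)-(c)
    public_view = {k: v for k, v in record.items() if k not in redacted_keys}
    public_view["removal_info"] = removal_info  # placeholder for the remaining required information
    return public_view
```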

Next steps

According to the Commission’s press release, X has 60 working days to inform the Commission of the specific measures intended to end the infringement of Article 25(1) DSA related to the deceptive use of blue checkmarks.
X also has 90 working days to submit an action plan addressing the infringements of Articles 39 and 40(12) DSA relating to the advertising repository and researcher access to public data. The press release further outlines the subsequent steps: an opinion by the European Board for Digital Services and a further Commission decision setting an implementation period.

For victims and civil society scrutiny, a key practical indicator will be whether X’s implementation results in an ad repository that is genuinely usable for multi‑criteria search, preserves the legally required minimum fields, and supports independent detection of scam and fake advertisements.

Only then will the DSA’s ad transparency architecture start to deliver what it promises to victims of fraud: the ability to reconstruct paid scam campaigns, attribute them to responsible entities, and push regulators and law enforcement to act.
