EU AI Act: your website chatbot needs an AI disclosure by August 2026

From 2 August 2026 the EU AI Act requires that visitors are told they're talking to an AI. Here's what Article 50 means for SMB websites, where it applies, and a checklist to be ready in time.

If your website has a chat widget, an AI assistant, or any kind of bot that talks to visitors, you have a deadline. From 2 August 2026, Article 50 of the EU AI Act requires that visitors are clearly told when they are interacting with an AI system. No consent, no popup. Just a simple, visible disclosure at the moment of first interaction.

This is one of those rules that sounds tiny until you realise how many WordPress sites have quietly bolted on a chatbot widget over the past two years. Tidio AI, Intercom Fin, HubSpot's AI assistant, Crisp's AI replies, an embedded ChatGPT-powered helper. If any of these run on your site, the obligation almost certainly applies to you.

TL;DR: From 2 August 2026, Regulation (EU) 2024/1689 requires that AI systems interacting with people on your website make it obvious they're AI. The rule lands on both the chatbot vendor and you as the deployer. Penalties for transparency violations are up to EUR 15 million or 3% of global turnover, with proportionality for SMEs. A cookie banner does not satisfy this rule. The fix is usually small: a label in the chat header and a clear opening message. AI-generated images on a normal business site are mostly out of scope unless they're realistic deepfakes.

What changes on 2 August 2026

The EU AI Act entered into force on 1 August 2024, but most of its obligations are phased in. Prohibited practices kicked in on 2 February 2025. Rules for general-purpose AI models followed in August 2025. The general application date for the rest of the Act, including Chapter IV on transparency, is 2 August 2026. That's the date Article 50 starts to apply to your website.

There is one moving piece worth flagging. The Commission published a "Digital Omnibus on AI" amendment in November 2025, and as of April 2026 the European Parliament has voted to delay the machine-readable watermarking obligation (Article 50(2)) by three months. The chatbot disclosure obligation in Article 50(1) is not part of that delay. Plan around 2 August.

What Article 50 actually requires

Article 50 contains several transparency obligations. The two that affect typical SMB websites are:

  • Article 50(1): providers of AI systems "intended to interact directly with natural persons" must make sure those persons are informed they're interacting with an AI. The exception is when this is "obvious" to a reasonably observant user.
  • Article 50(4): deployers of AI that generates deepfakes must disclose that the image, audio, or video is artificially generated. There is also a related rule for AI-generated text published "to inform the public on matters of public interest".

Article 50(5) sets the timing: the information has to be given "in a clear and distinguishable manner at the latest at the time of the first interaction or exposure". In practice, that means before the visitor types their first message, not buried in a privacy policy three clicks away.

The provider (the vendor that builds the chatbot) carries the design obligation. As a deployer (the website owner), your job is to leave that disclosure switched on and to avoid configuring the bot in a way that hides its AI nature.

Does your chatbot fall under the AI Act?

The Act has a specific definition of an "AI system" in Article 3(1): a machine-based system that "infers, from the input it receives, how to generate outputs". The Commission published guidelines on this definition in July 2025 and made the boundary explicit: rule-based systems "based on the rules defined solely by natural persons to automatically execute operations" are out of scope. The capacity to infer is the indispensable condition.

For a typical SMB website, that translates roughly like this:

Chat type                                                  In scope?
Pure decision-tree FAQ flow ("Press 1 for billing")        No
LLM-powered widget (Tidio AI, Intercom Fin, HubSpot AI)    Yes
Crisp or Drift with AI assist features turned on           Yes
Embedded ChatGPT or custom OpenAI/Claude API widget        Yes
Older NLP intent-classification bot                        Probably yes

If your vendor markets the chatbot with the words "AI", "GPT", "smart replies" or "intent detection", assume it's in scope.

The "obvious AI" exemption is not your safety net

Article 50(1) carves out an exception when the AI nature of the system is "obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect". Tempting as it is to lean on this, there's no official guidance yet on what counts as obvious. The AI Office has working groups drafting interpretation guidance, with publication expected in June 2026, roughly eight weeks before the rule starts to bite.

Until that guidance lands, the safe assumption is: disclose anyway. A widget called "AssistBot" with no other AI signals is not obviously an AI system to a normal user. A clearly labelled "AI assistant" header that's visible before the user types anything is the cheapest way to be on the right side of the rule, regardless of how the exception ends up being interpreted.

AI-generated images and the deepfake rule

Article 50(4) is the deepfake clause. It applies to deployers of systems that generate or manipulate image, audio or video content "constituting a deepfake". The trigger is realism: the rule targets content that could plausibly be mistaken for an authentic recording of real people, places or events.

For a normal business site that publishes Midjourney or DALL-E illustrations on blog posts, this matters less than the headlines suggest:

  • A stylised AI banner image of a server rack is not a deepfake. No visible label required under Article 50(4).
  • An AI-generated photo of a real, identifiable person doing something they didn't do is a deepfake. Label it.
  • AI-written editorial content published "to inform the public on matters of public interest" needs disclosure, unless a human has reviewed it and holds editorial responsibility. A blog post about cloud hosting that you wrote with AI assistance and edited yourself is fine.

Recital 134 of the regulation also carves out evidently artistic, satirical or fictional works, where disclosure can be more discreet.

The separate machine-readable marking obligation in Article 50(2) (watermarking generated content in metadata) sits with the vendor, not with you. OpenAI, Midjourney and Stability AI are the ones who need to embed those markers.
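Where a visible label is needed, it can be as simple as a caption rendered next to the image. A minimal sketch, assuming your site builds HTML server-side; the function name and markup here are illustrative, not any standard API:

```typescript
// Sketch: wrap a realistic AI-generated image in a <figure> with a
// visible "AI-generated" caption, as Article 50(4) expects for deepfakes.
// `aiLabelledFigure` is a hypothetical helper, not part of any library.
export function aiLabelledFigure(src: string, alt: string): string {
  return [
    "<figure>",
    `  <img src="${src}" alt="${alt}">`,
    // The disclosure must be visible, not just in metadata.
    "  <figcaption>AI-generated image</figcaption>",
    "</figure>",
  ].join("\n");
}
```

Stylised illustrations don't need this treatment; reserve the caption for images a visitor could mistake for a real photograph.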

Does a cookie banner cover this?

Short answer: it does not count. A cookie banner exists to satisfy GDPR and the ePrivacy Directive, both of which are about lawful processing of personal data. The AI Act disclosure is something else entirely. It's about the nature of the system you're talking to, and it's disclosure-based, not consent-based. The user can't opt out and isn't being asked to.

You'll need both: a GDPR-compliant basis for processing the conversation data your chatbot collects, and an Article 50 disclosure that the interlocutor is an AI. They sit in different places and answer different questions. If you'd like a refresher on the GDPR side, my cookie consent banner guide walks through the basics.

Penalties: not the headline number you might think

The much-quoted "EUR 35 million or 7% of global turnover" figure does the rounds in headlines, but Article 99 sets that ceiling only for prohibited AI practices under Article 5. Article 50 transparency violations sit in a lower tier: up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher.

For SMEs and startups, Article 99(6) explicitly requires that fines be proportionate, and the lower of the percentage or the flat amount applies. That doesn't make the rule optional, but it does mean the practical exposure for a 5-person agency is nothing like the headline figure.

Enforcement in the Netherlands is its own story. The Autoriteit Persoonsgegevens and the RDI have jointly recommended a sector-plus-coordination model with five Dutch authorities sharing oversight, but the formal designating legislation is still expected later in 2026. Even so, the obligations apply on 2 August. The lack of a designated authority is not a free pass.

A practical compliance checklist for SMBs

If you run a WordPress or marketing site with a chatbot, here's the short version of what to do before August.

  1. Identify whether the bot is an AI system. Check your vendor's product page. If it mentions GPT, generative replies, smart routing or intent detection, assume yes.
  2. Make the AI nature visible at first contact. Set the chat widget header to read something like "AI assistant" or "Powered by AI". Add a one-line opening message: "Hi, I'm an AI assistant. How can I help?" Doing both is better than either alone.
  3. Don't switch off your vendor's built-in disclosure. If the SaaS provider has an "AI label" toggle, leave it on. Suppressing the provider's design is exactly what Article 50 prohibits the deployer from doing.
  4. Audit AI-generated images. Stylised illustrations: no label needed. Realistic images of identifiable people: add a visible "AI-generated" caption. When in doubt, label.
  5. Update your privacy policy. Add a short paragraph explaining that the site uses an AI chatbot, what data it processes, and on what GDPR basis. This handles the GDPR side, separately from the AI Act disclosure.
  6. Watch for the AI Office guidance in June 2026. The Commission's code of practice on AI-generated content and the Article 50 guidelines are both expected before the August deadline. They will be the first authoritative interpretation of what "clear and distinguishable" actually looks like in practice.
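Steps 2 and 3 above can be sketched in code. This is a hypothetical widget configuration, not any real vendor's API; map the three fields onto whatever settings your actual provider (Tidio, Intercom, Crisp, or a custom embed) exposes:

```typescript
// Sketch: an Article 50 disclosure baked into a generic chat widget
// config. `ChatWidgetConfig` and `buildDisclosureConfig` are
// illustrative names, not a real vendor API.
interface ChatWidgetConfig {
  headerTitle: string;    // visible before the visitor types anything
  openingMessage: string; // first message in the transcript
  showAiLabel: boolean;   // vendor's built-in AI disclosure, if it has one
}

export function buildDisclosureConfig(): ChatWidgetConfig {
  return {
    // Article 50(5): disclose at the latest at first interaction,
    // so the label lives in the header, not in a linked policy page.
    headerTitle: "AI assistant",
    openingMessage: "Hi, I'm an AI assistant. How can I help?",
    // Checklist item 3: never switch the vendor's own label off.
    showAiLabel: true,
  };
}
```

The point of the sketch is the placement: both signals fire before the first message is typed, which is what "at the latest at the time of the first interaction" demands.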

This is also a good moment to look at the wider EU compliance picture. If you've already worked through the European Accessibility Act explainer, the NIS2 article, or the EU data sovereignty piece, the AI Act slots into the same cluster. A 30-minute pass over all four covers most of what an SMB site owner needs to think about in 2026.

Key takeaways

  • Article 50 of the EU AI Act applies from 2 August 2026. If your site has an AI chatbot, the rule covers you.
  • The disclosure has to be clear and visible at first interaction. A label in the chat header and a one-line opening message is usually enough.
  • Rule-based decision-tree bots are out of scope. Anything with an LLM or ML inference layer is in.
  • Don't rely on the "obvious AI" exemption until the AI Office publishes guidance in June 2026.
  • A cookie banner does not satisfy the AI Act. It's a separate obligation.
  • Penalties are up to EUR 15 million or 3% of global turnover, with proportionality for SMEs.

Need a WordPress fix or custom feature?

From error fixes to performance improvements, I build exactly what's needed—plugins, integrations, or small changes without bloat.

Explore web development