IT WILL SOON BE MANDATORY IN THE EU TO LABEL CONTENT GENERATED BY ARTIFICIAL INTELLIGENCE


New transparency rules for artificial intelligence will come into force in the European Union on 2 August this year. Content generated by large language models (LLMs) will have to be labelled, deepfakes disclosed, and users alerted when they are communicating with AI. The aim is to reduce the risk of misuse of the technology: manipulation, misleading content, fraud and the like.

The new measures form part of the Artificial Intelligence Act (AI Act). Its individual provisions are coming into force gradually; thanks in part to the Czech Republic’s efforts, some sections have been softened or postponed.

Jana Vorlíček Soukupová from the law firm Dentons summarised the situation as follows:

The new rules primarily concern generative artificial intelligence, i.e. systems that create text, images, audio or video content. They are intended to ensure that it is clearly distinguishable when content has been created by a human and when by an algorithm. AI is capable of creating content that is almost indistinguishable from reality. The regulation therefore introduces a simple principle – people have the right to know when content has been created by artificial intelligence or when they are communicating with AI rather than a human.

Under the new rules, providers of AI systems (such as OpenAI) must ensure that all outputs – text, images, audio or video – contain a machine-readable label indicating that they were created or modified by artificial intelligence. Typically, this will involve metadata, digital watermarks, cryptographic tags or other technical identifiers embedded directly into the file. Technical labelling is key. It will enable social media platforms, search engines, content verification tools, the media and other digital platforms to automatically recognise that content has been created by artificial intelligence.
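To make the idea of a machine-readable label concrete, the sketch below embeds one into a PNG file as a standard tEXt metadata chunk and reads it back. This is a minimal illustration only: the keyword "AI-Generated" is invented for the example, not a field from the AI Act or from any standardised provenance scheme, and real implementations are expected to use more robust techniques such as cryptographically signed manifests or watermarks.

```python
import struct
import zlib


def _chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte big-endian length, 4-byte type, data,
    # then a CRC32 computed over the type and data.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))


def add_ai_label(png: bytes, label: str = "true") -> bytes:
    """Insert a tEXt chunk after IHDR marking the file as AI-generated.

    The 'AI-Generated' keyword is illustrative, not a standardised field.
    """
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    # IHDR is always first: 8 sig + 4 len + 4 type + 13 data + 4 crc.
    ihdr_end = 8 + 4 + 4 + 13 + 4
    text = _chunk(b"tEXt", b"AI-Generated\x00" + label.encode("latin-1"))
    return png[:ihdr_end] + text + png[ihdr_end:]


def read_labels(png: bytes) -> dict:
    """Walk the chunk list and collect tEXt keyword/value pairs,
    the kind of check a platform could run automatically."""
    labels, pos = {}, 8
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = data.partition(b"\x00")
            labels[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # length field + type + data + crc
    return labels
```

A metadata chunk like this survives ordinary file copying, which is why the regulation pairs it with sturdier layers (watermarks, cryptographic tags) that are harder to strip out.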

In addition to technical labelling, a requirement for visible labelling of outputs for end-users will also be introduced from August 2026. Certain entities deploying AI systems will be required to label these outputs properly, clearly and recognisably. This applies in particular to so-called deepfakes – imitations of real people or events created by artificial intelligence – as well as to content intended to inform the public. The rules on how to label content are currently being finalised, but it is expected that this will involve visual icons, text labels, verbal warnings, or messages in captions.

Another change will affect interactive systems such as chatbots and virtual assistants. If a person is communicating with an AI system, they must be clearly informed of this from the very first interaction. Transparency is a fundamental prerequisite for trust in the digital environment. A user should never find themselves in a situation where they believe they are communicating with a human, but are in fact being responded to by an algorithm.
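The disclosure-at-first-interaction requirement can be sketched as a thin wrapper around any reply function. Everything here is hypothetical: the class name, the wording of the notice and the wrapped `reply_fn` are illustrative, and the AI Act does not prescribe any particular implementation.

```python
DISCLOSURE = "You are chatting with an AI assistant, not a human."


class TransparentChatbot:
    """Wraps a reply function so the very first response
    carries an AI disclosure, as Article 50 requires."""

    def __init__(self, reply_fn):
        self.reply_fn = reply_fn
        self.disclosed = False

    def respond(self, message: str) -> str:
        reply = self.reply_fn(message)
        if not self.disclosed:
            # Disclose once, at the start of the conversation.
            self.disclosed = True
            return f"[{DISCLOSURE}]\n{reply}"
        return reply
```

The point of the wrapper design is that the disclosure cannot be forgotten by any individual bot: it is attached before the user sees the first reply, regardless of what the underlying model returns.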

The obligation of transparency is imposed on developers, suppliers and operators of AI systems by Article 50 of the European Union’s Artificial Intelligence Act (AI Act), adopted in 2024. Rules for the detection and identification of AI-generated content, including deepfakes and communication with chatbots, will come into force at the beginning of August 2026. The European Commission is therefore preparing a Code of Practice on the transparency of AI-generated content, which is intended to help fulfil these obligations in practice and is due to be finalised in June 2026.

The second draft of the code, published in mid-March, defines, among other things, technical standards for labelling AI content, such as metadata or digital watermarks. The code consists of two parts: the first contains rules for providers of AI systems on detecting and labelling AI-generated content; the second sets out guidelines for labelling deepfakes, addressed to entities deploying AI systems.

One of the fundamental principles of the code is multi-layered labelling of AI-generated content for AI system providers, i.e. the use of multiple techniques in combination. In addition, providers should make tools available to users for detecting AI-generated or manipulated content. For the labelling of deepfake content, the code proposes the introduction of an “AI” icon.
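The multi-layered principle can be illustrated as an output that carries both a human-visible marker and a machine-readable record at once. The structure and field names below are assumptions for illustration; the draft code does not specify a data format, only that several techniques be combined.

```python
from dataclasses import dataclass


@dataclass
class LabelledOutput:
    content: str
    visible_label: str  # layer 1: marker shown to end-users
    metadata: dict      # layer 2: machine-readable record for platforms


def label_output(content: str, generator: str) -> LabelledOutput:
    # Combine a visible marker with machine-readable metadata,
    # in the spirit of the draft code's multi-layered labelling.
    return LabelledOutput(
        content=content,
        visible_label="AI",  # the icon proposed in the draft code
        metadata={"ai_generated": True, "generator": generator},
    )
```

Layering matters because each technique fails differently: a visible badge can be cropped out, metadata can be stripped on re-upload, so combining them makes the label harder to lose end to end.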

Source: lupa.cz