Generative artificial intelligence (AI) offers several benefits for the trademark licensing industry: creating images of new licensed products, developing creative assets, and conducting market research. However, it also raises a number of legal issues that brands must be aware of.

Here’s everything you need to know.

What is generative AI?

Generative AI refers to a type of artificial intelligence that can create new content or data. It uses machine learning algorithms to scan, review and learn from large volumes of data and then generate new content based on that learning. The generated content can include text, images, music and video.

Some of the current applications of generative AI include chatbots, language translation software and recommendation engines. However, generative AI has the potential to create entirely new forms of content that would be impossible for humans to produce on their own.
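
To make the idea concrete, here is a minimal, purely illustrative sketch of text generation using the open-source Hugging Face transformers library (Hugging Face comes up again below); the model name and prompt are placeholder choices, not recommendations:

    # Minimal, illustrative sketch of text generation with a small open model.
    # Assumes the Hugging Face "transformers" library is installed; the model
    # ("gpt2") and the prompt are placeholders, not recommendations.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt based on patterns learned from its training data.
    result = generator(
        "A tagline for a new line of licensed outdoor merchandise:",
        max_new_tokens=30,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])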

Recently, there has been a great deal of interest in, and use of, new generative AI systems such as OpenAI’s ChatGPT (now built on the GPT-4 model) and DALL-E 2. Microsoft has invested billions in OpenAI and is using ChatGPT to enhance its Bing search engine and Edge web browser. Google has launched its own AI chatbot, Bard. And Amazon Web Services (AWS) has partnered with Hugging Face to make Hugging Face’s products, which include the AI language-generation model BLOOM, available to AWS cloud customers who want to use them as building blocks for their own applications. As the technology continues to advance, we will see greater adoption and more applications of generative AI.

Use of AI in the trademark licensing industry

Trademark licensing is an important commercial tool for trademark owners, enabling them to derive significant rewards by allowing third parties to use their marks. There are several ways generative AI can be a potentially game-changing technology for the trademark licensing industry.

One key use of generative AI is developing images of proposed new licensed products: the system is given a brand owner’s logos and trademarks, style rules and guidelines, and the desired categories and products as inputs, and generates candidate product images in response.
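
By way of illustration only, a simplified sketch of that workflow, assuming the OpenAI Python SDK and its image-generation endpoint (the article mentions DALL-E 2 above), might look like the following; the brand, style rules and product category are hypothetical:

    # Illustrative sketch: composing an image-generation prompt from brand
    # assets and guidelines. Assumes the OpenAI Python SDK and an API key in
    # the environment; the brand, style rules and category are hypothetical.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    brand = "ACME Outdoors"  # hypothetical brand owner
    style_rules = "bold sans-serif logo, forest-green and white palette"
    category = "insulated stainless-steel water bottle"

    prompt = (
        f"Product concept photo of a {category} for the {brand} brand, "
        f"following these style guidelines: {style_rules}. "
        "Plain studio background, no text other than the logo."
    )

    response = client.images.generate(model="dall-e-2", prompt=prompt, n=1, size="1024x1024")
    print(response.data[0].url)  # URL of the generated concept image, for internal review

In practice, images generated this way would be treated as concepts for human review rather than as finished licensed assets.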

Other possible uses include developing creative assets such as style guides and sales materials; conducting market research; drafting licensing plans that outline potential categories for extension, retail channels and territories; and identifying potential licensee partners.

Legal issues with generative AI

There are several legal issues that can arise with the use of generative AI. One of the main concerns is that it can create content that may infringe on someone else’s intellectual property rights. For example, if an AI system generates an image of a proposed new licensed product that looks substantially similar to a competitor’s existing product, there could be questions about whether it constitutes copyright and/or trade dress infringement.

Similarly, if an AI system generates a design for a licensed product that is similar to an existing patented product, there could be patent infringement issues. In the event of infringement claims, who is liable? Is the owner of the AI system that scraped the Internet for huge volumes of data responsible, or is the user who prompted the system and used the output liable? Although case law in this area is scarce, owners of AI systems are imposing terms of use that seek to shield the owners against liability and shift responsibility to the users. Whether that strategy will succeed remains uncertain for the same reason: the case law simply is not there yet.

Another legal issue with generative AI is whether the output is protectable by copyright. Most copyright laws worldwide are based on the assumption that works of authorship are the creative output of human beings. With generative AI, the law has fallen behind reality. One argument in favor of protectability is that corporations are routinely recognized as the authors of works (for example, under the work-made-for-hire doctrine). Unfortunately, in connection with purely AI-generated works, that argument has not yet been embraced by either the US Copyright Office (USCO) or the courts.

On February 21, 2023, the USCO decided to register a work that was generated by both a human (i.e., the text) and AI (i.e., the images), specifically including the human-generated text and the selection, coordination, and arrangement of that text with the AI-generated images. The USCO conditioned its decision on the copyright applicant’s explicit exclusion of the non-human authorship contained in the work. In reaching its decision, the USCO reasoned that “[b]ecause of the significant distance between what a user may direct the AI system to create and the visual material the system actually produces,” the applicant did not have enough control over the final images generated to legally be considered the “inventive or master mind” behind them.

On March 16, 2023, the USCO issued further guidance stating that AI-generated works can indeed “contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that ‘the resulting work as a whole constitutes an original work of authorship’… What matters is the extent to which the human had creative control over the work’s expression and ‘actually formed’ the traditional elements of authorship.” However, merely supplying prompts to an AI system does not satisfy the human authorship requirement, so works generated solely from prompts are not registrable. According to the USCO, prompts “function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output.” The USCO’s guidance advises that copyright registration applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work.

Regulatory guidance and enforcement

The US Federal Trade Commission (FTC) has issued guidance on the use of AI and algorithms in business practices. The FTC recommended that companies take steps to ensure that their AI systems are accurate, transparent, explainable and fair, and that they be mindful of the potential for these systems to perpetuate fraud or amplify biases.

On March 3, 2023, FTC spokesperson Juliana Gruenwald told Business Insider, “The FTC has already seen a staggering rise in fraud on social media… AI tools that generate authentic-seeming videos, photos, audio, and text could supercharge this trend, allowing fraudsters greater reach and speed.”

In June 2022, the FTC recommended that Congress pass laws to prohibit the use of AI tools to commit fraud and cause consumer harm. The FTC has also brought enforcement actions against companies for deceptive or unfair practices and privacy violations related to AI.

In March 2022, the FTC reached a settlement with a company over AI-related privacy violations, requiring the company to destroy algorithms and models built on wrongfully collected and processed data. The field of AI and machine learning is evolving rapidly, and the regulatory landscape is likely to change with it, so users of generative AI should stay informed of the latest developments and best practices in this area.

Minimizing legal liability

When using generative AI, there are several steps companies can take to minimize liability:

  1. Be transparent: Clearly communicate that AI-generated content was, in fact, generated by an AI system. If appropriate, provide information about the prompts used to elicit the AI-generated output (one way to keep track of this is sketched after this list).
  2. Review output: Monitor the output of the AI system to identify and address any issues or errors. Have a process in place for company personnel to review, search for and remove infringing, inappropriate or harmful content generated by the AI system.
  3. Engage counsel: Consult with legal counsel to ensure compliance with relevant laws and regulations, and to develop appropriate policies and procedures for using generative AI.
  4. Implement safeguards: Implement safeguards to avoid violations of applicable laws and regulations and to maximize compliance with the company’s established policies and procedures.
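
As one possible way to support the first two steps, a company might keep a simple provenance record for every AI-generated asset so that disclosure and human review can be tracked; the field names and workflow below are assumptions, not an industry standard:

    # Minimal sketch of a provenance record for an AI-generated asset,
    # supporting disclosure (step 1) and human review (step 2).
    # Field names and workflow are assumptions, not a standard.
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone
    from typing import Optional
    import json

    @dataclass
    class AIAssetRecord:
        asset_id: str               # internal identifier for the generated asset
        model: str                  # which AI system/model produced it
        prompt: str                 # the prompt used, for disclosure if appropriate
        generated_at: str           # ISO timestamp of generation
        ai_generated: bool          # explicit flag used for labeling/disclosure
        reviewed_by: Optional[str]  # person who reviewed it, once review is complete
        approved: bool              # outcome of the human review

    record = AIAssetRecord(
        asset_id="concept-0042",
        model="dall-e-2",
        prompt="Product concept photo of an insulated water bottle ...",
        generated_at=datetime.now(timezone.utc).isoformat(),
        ai_generated=True,
        reviewed_by=None,
        approved=False,
    )

    print(json.dumps(asdict(record), indent=2))  # stored alongside the asset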

Generative AI can be a valuable tool for the trademark licensing industry, but its use can create legal and regulatory issues. Given those risks, companies should establish a framework for employee use and develop policies and procedures that address the legal and ethical concerns raised by generative AI, thereby minimizing the risk of claims and damages.

Source: thedrum.com