How Does Artificial Intelligence Affect Intellectual Property Protection?

Artificial Intelligence (AI) is revolutionizing the way we interact with technology and the internet. As AI continues to advance at an unprecedented pace, it is having a profound impact on intellectual property (IP) protection.

AI chatbots such as ChatGPT can produce remarkably sophisticated, human-like written answers because the underlying language model interprets questions and tasks and generates responses based on the vast volumes of text and data on which it has been trained. AI is increasingly visible in the public sphere, and its role in business is only set to grow. The UK government estimates that AI contributed £3.7bn to the UK economy last year, so interest in AI, and the funding that follows it, will continue at speed.

These innovative developments will have a positive impact on how we interact with technology and the internet. However, there are also significant potential risks: advancements are moving at such a rapid pace that most AI tools on the market have not yet been fully scrutinised and are subject to little or no regulation. Elon Musk and the late Stephen Hawking are among those who have called for greater research and regulation to ensure AI is developed in an ethical manner.

In 2021, the European Commission (EC) proposed the Artificial Intelligence Act, designed to encourage AI developers in Europe to keep transparency and user safety front of mind by assigning applications of AI to three risk categories (unacceptable, high-risk and non-high-risk).

In April 2023, the Cyberspace Administration of China released draft measures for the administration of generative artificial intelligence services for public comment. The draft measures attempt to balance the development of the technology with maintaining control over it, to ensure that it does not disrupt social order. One of the key requirements is that companies creating generative AI technology must implement safeguards to ensure compliance, not only so that generated content does not fall foul of government policy, but also with consideration of the IP implications.

With AI technology moving so quickly, however, legislation around AI needs to be kept up to date to remain relevant and useful. Overly prescriptive regulations will stifle development, whereas no regulation at all may result in rapid disruption and unforeseen negative consequences.

Key debates are ongoing about the impact of AI on existing IP systems, and whether AI-generated work can be patented or copyrighted in the same way as human-generated work.

There is no mention of the IP impact in the EC’s Artificial Intelligence Act, leaving uncertainty as to how rightsholders’ IP will be protected. Similarly, there is little legislation in the UK specifically designed to keep IP safe from the risks associated with AI. Last year the UK government opened a consultation on whether computer-generated works without a human author should continue to receive copyright protection, which resulted in UK law continuing to protect computer-generated works. In China, courts have found that AI-generated content can be subject to copyright protection where there is a sufficient degree of human involvement with the AI in the creative process of an original work.

Under the regulation currently in place, AI could be seen to pose a risk to rightsholders’ IP, raising issues around authorship and ownership as well as copyright infringement. The output of generative AI depends both on user input and on the huge dataset, much of it harvested from the internet, used to train the model. Because the system draws on a large corpus of human-generated text, it is possible, and indeed likely, that the content it produces will reproduce elements of existing literature or art.

Technically, this means the original rights owner could bring a copyright infringement claim against an AI generator or its creator. It is still unclear, however, how such a case would be treated and who would be held responsible. It is likely that creators of AI tools will need to show that proper safeguards have been put in place to prevent IP infringement by their systems. Developers should aim to establish proper rules and contracts with third parties, such as artists, image library owners, and database owners, covering the data used to generate output. Users of such systems should check that any content they generate does not infringe third-party IP rights before using it, particularly in a commercial context.
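
By way of illustration only, the sketch below shows one basic form such a check could take: comparing a draft piece of AI-generated text against a reference corpus of known works and flagging close matches for human and legal review. The corpus, the similarity threshold and the use of Python’s standard difflib matcher are all assumptions made for the example; a real clearance workflow would rely on far larger datasets, specialist tooling and professional advice.

```python
# Minimal illustrative sketch: flag AI-generated text that closely matches
# known reference works before it is used commercially. The corpus, threshold
# and matching approach are assumptions for illustration only.
from difflib import SequenceMatcher

# Hypothetical reference corpus of existing works the brand has licensed or indexed.
REFERENCE_WORKS = {
    "poem_001": "The quick brown fox jumps over the lazy dog at dawn.",
    "slogan_042": "Unlock tomorrow's ideas with yesterday's wisdom.",
}

SIMILARITY_THRESHOLD = 0.8  # assumed cut-off; tune to your own risk appetite


def flag_potential_overlap(generated_text: str) -> list[tuple[str, float]]:
    """Return reference works whose similarity to the generated text exceeds the threshold."""
    flagged = []
    for work_id, reference_text in REFERENCE_WORKS.items():
        ratio = SequenceMatcher(None, generated_text.lower(), reference_text.lower()).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            flagged.append((work_id, round(ratio, 2)))
    return flagged


if __name__ == "__main__":
    draft = "The quick brown fox jumps over the lazy dog at dusk."
    matches = flag_potential_overlap(draft)
    if matches:
        print(f"Review before use - close matches found: {matches}")
    else:
        print("No close matches in the reference corpus.")
```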

With IP laws both in the UK and globally struggling to keep up with the pace of AI development, brands need to educate themselves on the potential risks associated with AI-generated content. If a brand is looking to use AI-generated content for its own marketing or product design, for example, it should be aware that AI can generate content that infringes third-party IP such as trademarks or copyrighted images. Ensuring it has the legal rights to use AI-generated content will be key.

It is not only companies looking to use AI tools that need to be aware of the associated risks to IP. Now that AI tools are so widespread, any brand is at risk of having its IP infringed, whether accidentally or deliberately. To help mitigate this risk, brands need to make sure they have a robust protection strategy in place. Some of the steps they should take are as follows:

  1. IP assessment: Companies should conduct audits of their intellectual assets to identify value that is currently unprotected. Potential infringement of third-party rights should also be examined and the risks quantified, with professional assistance. This will also give them a better understanding of the scope of their own and third-party rights.
  2. Copyright protection: Clear evidence of copyright ownership should be in place, so that if AI-generated content threatens the brand’s ownership, the brand owner can provide sufficient evidence to win a dispute. This needs to be rolled out in all the countries where the brand has a footprint. Blockchain-based solutions can assist in securing immutable evidence where recordal systems are unavailable or cost-prohibitive (see the sketch after this list).
  3. Patenting: AI solutions are notoriously difficult to patent, and IP professionals and the courts are in ongoing conversations about what is and is not patentable. Companies will need to consult with an IP specialist to work out which parts of their innovation to patent to give them the best protection against infringement. A professional can also provide guidance as to the ‘how’.
  4. IP policy development: Companies should develop clear IP policies that outline their expectations for the use and protection of their IP assets. This can help to ensure that employees, partners, and third parties are aware of their responsibilities and obligations regarding IP protection.
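
As a purely illustrative sketch of the blockchain-assisted evidence idea in step 2, the snippet below shows the first step such services typically automate: computing a tamper-evident fingerprint of a creative asset together with a capture timestamp. The file name is hypothetical, and anchoring the resulting digest on a blockchain or with a trusted timestamping provider is deliberately left out, as that would normally be handled by a third-party service.

```python
# Sketch of the evidence-capture step behind a blockchain-based approach:
# fingerprint a creative asset so that its existence at a point in time can
# later be proven. Anchoring the digest on-chain is not shown here.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def fingerprint_asset(path: str) -> dict:
    """Hash a file and wrap the digest in a simple, shareable evidence record."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return {
        "file": Path(path).name,
        "sha256": digest,
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = fingerprint_asset("campaign_artwork.png")  # hypothetical asset
    print(json.dumps(record, indent=2))
    # The sha256 value is what would be anchored on-chain or lodged with a
    # timestamping service as immutable evidence of the work.
```

The point of the design is simply that a cryptographic hash changes if the work changes, so a dated, independently held record of the hash is strong evidence that the work existed in that form at that time.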

Used correctly, AI can also help brands run their brand protection programmes. AI technologies can help track IP assets and identify infringers or copyright issues, including those arising from AI platforms themselves. Such technology is an excellent complement to, but cannot yet fully replace, human advisors.
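
To make the monitoring point concrete, here is a minimal, assumption-laden sketch of how automated tooling might surface online listings that use a name confusingly similar to a protected mark. The brand name, listing titles and threshold are invented for the example, and real brand protection platforms layer image recognition, seller intelligence and human review on top of anything this simple.

```python
# Illustrative brand-monitoring sketch: scan listing titles (e.g. scraped from
# marketplaces) for names confusingly similar to a protected mark. All data
# and thresholds below are invented for the example.
from difflib import SequenceMatcher

PROTECTED_MARK = "Acme Luxe"   # hypothetical registered brand
MATCH_THRESHOLD = 0.75          # assumed sensitivity

LISTING_TITLES = [              # sample scraped data, invented
    "Acme Luxe leather handbag - genuine",
    "AcmeLux handbag, wholesale prices",
    "Generic canvas tote bag",
]


def suspicious_listings(titles: list[str]) -> list[tuple[str, float]]:
    """Return titles whose best sliding-window similarity to the mark exceeds the threshold."""
    hits = []
    mark = PROTECTED_MARK.lower()
    for title in titles:
        score = max(
            SequenceMatcher(None, mark, title.lower()[i:i + len(mark)]).ratio()
            for i in range(max(1, len(title) - len(mark) + 1))
        )
        if score >= MATCH_THRESHOLD:
            hits.append((title, round(score, 2)))
    return hits


if __name__ == "__main__":
    for title, score in suspicious_listings(LISTING_TITLES):
        print(f"Flag for review ({score}): {title}")
```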

There is still a long way to go in understanding the impact of AI on IP rightsholders. It is clear, however, that as AI technology continues to become more commonplace and widely used, brands that are proactive in addressing intellectual property issues will be better positioned to take advantage of the benefits of AI-generated content whilst minimising the potential risks.

——-

About the author

James is the Deputy Global Enforcement Head based in Rouse’s Guangzhou office. He advises leading multinationals and fast-growth companies on protecting, enforcing, and leveraging their intellectual assets. He works closely with brand, digital content, and technology owners across the electronics, luxury goods, apparel, FMCG (fast-moving consumer goods), industrial and software industries.

James’ practice areas are intelligence-led online/offline enforcement programmes, anti-counterfeiting, evidence gathering, dispute resolution, litigation management, online monitoring and takedown, domain name recovery, trade fair enforcement and customs protection measures.
