How Might Generative AI Impact IP Laws?

Towards the end of March 2023, Elon Musk was among the first signatories of an open letter from the Future of Life Institute, which has since gathered over 30,000 signatures, calling for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

The letter called for AI developers to collectively implement a system of self-regulation, through “a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” while simultaneously cooperating “with policymakers to dramatically accelerate development of robust AI governance systems.” In other words, it called for the regulation of AI developers, both bottom-up (through self-regulation) and top-down (via lawmakers). 

Chief among the concerns raised by the letter was the existential threat that unfettered AI development might pose to human life and civilisation. “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” it asked. “Should we risk loss of control of our civilization?”

There are, however, concerns over AI that, while less dramatic, are certainly more immediate. The letter alludes to the spread of misinformation and the technology’s encroachment on (meaningful) employment, but it doesn’t dive into related issues like intellectual property (IP) infringement.

Lawyers and AI labs are already grappling with the question of how AI relates to this field, and the outcomes of these legal wrangles will, in all likelihood, inform how AI is ultimately regulated – or, indeed, whether it is at all. 

The issues

While the latest wave of AI technology may be labelled “generative,” it is, paradoxically, derivative at its core. All AI models are trained on pre-existing inputs, most of which have human originators, and those originators hold legal protections over their content.

Legal issues therefore surround the production of AI-generated content. Take, for example, the cover of Beyoncé’s “Cuff It” that recently went viral, featuring AI-generated vocals from Rihanna.

If the creator of the track misleads listeners into believing that the vocal is genuine rather than AI-generated, they may be open to a “passing-off” claim, because they have (in the eyes of the law) attempted to pass the music off as being created by Rihanna when it wasn’t.

“If you’re creating a recording with the intention of misleading people into thinking it’s the real thing,” Alexander Ross, partner at UK law firm Wiggin, told Business Insider, “then that’s called a passing-off claim. You’re passing-off that as the original.”

Perhaps by way of warning to the track’s creator, Rihanna has form in pursuing passing-off claims. In 2015, she won a case against Topshop for selling T-shirts featuring an image of her taken from a photoshoot for her Talk That Talk album.

Then again, perhaps RiRi will take an approach more like that of Grimes, who recently gave creators open permission to clone her voice using AI in exchange for a 50% share of royalties.

Rihanna isn’t the only potentially infringed party in the “Cuff It” case, though.

“If [the creator has] pinched the instrumental, or part of, from the original Beyoncé recording,” said Ross, “that's copyright infringement in a number of ways.”

Recreating the backing track themselves could protect AI creators in this instance, he said, provided they adhere to the usual rules around covering music: notifying the artist (here, Beyoncé) of the cover, obtaining a mechanical licence, and paying her royalties.

It is for this reason that Universal Music Group has asked streaming services like Spotify and Apple Music to block AI developers from training models on its copyrighted catalogue.

“We will not hesitate to take steps to protect our rights and those of our artists,” the group told the platforms in emails.

A person close to these conversations told the Financial Times that generative AI “poses significant issues” over musical copyright. “You could say: compose a song that has the lyrics to be like Taylor Swift, but the vocals to be in the style of Bruno Mars, but I want the theme to be more Harry Styles. The output you get is due to the fact the AI has been trained on those artists’ intellectual property.”

There is also the question of who is liable when an AI breaches intellectual property rights: the person or company using the software, or the developer who trained it? Stability AI, for example, is currently being sued by Getty Images, which alleges that the company unlawfully copied 12 million of its photos to train its Stable Diffusion image-generation platform. Stable Diffusion has, according to the suit, even begun producing images that include Getty’s watermark.

The case for regulation

The UK government published the results of its consultation on the implications of advanced AI for IP law in June 2022.

At a high level, the report promised no changes to the law besides the introduction of a new copyright and database exception allowing text and data mining for any purpose, while protecting the rights of current content owners, “including a requirement for lawful access.”

However, the consultation predated the launch of the latest round of generative AI technology, notably GPT-4 (and ChatGPT in its universally available form). The consultation’s executive summary is full of caveats, such as “the use of AI is still in its early stages” and “we will keep this area of law under review to ensure that the UK patent system supports AI innovation.”

Might the advent of mass-market generative AI have tipped the balance?

Outgoing Government Chief Scientific Adviser Patrick Vallance appears to think so. His March 2023 Pro-innovation Regulation of Technologies Review recommended “that the government requires the IPO [Intellectual Property Office] to provide clearer guidance to AI firms as to their legal responsibilities, to coordinate intelligence on systematic copyright infringement by AI, and to encourage development of AI tools to help enforce IP rights.”

Overall, the report goes to some lengths to stress the balance that needs to be struck between encouraging innovation and protecting the rights of creatives. The danger, which the Future of Life letter highlighted and which Vallance’s report hints at in places, is that the technology evolves faster than regulators can move to strike that balance.

Elsewhere, governments have taken a more aggressive approach. Italy banned ChatGPT at the beginning of April over data protection and age verification concerns, giving OpenAI until the end of the month to address both issues.

The case against

Vallance’s report is, however, very clear on the need to embrace the potential of AI. It highlights the “urgent need” to overcome “the barriers faced by AI firms in accessing copyright and database materials,” and encourages lawmakers to “utilise existing protections of copyright and IP law on the output of AI.” In other words, it makes the case that existing laws will be sufficient to regulate AI, while also arguing that laws should reflect the need for access to (proprietary) content in order to train models.

AI developer Anthropic makes the point, in a blog post about the possible dangers of unfettered AI development, that “methods for detecting and mitigating safety problems may be extremely hard to plan out in advance.” The self-perpetuating momentum of AI’s development makes it difficult to know, today, what might need legislating against tomorrow. 

Anthropic’s blog post raises this point in the context of the general safety of AI systems, but it applies equally to IP legislation. How might legislators design specific laws, beyond those already in existence, to cover technological developments that haven’t yet taken place?

The balance

Clearly, a balance needs to be struck between the transformative power of AI and the existing rights of individuals and content creators. To what extent this requires new laws and regulations created specifically for AI, as opposed to simply ensuring that existing IP laws are reviewed and modernised to account for the possibilities of the technology, is probably one for the lawyers.

Either way, the technology is set to radically change our society and way of life over the coming years and decades, and we’re excited to see how this transformation pans out. How it will interact with AI-specific law remains an open question.

Might, for example, legislators decide that artists like Rihanna have the right to protect themselves against AI impersonation? Where might such a law sit alongside the current rules around cover versions of songs and other common music industry practices, like sampling?

This, fundamentally, gets to the heart of how fast AI is now catching up with human intelligence. Like “generative” AI, human creativity is itself derivative. No art is created in a vacuum; all human artists are “trained” on the work of those who have gone before them.

Current laws permit artists to imitate and even replicate each other’s work to a reasonable extent, in the knowledge that preventing them from doing so would stifle the production of future art. The question for lawmakers is: to what extent should the laws governing the way AI processes these inputs and produces new outputs differ from those that define how humans do so?
