Intellectual Property
A particularly important issue arising from the growing use of Generative AI is copyright and intellectual property. This applies on a number of levels. Performers may have concerns about the implications for their image rights, given the increasing ability to clone the voices and images of talent. This may become a key element in contracts:
Matt Deegan, Folder Media
The ability of Gen AI both to generate scripts and to clone actors was a key issue in the 2023 US writers’ and actors’ strikes. Content creators, meanwhile, are concerned about their work being used to train Gen AI models that can then compete directly against them.
One option for content creators is to block the use of their content for training Gen AI models; another is to enforce some kind of licensing model. The Guardian newspaper and the BBC, for example, have taken steps to block their content, while X (Twitter) claims that moving behind a paywall may be the only way to stop the army of AI crawlers slowing down the service.
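In practice, this kind of blocking is usually done through a site’s robots.txt file. The fragment below is an illustrative sketch, not any particular publisher’s actual file: it names two real, documented AI crawlers (OpenAI’s GPTBot and Common Crawl’s CCBot), but which agents a given publisher lists is an assumption, and compliance with robots.txt is voluntary on the crawler’s part.

```
# robots.txt (illustrative sketch) — asks named AI crawlers to stay out
User-agent: GPTBot      # OpenAI's web crawler
Disallow: /

User-agent: CCBot       # Common Crawl's crawler; its corpus is widely used for training
Disallow: /
```

Because robots.txt is only a polite request, publishers who want harder guarantees tend to pair it with paywalls or IP-level blocking, which is the route X alludes to above.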
The big question is how realistic attempts to block access to, or monetize, data as an input to Gen AI can be. The first issue is that so much is already available on the open web that any single source will not necessarily be seen as essential. There is also skepticism about whether media companies are really sitting on a treasure trove of unique and valuable data that is not available elsewhere.
Some content is very obviously labeled and its IP enforceable, but much of it gradually bleeds into, and becomes indistinguishable from, general popular culture: the zeitgeist, a sort of de facto collective ownership.
Meanwhile, there is the issue of proving that your content has contributed, directly or indirectly, to a new creation. Copyright law places great emphasis on the idea of a ‘creative step’: that a piece of work can be sufficiently transformed to be deemed a new creation, as with the works of the artist Roy Lichtenstein, which were ‘inspired by’ images from 1960s comic books.
In this sense, it could be argued that we may be unable to hold Gen AI programs to higher standards than we apply to ourselves:
Moritz Fries, ProSiebenSat.1 Media SE
Scott Thompson, Publicis
Graeme Griffiths, IPA
One initiative that may have more success is the idea that content generated by AI should be labeled as such, so that users are at least aware of how it was created. A number of major players, including the BBC and Microsoft, are part of the Content Authenticity Initiative, which argues that content should be labeled in its metadata as AI-produced, along with what steps were taken to generate it. This initiative could prove useful, but would of course be limited to areas of activity that are regulated and ‘above board’, as opposed, for example, to political deepfake videos that are deliberately designed to mislead. For unregulated content, the irony is that it will most likely be AI programs that are trained to recognize AI-generated or altered content.
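To make the metadata idea concrete, the sketch below builds a minimal provenance record of the kind such labeling implies. The field names here are hypothetical illustrations, not the actual Content Authenticity Initiative (C2PA) schema, which defines its own signed manifest format.

```python
import json

# Illustrative sketch only: these field names are hypothetical and do not
# follow the real Content Authenticity Initiative / C2PA schema.
def make_provenance_record(tool, prompt_used=True):
    """Build a minimal record flagging AI involvement in a piece of content."""
    return {
        "ai_generated": True,          # the content was machine-generated
        "generator": tool,             # which model or tool produced it
        "prompt_driven": prompt_used,  # whether a human prompt steered the output
    }

record = make_provenance_record("example-image-model")
# In a real system this would be embedded in the asset's metadata or
# shipped alongside it; here we just serialize it for illustration.
sidecar = json.dumps(record, indent=2)
```

In the real initiative, such records are cryptographically signed so that the provenance claim itself cannot be quietly altered, which is what distinguishes it from a plain, easily stripped metadata tag.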