Blocking companies like OpenAI, Google, and Meta from training AI on UK copyright-protected material could undermine model quality and carry an economic cost, policy experts warn. They say such restrictions would lead to bias in model outputs, undermining their effectiveness, while rights holders would be unlikely to receive the level of compensation they hope for.
The British government opened a consultation in December 2024 exploring ways to protect the rights of artists, writers, and composers when creative content is used to train AI models. It described a system that would allow AI developers to use online content for training unless the rights holder has explicitly opted out.
Organisations representing the creative industries have largely rejected this proposal because it puts the burden on creators to exclude their content rather than requiring AI developers to seek consent. Technology companies were not keen on it either, arguing that the system would make it difficult to determine which content they could legally use, restrict commercial applications, and demand excessive transparency.
During a recent webinar hosted by the Center for Data Innovation think tank, three policy experts explained why they believe any solution short of a full text and data mining exemption in UK copyright law risks producing ineffective AI systems and stifling innovation.
Opt-out regimes could lead to poorly trained AI and minimal income for rights holders
Benjamin White, founder of the copyright reform advocacy group Knowledge Rights 21, argued that regulations on AI training will affect far more than the creative industries. Since copyright is used to stimulate investment by protecting intellectual property, he said the broader economic impact of any restriction should also be taken into account. “The rules that affect singers affect scientists, and the rules that affect clinicians also affect composers.”
He added that the scientific community is “very concerned about the framing of the consultation”, noting that it neglects the potential benefits of knowledge sharing in advanced university research, which in turn offers widespread advantages for society and the economy.
White said: “The existing exception does not allow universities to share training data or analysed data with other universities in collaborative partnerships, and does not allow NHS trusts to share copyrighted data from lawful sources, such as articles or documents scraped from the web.”
SEE: Why artists hate AI art
Bertin Martens, senior researcher at the economic think tank Bruegel, added: “I think the media industries want to have their cake and eat it too. They use all these models to increase their own productivity at the moment, and they benefit from good quality models, but by withholding their data from training, they reduce quality … so they are cutting into their own flesh.”
If AI developers signed licensing agreements only with consenting publishers or rights holders, the data their models are trained on would be biased, according to Martens. “Obviously, even the major AI companies will not sign licences along this long tail of small publishers,” he said. “It is far too expensive in terms of transaction costs, it is not feasible, and so we end up with biased models built on partial information.”
Julia Willemyns, co-founder of the UK Day One technology policy research project, said an opt-out regime would be unlikely to prove effective in practice, because jurisdictions with less restrictive laws would still allow access to the same content for training. Blocking access to outputs from those jurisdictions would ultimately deprive the United Kingdom of the best available models, she warned, saying it would “slow down the technology” and have “negative productivity effects”.
SEE: The UK government publishes an AI Action Plan
Furthermore, artists are unlikely to earn significant income from AI licensing agreements. “The problem is that each individual piece of data is not worth much to the models; these models work at enormous scale,” said Willemyns. Even if licensing regimes were applied worldwide and rights-protected content could only be used with explicit legal consent, the economic benefit to creators would still be “probably very, very minimal”. “So we are trading off economic effects for the country against a positive that seems very negligible,” she said.
Willemyns added that overcomplicating the UK’s copyright approach, for example by requiring separate regimes for AI training on scientific and creative material, could create legal uncertainty. This would overload the courts, deter adoption by businesses, and risk forfeiting the productivity gains of AI. A text and data mining exemption, by contrast, would guarantee simplicity.
ChatGPT’s Studio Ghibli controversy highlights the blurred lines in AI creativity
The debate over protecting artists versus enabling innovation also surfaced last month in a controversy involving AI-generated art in the style of Studio Ghibli, the Japanese animation house behind “Spirited Away” and “My Neighbor Totoro”. Critics argued that it amounted to appropriating a distinctive artistic style without permission, and OpenAI eventually introduced a refusal mechanism that activates when users try to generate images in the style of a living artist.
The panel disagreed with this approach. Willemyns said that Studio Ghibli’s parent company “clearly benefited” when the increased attention drove more people to watch its films. “I feel like the arguments that AI slop will not really displace the content were somewhat reaffirmed by this episode,” she said. Martens agreed, arguing that “if there are many Ghibli lookalikes being produced, it increases competition around a popular product, and that is something we should welcome.”
SEE: UK public sector AI revisited
White added that cartoons in Ghibli’s artistic style are produced by many different Japanese studios. “They are all big-eyed, Western-looking people; that is style,” he said. “Style is not protected by copyright; what copyright law protects against is substantial similarity.”
Martens noted that how close an AI-generated work can come to an original is “up to the courts”, and can only be determined on a case-by-case basis. Ultimately, the panel agreed that models should not be able to reproduce training content directly, but that training on publicly accessible material should remain permitted. “Having flexibility over how systems are built and how the technology learns from publicly accessible content is probably the best way forward,” said Willemyns.