Meta’s former global affairs chief, Nick Clegg, believes that requiring tech companies to obtain consent from rights holders before training large language models with copyrighted material would “kill” the UK’s artificial intelligence industry.
“I just don’t know how you go around, asking everyone first. I just don’t see how that would work,” he said at a promotional event for his new book titled “How to Save the Internet,” according to The Times. “And by the way if you did it in Britain and no one else did it, you would basically kill the AI industry in this country overnight.”
Clegg, who is also a former deputy prime minister of the UK, said it was “implausible” to seek permission from every artist, given the vast scale of the data used to train AI models; however, he supported giving creators the option to opt out of having their work included.
New data bill amendment would require AI model transparency
Clegg’s remarks pertain to an amendment to the UK’s Data (Use and Access) Bill, which is currently making its way through Parliament.
The amendment would require operators of AI models “to disclose information regarding text and data used in the pre-training, training and fine-tuning of general-purpose AI models.” It was introduced by Baroness Beeban Kidron, a crossbench peer and AI ethics expert, in January during the House of Lords report stage.
The government rejected the amendment in May, citing concerns over the potential costs of implementing such transparency measures.
When the bill was passed back to the House of Lords, Kidron’s amendment was reinserted with slightly softer language but rejected again in the House of Commons. Technology Secretary Peter Kyle told the Commons that “growing Britain’s economy needs both (AI and creative) sectors to succeed and to prosper,” according to PA.
A few days later, an open letter signed by hundreds of creatives — including Paul McCartney, Elton John, and Dua Lipa — urged the UK government to uphold stronger copyright protections in the AI era and support the Lords’ amendment. The House of Lords is expected to respond again on June 2.
Tech companies, artists, and policy experts remain divided on how copyright and AI should coexist
The debate over Kidron’s amendment follows a UK government proposal to reform copyright law, allowing AI developers to train their models on creators’ online content by default unless rights holders explicitly opt out. Bodies representing the creative industries largely rejected this proposal, as it put the onus on creators to exclude their content rather than requiring AI developers to seek consent.
Tech companies didn’t like it either, claiming that the system would make it difficult to determine which content they could legally use. They would prefer a broader exception that allows all copyrighted material to be used to train commercial models without the rights holder’s permission. Policy experts have also warned that allowing some creators to opt out would result in biased models.
AI models are currently trained on vast public datasets that often include copyrighted content without explicit permission, operating in a legal grey area, as most countries lack laws that directly address this type of data use. Nevertheless, tech companies like OpenAI and Meta are facing multiple lawsuits where rights holders are challenging the legality of using their work without consent to train commercial AI systems.