AI should be regulated like medicine and nuclear power: UK Labour Party MP

Tuesday, 6 Jun 2023

Cointelegraph By Jesse Coghlan


A front-bench member of the United Kingdom’s opposition Labour Party says artificial intelligence should be licensed and regulated similarly to the pharmaceutical and nuclear industries.


Lucy Powell, a member of parliament for the United Kingdom’s Labour Party and front-bench spokesperson serving as the party’s shadow secretary of state for digital, culture, media and sport, told The Guardian on June 5 that firms such as OpenAI and Google, which have created AI models, should “have to have a license in order to build these models,” adding:

“My real point of concern is the lack of any regulation of the large language models that can then be applied across a range of AI tools, whether that’s governing how they are built, how they are managed or how they are controlled.”

Powell argued that regulating the development of certain technologies is a better option than banning them, similar to how the European Union banned facial recognition tools.

She added that AI “can have a lot of unintended consequences,” but said the government could mitigate some risks if developers were forced to be open about their AI training models and datasets.

“This technology is moving so fast that it needs an active, interventionist government approach, rather than a laissez-faire one,” she said.

“Ahead of speaking at the TechUK conference tomorrow, I spoke to the Guardian about Labour’s approach to digital tech and AI.”

— Lucy Powell MP (@LucyMPowell), June 5, 2023

Powell also believes such advanced technology could greatly impact the U.K. economy, and the Labour Party is reportedly finalizing its own policies on AI and related technologies.

Next week, Labour leader Keir Starmer is planning to hold a meeting of the party’s shadow cabinet at Google’s U.K. offices so it can speak with the company’s AI-focused executives.

Related: EU officials want all AI-generated content to be labeled

Meanwhile, Matt Clifford, the chair of the Advanced Research and Invention Agency — the government’s research agency set up last February — appeared on TalkTV on June 5 to warn that AI could threaten humans in as little as two years.

“EXCLUSIVE: The PM’s AI Task Force adviser Matt Clifford says the world may only have two years left to tame Artificial Intelligence before computers become too powerful for humans to control.”

— TalkTV (@TalkTV), June 5, 2023

“If we don’t start to think about now how to regulate and think about safety, then in two years’ time we’ll be finding that we have systems that are very powerful indeed,” he said. Clifford clarified, however, that a two-year timeline is the “bullish end of the spectrum.”

Clifford highlighted that today’s AI tools could be used to help “launch large-scale cyber attacks.” OpenAI has put forward $1 million to support AI-aided cybersecurity technology aimed at thwarting such uses.

“I think there’s lots of different scenarios to worry about,” he said. “I certainly think it’s right that it should be very high on the policymakers’ agendas.”
