According to a spokesperson for Britain’s main opposition party, developers of artificial intelligence should be subject to the same licensing and regulation as the pharmaceutical, medical, and nuclear industries.
The Labour Party’s Lucy Powell told The Guardian on June 5 that companies such as OpenAI and Google, which have developed AI models, should “have to have a license to build these models.”
AI Models
Lucy Powell, a member of the Labour Party, believes that artificial intelligence developers should face the same licensing and regulation as the medical, nuclear, and pharmaceutical industries. In her view, the use of large language models across different AI tools should be regulated, and companies like OpenAI and Google should be required to obtain a license before building AI models.
Powell argues that regulating the development of certain technologies is a better option than banning them outright, as the European Union has done with facial recognition. She believes the government could reduce some of the risks if developers were required to disclose their AI training models and datasets.
- Labour’s Lucy Powell advocates licensing and regulation for developers of large language models.
- Powell favors regulating the development of such technologies and reducing risks through government oversight.
- Keir Starmer plans to meet AI executives; a government adviser warns of near-term AI dangers.
The Labour Party is finalizing its policies on artificial intelligence and related technologies. According to Powell, the technology could have a major impact on the UK economy.
Labour leader Keir Starmer plans to hold a meeting of the party’s shadow cabinet at Google’s U.K. offices, where he will speak with executives working in artificial intelligence. Separately, Matt Clifford, chair of the Advanced Research and Invention Agency, has warned that AI could pose a threat to humans in as little as two years.
Clifford says a two-year timescale sits at the “bullish end of the spectrum,” and warns that AI capabilities could be used to carry out large-scale cyberattacks. OpenAI has contributed $1 million to support AI-enhanced cybersecurity technology.