Labour Plans to Force AI Companies to Share Test Results

The Labour Party is planning to introduce mandatory regulations that would require AI companies to share test data on their technology. This comes after the party warned that regulators and policymakers are failing to effectively control social media platforms. Labour intends to replace the voluntary agreement between tech companies and the government with a statutory regime, under which AI companies would be obliged to share test data with officials.

Peter Kyle, Labour's shadow minister for technology, said that lawmakers and regulators had been “falling behind” social media and pledged that Labour would avoid repeating that mistake with AI. He called for greater transparency from tech companies following the murder of Brianna Ghey, saying that companies developing AI technology would have to be more open under a Labour government.

“We are moving from a voluntary code to a statutory code,” Kyle said during the “Sunday with Laura Kuenssberg” program on BBC One. “The companies conducting this kind of research and development would have to provide all test data and tell us what they are testing, so that we can see exactly what is happening and where this technology is leading us.”

In November, during the inaugural global AI Safety Summit, Rishi Sunak reached a voluntary agreement with leading AI companies, including Google and the maker of ChatGPT, OpenAI, to collaborate on testing advanced AI models before and after deployment. Under Labour's proposals, AI companies would have to inform the government when they plan to develop AI systems above a specified level of capability and carry out safety tests under “independent supervision”.

The testing agreement during the AI summit was supported by the European Union and 10 countries, including the United States, the United Kingdom, Japan, France, and Germany. Tech companies that agreed to test their models included Google, OpenAI, Amazon, Microsoft, and Meta, led by Mark Zuckerberg.

Kyle, who is visiting parliamentarians and tech executives in Washington, said that the test results would help the newly established AI Safety Institute in the UK “reassure society with its independent oversight of what is happening in the most advanced areas of artificial intelligence.”

He added, “Some of these technologies will have a huge impact on our work, society, and culture. We need to ensure that they are developed in a safe manner.”

Source: toumai.es