The EU has reached a historic regulatory agreement over AI development

Following a marathon 72-hour debate, European Union legislators on Friday reached a historic deal on the bloc's expansive AI Act, the broadest-ranging and most far-reaching AI safety legislation of its kind to date, reports The Washington Post. Details of the deal itself were not immediately available.

“This legislation will represent a standard, a model, for many other jurisdictions out there,” Dragoș Tudorache, a Romanian lawmaker co-leading the AI Act negotiation, told The Washington Post, “which means that we have to have an extra duty of care when we draft it because it is going to be an influence for many others.”

The proposed regulations would dictate the ways in which future machine learning models could be developed and distributed within the trade bloc, impacting their use in applications ranging from education to employment to healthcare. AI development would be split among four categories depending on how much societal risk each potentially poses: minimal, limited, high, and banned.

Banned uses would include anything that circumvents the user’s will, targets protected social groups or provides real-time biometric tracking (like facial recognition). High-risk uses would include anything “intended to be used as a safety component of a product,” as well as applications in defined areas like critical infrastructure, education, legal and judicial matters, and employee hiring. Chatbots like ChatGPT, Bard and Bing would fall under the “limited risk” category.

“The European Commission once again has stepped out in a bold fashion to address emerging technology, just like they had done with data privacy through the GDPR,” Dr. Brandie Nonnecke, Director of the CITRIS Policy Lab at UC Berkeley, told Engadget in 2021. “The proposed regulation is quite interesting in that it is attacking the problem from a risk-based approach,” similar to what’s been suggested in Canada’s proposed AI regulatory framework.

Ongoing negotiations over the proposed rules had been disrupted in recent weeks by France, Germany and Italy. The three countries were stonewalling talks over the rules governing how EU member nations could develop foundational models, generalized AIs from which more specialized applications can be fine-tuned. OpenAI’s GPT-4 is one such foundational model, as ChatGPT, GPTs and other third-party applications are all built on its base functionality. The trio worried that stringent EU regulations on generative AI models could hamper member nations’ efforts to competitively develop them.

The EC had previously addressed the growing challenges of managing emerging AI technologies through a variety of efforts, releasing both the first European Strategy on AI and the Coordinated Plan on AI in 2018, followed by the Guidelines for Trustworthy AI in 2019. The following year, the Commission released a White Paper on AI and a Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics.

“Artificial intelligence should not be an end in itself, but a tool that has to serve people with the ultimate aim of increasing human well-being,” the European Commission wrote in its draft AI regulations. “Rules for artificial intelligence available in the Union market or otherwise affecting Union citizens should thus put people at the centre (be human-centric), so that they can trust that the technology is used in a way that is safe and compliant with the law, including the respect of fundamental rights.”

“At the same time, such rules for artificial intelligence should be balanced, proportionate and not unnecessarily constrain or hinder technological development,” it continued. “This is of particular importance because, although artificial intelligence is already present in many aspects of people’s daily lives, it is not possible to anticipate all possible uses or applications thereof that may happen in the future.”

More recently, the EC has begun collaborating with industry members on a voluntary basis to craft internal rules that would allow companies and regulators to operate under the same agreed-upon ground rules. “[Google CEO Sundar Pichai] and I agreed that we cannot afford to wait until AI regulation actually becomes applicable, and to work together with all AI developers to already develop an AI pact on a voluntary basis ahead of the legal deadline,” European Commission industry chief Thierry Breton said in a May statement. The EC has entered into similar discussions with US-based corporations as well.


