EU lawmakers expect to approve a draft of the Artificial Intelligence (AI) regulations in March and aim to reach a deal with EU countries by the end of the year, Reuters quoted one of the legislators as saying.

In 2021, the European Commission put forward AI regulations aimed at promoting innovation and establishing a worldwide standard for the technology, covering areas such as autonomous vehicles, chatbots, and automated factories, fields currently dominated by China and the US.

“We are still in good time to fulfill the overall target and calendar that we assumed in the very beginning, which is to wrap it up during this mandate,” said Dragos Tudorache, member of the European Parliament and co-rapporteur of the EU AI Act.

Concerns about potential risks

The Act was originally slated for approval a year ago, and the delay has drawn criticism and scepticism, with lawmakers accused of not taking the risks of AI seriously. Companies in the industry, meanwhile, argue that such an Act could stifle innovation.

“It took slightly longer than I initially thought, this text has seen a level of complexity that is even higher than the typical Brussels complex machinery,” said Tudorache.

The debate includes how to define “general purpose AI”: some consider it high risk, while others want stricter rules for chatbots such as ChatGPT because of their potential risks.

“During this year alone, we are going to see some exponential leaps forward not only for ChatGPT but for a lot of other general purpose machines,” said Tudorache, referring to lawmakers’ efforts to write basic principles on what makes general purpose AI such a distinct type of AI.

ChatGPT grabbed attention

Although a draft of the AI rules was introduced last year, the popularity of ChatGPT has pushed the regulation up lawmakers’ priority list. Launched only in November, ChatGPT made history by reaching 100 million users within two months.

EU industry chief Thierry Breton said the risks posed by ChatGPT and AI systems underscored the urgent need for rules.

“As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks. This is why we need a solid regulatory framework to ensure trustworthy AI built on high-quality data,” said Breton.

Breton is seeking co-operation from OpenAI and developers of high-risk AI systems to ensure compliance with the proposed AI Act.

The proposed AI Act will aim to tackle concerns such as those raised by the introduction of ChatGPT.

“I think if that will be the effect of this Act, then we will be severely missing our objective. And we haven’t done our jobs if that’s what’s going to happen,” said Tudorache.

However, critics of the legislation have said such a move could lead to increased costs and greater compliance pressure for companies, throttling innovation.

This article originally appeared on MetaNews.
