The IT sector has been rocked by advances in artificial intelligence this year, prompting calls from politicians, consumer advocacy organizations, and even AI professionals themselves for regulations governing how the technology should be used.

In Europe, at least, those restrictions are starting to take shape. On Wednesday, the European Union’s parliament approved the AI Act, which is slated to be the first comprehensive set of AI regulations in the West.

The proposed rules would ban real-time, remote biometric surveillance and forbid harvesting surveillance data or scraping the internet to build facial recognition databases. The parliament’s version would also prohibit so-called “predictive policing,” in which past criminal behavior and other data are analyzed in an attempt to predict future illicit activity.

More generally, the proposed law seeks to govern how businesses train AI models on massive data sets. In some circumstances, businesses would be required to disclose when material has been produced using AI.

Companies would also be required to provide summaries of the copyrighted data used to train their AI models and to design their systems so that they cannot generate illegal content.

Such an obligation would give publishers and content creators a potential avenue to pursue a share of the revenue when their works are used as the basis for content generated by tools like ChatGPT.

Under the bill’s current drafts, a firm could in certain circumstances be fined up to 6% or 7% of its global revenue; for a company with, say, €10 billion in annual global revenue, that would mean a maximum penalty of roughly €600 million to €700 million.

Tech corporations and their lobbyists contend that government regulations should focus on specific applications of AI rather than, as proposed in Europe, placing broad restrictions on how AI is built. Such restrictions, they argue, would stifle innovation.

Other technologists, however, have joined academics in endorsing rules like those being drawn up in the EU, which could effectively slow the race among businesses to release cutting-edge AI capabilities by governing how such tools are developed in the first place.

Elon Musk was among a group of tech leaders and AI researchers who earlier this year signed an open letter calling for a six-month pause on the development of the next generation of AI tools, to give industry and regulators time to establish safety guidelines. Last month, a group of scholars said that mitigating the risk of human extinction from AI should be a global priority.

The European Commission, the EU’s executive arm, proposed the AI legislation in 2021. The effort to define AI rules has become more urgent in recent months with the emergence of tools like ChatGPT, a program created by the Microsoft-backed startup OpenAI that responds to users’ text prompts.

Officials in Europe hope the proposed legislation will be a world first and serve as a model for other countries and for the businesses that produce and use the technology.

The rapid growth of AI in recent months has prompted governments around the world to consider new regulations for powerful AI tools. China’s main internet regulator put forward draft regulations in April, and the Biden administration is evaluating whether additional checks are required.

The Computer & Communications Industry Association, a lobbying group, said several of the proposals backed by the parliament risk imposing overly strict rules on relatively low-risk AI technologies and stifling innovation. Consumer advocacy groups, by contrast, say the parliament’s proposed bans are necessary to safeguard people’s fundamental rights.

EU officials have sought to position themselves as leaders in setting safeguards for AI systems that, in their view, should encourage innovation while limiting the technology’s greatest hazards.