China and Europe lead the push to regulate AI

A robot plays piano at the Apsara Conference, a conference on cloud computing and artificial intelligence, in China, on October 19, 2021. As China refreshes its technology rulebook, the European Union is working on its own regulatory framework to curb AI but has yet to make it over the finish line.

STR | AFP | Getty Images

As China and Europe move to rein in artificial intelligence, a new front is opening over who will set the standards for the burgeoning technology.

In March, China introduced regulations on how online recommendations are generated through algorithms suggesting what to buy, watch or read.

It’s the latest salvo in China’s tightening grip on the tech sector and sets a significant marker in how AI is regulated.

“It was a surprise to some people that China started drafting AI regulation last year. It is one of the first major economies to put it on the regulatory agenda,” Xiaomeng Lu, director of the geotechnology practice at Eurasia Group, told CNBC.

As China revamps its technology rulebook, the European Union is working on its own regulatory framework to rein in AI, but has yet to make it over the finish line.

With two of the world’s largest economies rolling out AI regulation, the landscape for AI development and business worldwide could be about to undergo a significant change.

A global blueprint from China?

At the heart of China’s latest policies are online recommendation systems. Companies must inform users if an algorithm is used to display certain information to them, and people can choose not to be targeted.

Lu said this is an important shift as it gives people more control over the digital services they use.

Those rules come amid a changing environment in China for its largest internet companies. Several of China’s domestic tech giants – including Tencent, Alibaba and ByteDance – have been in trouble with authorities, particularly over antitrust.

I see China’s AI regulations and the fact that they’re going first as essentially conducting some large-scale experiments that the rest of the world can watch and potentially learn from.

Matt Sheehan

Carnegie Endowment for International Peace

“I think those trends have changed the government’s stance on this quite a bit, to the extent that they’re going to look at other questionable market practices and algorithms that promote services and products,” Lu said.

China’s measures are remarkable given how quickly they were implemented, compared to the timeframes that other jurisdictions typically operate on when it comes to regulation.

China’s approach could provide a roadmap that influences other international laws, said Matt Sheehan, a fellow with the Asia Program of the Carnegie Endowment for International Peace.

“I see China’s AI regulations and the fact that they’re going first, essentially as conducting some large-scale experiments that the rest of the world can watch and potentially learn from,” he said.

Europe’s approach

The European Union is also working on its own rules.

The AI Act is the next big piece of tech legislation on the agenda in a busy couple of years.

In recent weeks, it has finalized negotiations on the Digital Markets Act and the Digital Services Act, two key regulations that will curb Big Tech.

The AI Act seeks to impose a comprehensive framework based on the level of risk, which will have far-reaching implications for the products a company brings to market. It defines four risk categories for AI: minimal, limited, high and unacceptable.

France, which holds the rotating presidency of the EU Council, has given national authorities new powers to check AI products before they hit the market.

Defining these risks and categories has proved fraught at times, with MEPs calling for a ban on facial recognition in public places to limit its use by law enforcement. The European Commission, however, wants to make sure it can be used in investigations, while privacy activists fear it will increase surveillance and erode privacy.

Sheehan said that while China’s political system and motivations will be “totally anathema” to lawmakers in Europe, the technical goals of both sides are very similar — and the West should pay attention to how China implements them.

“We don’t want to mimic any of the ideological or speech controls that are being deployed in China, but some of these issues on a more technical side are similar across jurisdictions. And I think the rest of the world should look at what’s happening in China from a technical perspective.”

China’s efforts are more prescriptive, he said, and they include algorithm recommendation rules that could curb tech companies’ influence on public opinion. The AI Act, on the other hand, is a broad effort to bring all AI under one regulatory roof.

Lu said the European approach will be “heavier” for businesses as it requires a premarket assessment.

“That’s a very restrictive system compared to the Chinese version. They basically test products and services in the market, rather than assessing them before those products or services are introduced to consumers.”

‘Two different universes’

Seth Siegel, global head of AI at Infosys Consulting, said these differences could create a schism in how AI evolves on the global stage.

“If I try to design mathematical models, machine learning and AI, I will take fundamentally different approaches in China than in the EU,” he said.

At some point, China and Europe will dominate the way AI is controlled, creating “fundamentally different” pillars on which the technology can develop, he added.

“I think we’re going to see the techniques, approaches and styles start to diverge,” Siegel said.

Sheehan disagrees that the world’s AI landscape will shatter as a result of these different approaches.

“Companies are getting better at tailoring their products to different markets,” he said.

The greater risk, he added, is that researchers will end up siloed in different jurisdictions.

AI research and development crosses borders, and researchers everywhere have a lot to learn from one another, Sheehan said.

“If the two ecosystems sever the ties between technologists, if we ban communication and dialogue from a technical perspective, then I’d say that poses a much bigger threat of having two different universes of AI that could end up being pretty dangerous in how they interact with each other.”
