
Who wants to regulate artificial intelligence?

China and the US have chosen to avoid detailed regulations, hoping that their companies can develop without undue restrictions and that their standards, even voluntary ones, will become a guide for other countries and companies. | Photo: Possessed Photography on Unsplash


In the winter of 2016, Google Nest, the home automation company, pushed a software update to its thermostats that damaged their batteries. A large number of users were left without heating, although many were able to replace the batteries, buy a new thermostat, or wait for Google to fix the problem. The company indicated that the failure may have been caused by the artificial intelligence (AI) system that managed these updates.

What would happen if a majority of the population used one of these thermostats and, because of a malfunction, half the country was left in the cold for several days? A technical problem would turn into a social emergency requiring state intervention. All because of a faulty artificial intelligence system.

No jurisdiction in the world has yet developed comprehensive, specific regulation of the problems generated by artificial intelligence. This does not mean there is a complete legislative vacuum: there are other ways to respond to the many types of damage that artificial intelligence can cause.

For example:

  1. In the case of accidents caused by self-driving cars, insurance companies will still be the first recipients of claims.

  2. Companies that use artificial intelligence systems for their job selection processes can be held liable if discriminatory practices are used.

  3. Insurers that use anti-consumer methods, based on the analysis generated by their artificial intelligence models, to set prices and decide whom to insure will still be answerable as companies.

In general, other existing rules, such as contract law, transport law, tort law, consumer law, and even human rights law, will adequately cover many of the regulatory needs of artificial intelligence.

Yet this does not seem sufficient. There is some consensus that the use of these systems will create problems that our legal systems cannot easily resolve. From the distribution of responsibility between developers and professional users to the scale at which losses can occur, AI systems defy our legal logic.

For example, if an artificial intelligence system finds illegal information on the deep web and makes investment decisions based on it, should the bank that manages pension funds, or the company that created the automated investment system, be held responsible for these illegal investment practices?

If an autonomous community decides to introduce a co-payment for medical prescriptions managed by an artificial intelligence system, and this system makes small errors (for example, a few cents on each prescription) that affect almost the entire population, who is responsible for the lack of initial oversight? The administration? The contractor that installed the system?

Towards a European (and global) regulatory system

Since the European Union submitted its proposal for a regulation on artificial intelligence in April 2021, the so-called AI Act, a slow legislative process has been under way that should give us a regulatory system for the entire European Economic Area, and possibly Switzerland, by 2025. The first steps are already being taken, with government agencies being designated to exercise part of the control over the system.

What about outside the EU? Who else wants to regulate artificial intelligence?

On these issues, we tend to look to the US, China and Japan, and we often assume that legislation is just a matter of degree: more or less environmental protection, more or less consumer protection. In the context of artificial intelligence, however, it is surprising how different lawmakers' views are.

United States of America

In the United States, the main piece of AI legislation is a rule of limited substantive content, more related to cybersecurity, which relies instead on indirect methods of regulation, such as the creation of standards. The central idea is that the standards developed to control the risks of artificial intelligence systems will be voluntarily adopted by companies and become de facto standards.

To retain some control over these standards, rather than leaving them to the organizations that typically develop technical standards and that are controlled by the companies themselves, the risk management standards for AI systems are being developed by a federal agency, NIST (the National Institute of Standards and Technology).

Thus, the United States is engaged in a standard-setting process that is open to industry, consumers and users. This is now being followed by the White House's AI Bill of Rights project, also voluntary in nature. At the same time, many states are trying to develop legislation for specific contexts, such as the use of artificial intelligence in hiring processes.

China

China has developed a complex plan not only to lead the development of artificial intelligence, but also its regulation.

To do this, it combines:

  1. Regulatory experiments (some provinces may develop their own regulations, for example to promote the development of autonomous driving).

  2. Development of standards (with a comprehensive plan covering more than thirty sub-sectors).

  3. Strict regulation (for example, of Internet recommendation mechanisms, to avoid recommendations that could disturb public order).

Through all of these avenues, China is committed to regulatory control of artificial intelligence that does not hinder its development.

Japan

In Japan, on the other hand, they don’t seem particularly concerned about the need to regulate artificial intelligence.

Instead, they hope that their tradition of partnerships between government, companies, workers and users will prevent the worst problems that artificial intelligence can cause. At the moment, they are orienting their policies towards the development of Society 5.0.

Canada

Perhaps the most advanced country in terms of regulation is Canada. There, for two years now, every artificial intelligence system used in the public sector has had to undergo an impact assessment that anticipates its risks.

For the private sector, the Canadian legislature is now debating a rule similar to, though much simpler than, the European one. A similar process was launched last year in Brazil; although it seemed to have lost momentum, it may now be revived after the election.

From Australia to India

Other countries, from Mexico to Australia, including Singapore and India, are holding back.

These countries seem confident that their current rules can be adapted to prevent the worst damage that artificial intelligence can cause, and are allowing themselves to wait and see what happens with other initiatives.

Two games in play

Within this legislative diversity, two games are being played.

The first is between those who argue that it is too early to regulate a disruptive, and still poorly understood, technology such as artificial intelligence, and those who would prefer a clear regulatory framework that addresses the major concerns while providing legal certainty for developers and users.

The second game, and perhaps the more interesting one, is the competition for the role of de facto global regulator of artificial intelligence.

The European Union's bet is clear: to be the first to create rules that bind anyone who wants to sell their products in its territory. The success of the General Data Protection Regulation, now a global benchmark for technology companies, encourages European institutions to follow this model.

Faced with this, China and the United States have chosen to avoid detailed regulations, hoping that their companies can develop without excessive restrictions and that their standards, even voluntary ones, will become a guide for other countries and companies.

For now, time is working against Europe. The United States will publish the first version of its standards in the coming months; the European Union will not have applicable legislation for another two years. Perhaps excessive European ambition will come at a price inside and outside the continent, creating rules that, by the time they come into force, will already have been overtaken by others.

José Miguel Bello y Villarino, ARC CoE/Diplomat Fellow (on leave), University of Sydney

This article was originally published on The Conversation. Read the original.

We recommend METADATA, RPP's technology podcast: news, analysis, reviews, recommendations and everything you need to know about the world of technology.

Source: RPP
