New Report Finds Strong Support In Favour Of Regulating AI

November 15, 2021
A new report has found that a large majority of technology policy experts and influencers believe the ethical use of artificial intelligence (AI) should be regulated.

In a report titled "Our Relationship with AI: Friend or Foe - A Global Study", Clifford Chance and Milltown Partners discuss the overall attitudes to technology and AI and find that 78 percent of tech experts surveyed believe that the ethical use of AI should be a priority for new legislation or regulation.

“Across all countries, there is a clear desire to build an appropriate legal framework and regulate AI. The envisaged pathways for a regulatory response to AI differ, with a wide spectrum of available options: from a global regulatory framework such as the upcoming EU AI Act, through soft law guiding principles, as well as sector specific standards to deal with each industry’s specific needs,” Dessi Savova, a partner at Clifford Chance, said in the report.

In terms of regulation, the respondents view self-regulation as a positive step forward, but insufficient on its own, the report says. Nearly half of them (46 percent) see industry self-regulation as a positive first step, and there seems to be strong support for a sector-by-sector regulatory approach (62 percent).

In April 2021, the European Commission proposed a new legal framework to govern the use of AI across the EU. It proposes a risk-based approach whereby the uses of the technology are categorised and restricted depending on whether they are of an unacceptable, high or low risk to human safety and fundamental rights.

“The EU hopes to set the international standard for the regulation of AI, as it did for data protection with the GDPR. But the legislative process is still underway and we are likely to see significant changes introduced to the Commission’s draft,” Gail Orton, head of EU public policy at Clifford Chance, said in the report.

According to the survey, 85 percent of respondents are open to the EU's proposed requirement to register high-risk AI systems with a government or EU-run database.

In September, the UK government published a National AI Strategy, which outlines its plan to build an AI-driven economy over the next ten years. However, the government remains undecided on the extent to which AI should be regulated or whether the industry should instead adhere to voluntary technical standards. A white paper setting out the different approaches is expected in early 2022.

The law firm concludes in the report that “the regulatory landscape for AI will likely emerge gradually, with a mixture of AI-specific and non-AI specific binding rules, non-binding codes of practice, and sets of regulatory guidance”.

It is possible that as AI regulation evolves across the globe, countries will adopt different solutions, with multiple similar or overlapping sets of rules being generated by different bodies.

The report advises businesses to engage with the regulatory process as early as possible.

Although a significant topic among experts, AI ranked behind other key technology-related issues, including cybersecurity (94 percent), data privacy (92 percent) and the role of bots in circulating misinformation (86 percent).

The study was based on a survey of 1,000 tech policy experts and influencers from France, Germany, the UK and the US.
