EA - Proposals for the AI Regulatory Sandbox in Spain by Guillem Bas

The Nonlinear Library: EA Forum - Podcast by The Nonlinear Fund

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Proposals for the AI Regulatory Sandbox in Spain, published by Guillem Bas on April 27, 2023, on The Effective Altruism Forum. Translated by Daniela Tiznado.

Summary: The European Union is designing a regulatory framework for artificial intelligence (AI) that could be approved by the end of 2023. The regulation prohibits unacceptable practices and stipulates requirements for AI systems in critical sectors. These requirements consist of a risk management system, a quality management system, and post-market monitoring. Enforcement of the legislation will be tested for the first time in Spain, in a regulatory sandbox lasting approximately three years. This will be a great opportunity to prepare the national ecosystem and to influence the development of AI governance internationally. In this context, we present several policies to consider, including third-party auditing, the detection and evaluation of frontier AI models, red-teaming exercises, and the creation of an incident database.

Introduction

Everything indicates that the European Union will become the first major political entity to approve a comprehensive regulatory framework for artificial intelligence (AI). On April 21, 2021, the European Commission presented the Regulation laying down harmonised rules on AI (henceforth the AI Act, or simply the Act). This legislative proposal covers all types of AI systems in all sectors except the military, making it the most ambitious plan to regulate AI to date.

As we will explain below, Spain will lead the implementation of this regulation in the context of a testing ground, or sandbox. This is an opportunity for the Spanish Government to contribute to establishing good auditing and regulatory practices that other member states can then adopt.

This article is divided into six sections. First, we provide a brief history of the Act. The second part summarizes the European Commission's legislative proposal. The third section details the first sandbox of this regulation, carried out in Spain. The fourth lists the public bodies involved in the testing environment. The fifth part explains the relevance of this exercise. Finally, we present proposals to improve the governance of risks associated with AI in this context. We conclude that this project provides an excellent opportunity to develop a culture of responsible AI and to determine the effectiveness of various policies.

Brief History of the Act

The foundations of the text date back to 2020, when the European Commission published the White Paper on Artificial Intelligence. This began a consultation process and a subsequent roadmap involving hundreds of stakeholders, which resulted in the aforementioned proposal. After its publication, the Commission received feedback from 304 actors and initiated a review process involving the European Parliament and the Council of the European Union as legislative bodies. In December 2022, the Council adopted a common approach. In the case of the Parliament, the vote to agree on a joint position is scheduled for May (Bertuzzi, 2023). The trilogue will begin immediately afterward, and the final version could be approved by the end of 2023, entering into force at the beginning of 2024.

Summary of the Act

The main starting point of the proposed law is the classification of AI systems according to the level of risk they entail. Specifically, the proposal is based on a hierarchy distinguishing between unacceptable, high, limited, and minimal risks. The first two are the main focus of the regulation. Under the category of unacceptable risks, practices that pose a clear threat to the safety, livelihoods, and rights of people will be banned. Currently, three practices have been deemed unacceptable as contrary to European values: distorting human behavior to cause harm; evaluating and classi...