
Forum: Global impact of the EU AI Act

On March 13, the European Parliament approved the European Union’s Artificial Intelligence Act (EU AI Act), a pivotal piece of legislation set to shape the global AI regulatory landscape.
While similar in its transformative impact to the EU’s General Data Protection Regulation (GDPR) of 2016, which focused on privacy, the EU AI Act delves into the specifics of regulating artificial intelligence technologies.
This legislation is poised to influence future regulations beyond Europe, setting a precedent for preemptive regulation rather than reactive penalties, and is likely to inspire similar legislative frameworks in the United States and other nations.
Comprehensive Compliance Programs
Companies are now tasked with developing extensive compliance programs that are integral to their operational framework. This involves collaboration across various levels of the organization, including the board of directors, C-suite, compliance professionals, and team managers, who are all required to clearly define and fulfill distinct responsibilities.
Constructing an Effective AI Program
When starting such a program, organizations may be tempted to focus solely on the technology. A robust program, however, must integrate human oversight, procedural processes, and technology from the outset.
According to Saskia Vermeer-de Jongh, a partner specializing in AI and digital law at HVG Law LLP, part of the global EY law network, building trust in AI begins with ensuring human oversight is incorporated into AI operations. The EU AI Act emphasizes this by requiring safeguards that correspond with the risks, autonomy, and contexts of AI use.
Navigating New Legislation
The EU AI Act addresses AI systems marketed, implemented, or utilized within the EU and introduces a tiered, risk-based system for oversight. This includes categorizing AI systems based on risk levels from unacceptable to minimal, with corresponding regulatory requirements for each category. The act specifies comprehensive measures for high-risk applications, which include critical infrastructures, law enforcement, and employment, among others.
The act also includes specific exemptions, covering AI systems used exclusively for military, defense, or national security purposes, systems developed for scientific research, and open-source AI.
Phased Compliance and Penalties
The legislation outlines a phased approach to compliance, beginning with a prohibition on certain AI systems and extending to comprehensive regulations for general-purpose and high-risk AI systems over a two-year period. Noncompliance can result in substantial fines of up to EUR 35 million or 7% of global annual revenue, whichever is greater.
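To illustrate how that cap scales, the minimal sketch below computes the upper bound of a fine from a hypothetical revenue figure. The function name and the EUR 1 billion revenue example are assumptions for illustration only; the actual fine in any case would depend on the violation and the regulator.

```python
# Illustrative only: upper bound of the act's largest fine tier, described
# above as the greater of EUR 35 million or 7% of global annual revenue.
# The revenue figure used below is hypothetical.

def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Return the fine ceiling: the greater of EUR 35M or 7% of revenue."""
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# A company with EUR 1 billion in global annual revenue faces a ceiling of
# EUR 70 million, because 7% of revenue exceeds the EUR 35 million floor.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # -> EUR 70,000,000
```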
Strategic Implications for Businesses
The act necessitates that organizations prepare by maintaining up-to-date AI system inventories, assessing the scope of applicable regulations, and classifying the risk levels of their AI systems. Vermeer-de Jongh stresses the importance of understanding the extensive requirements and potential risks and opportunities presented by the legislation. This includes training for AI users, ensuring transparency, using high-quality datasets, and maintaining robust privacy standards.
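As a rough sketch of what such an inventory and risk classification might look like in practice, the example below uses hypothetical system names, fields, and tier labels mirroring the act's risk-based categories. It is not a structure prescribed by the act, and classifying any real system requires legal analysis.

```python
# A minimal, hypothetical AI system inventory with risk-tier classification.
# Field names, example systems, and tier labels are assumptions for
# illustration; real classification under the EU AI Act needs legal review.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. employment or law-enforcement uses
    LIMITED = "limited"            # mainly transparency obligations
    MINIMAL = "minimal"            # little or no added obligation

@dataclass
class AISystemRecord:
    name: str
    owner: str
    purpose: str
    placed_on_eu_market: bool
    risk_tier: RiskTier

inventory = [
    AISystemRecord("resume-screener", "HR", "candidate shortlisting", True, RiskTier.HIGH),
    AISystemRecord("support-chatbot", "Customer care", "FAQ answers", True, RiskTier.LIMITED),
    AISystemRecord("spam-filter", "IT", "email filtering", True, RiskTier.MINIMAL),
]

# Flag the systems most likely in scope of the act's high-risk obligations.
high_risk = [r for r in inventory if r.placed_on_eu_market and r.risk_tier is RiskTier.HIGH]
for record in high_risk:
    print(f"{record.name}: review documentation, oversight, and data quality")
```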
Organizations are advised to consult with legal and technological experts to navigate these requirements effectively and implement the necessary governance frameworks. Given the evolving nature of AI regulations, continuous updates and adaptations are crucial.
Reflecting on global AI policy trends, Vermeer-de Jongh notes the EU’s approach aligns with core OECD principles endorsed by the G20, focusing on human rights, sustainability, transparency, and risk management. While comprehensive AI legislation may not be imminent in the US, there is a growing consensus on the need to address issues such as bias, data privacy, and the impact of AI on the workforce.
Source: thomsonreuters.com
