Abel Torres
Executive Director, Center for Global Regulatory Cooperation, U.S. Chamber of Commerce
Zach Helzer
Senior Director, Europe
Head, U.S.-UK Business Council

Published June 30, 2023

Some projections suggest that AI could contribute up to $13 trillion to the global economy by 2030, boosting annual GDP growth by roughly 1.2%. However, the recent amendments to the EU AI Act approved by the European Parliament raise significant concerns about their potential impact. While the legislative framework aims to promote responsible AI practices in line with fundamental rights and values, there are serious doubts about its effectiveness and the negative consequences it could produce. 

As the EU AI Act progresses through the trilogue process, it is critical to weigh the potential drawbacks and negative implications, not just the optimistic projections. Striking a balance between effective regulation and AI's untapped potential is essential; excessive, heavy-handed regulation can undermine AI's promise without achieving its desired objectives. 

Unclear Definition of Risk 

The Act introduces rigorous assessments for "high-risk" AI systems and proposes banning certain AI practices. However, there are serious concerns that the scope of what constitutes a high-risk AI application could be broadened. Labeling entire sectors as high-risk could have significant repercussions for enterprises that employ AI, including many U.S. companies, and create operational challenges in Europe. Such broad classifications fail to account for the nuanced differences between AI applications within each sector and may hinder technological advancement. 

General Purpose AI 

Applying high-risk requirements to all General-Purpose AI (GPAI) systems could have unintended consequences and restrict access to essential low-risk AI systems. Instead of targeting specific use cases with the potential to cause significant harm, the Act's current approach risks stifling innovation in low-risk, general-purpose AI technologies and limiting the development of applications that would otherwise benefit society without posing substantial risks. 

Balancing Regulation, Innovation, and Global Competitiveness 

Finding a balance between regulation, innovation, and global competitiveness is essential. However, the U.S. Chamber remains skeptical about the EU's ability to adopt a proportionate, flexible, and risk-based approach to AI regulation. Overly prescriptive regulations could stifle innovation and impede the potential benefits of different AI use cases. To avoid unintended consequences, it is crucial to consider the roles of the various actors in AI development and the trade-offs associated with different applications. 

Facilitating Transatlantic Collaboration 

Strong stakeholder engagement among government, industry, and academia is crucial to foster transatlantic collaboration and minimize administrative burdens. However, the U.S. Chamber is concerned that the EU AI Act does not adequately address these issues. Unilateral export restrictions could hinder collaboration and undercut coordinated international efforts to manage export controls more effectively. The Act should prioritize collaboration and shared interests to secure the benefits of AI innovation while maintaining individual and societal safety. 

Geopolitical Considerations 

The EU AI Act should prioritize transparency, accountability, and ethical standards without giving undue advantage to non-market actors. However, granting EU regulators access to privately held data and AI source code raises significant concerns about exposing valuable IP, trade secrets, and personal information to cyberattacks and industrial espionage. Retaining provisions that recognize the proprietary nature of this information is crucial to enable businesses to leverage AI without compromising their competitiveness. Safeguarding commercially sensitive information and respecting data privacy are imperative in the context of the EU AI Act. 

Bottom Line

The potential impact of the EU AI Act on U.S. industries raises skepticism and concerns. It is crucial to approach AI regulation with caution. Sound regulatory approaches should enable AI technologies to flourish in a manner that benefits society while upholding ethical standards and maintaining economic competitiveness. 

The U.S. Chamber emphasizes the need for a proportionate, flexible, and risk-based approach to AI regulation. Overly burdensome regulations can hinder innovation and impede the potential benefits of AI. Protecting commercially sensitive information, respecting data privacy, and recognizing the proprietary nature of data and technology are essential to safeguarding commercial interests and facilitating transatlantic collaboration. 

It is vital to strike a balance between regulation, innovation, and global competitiveness. By addressing the U.S. Chamber's concerns, EU policymakers can ensure that AI regulation promotes responsible practices without producing unintended consequences or stifling AI's untapped potential. Only through careful consideration and collaboration can AI technologies truly flourish in a manner that benefits society as a whole. 

About the authors

Abel Torres

Abel Torres serves as Executive Director of the Center for Global Regulatory Cooperation (GRC) at the U.S. Chamber of Commerce.

Zach Helzer
