Pushback on Regulation
Despite their public calls for regulation, major AI players are pushing back on the first real effort to do so, working to temper the European Union’s draft Artificial Intelligence Act. A June open letter, signed by over 160 executives from companies ranging from Renault to Meta, expressed concerns about the proposed act, arguing that it would “jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.”
Influence behind the Scenes
Meanwhile, influential players in the AI industry are working behind the scenes to soften elements of the act. OpenAI CEO Sam Altman, for example, provided feedback on earlier drafts, advocating for a lighter regulatory burden on large AI providers. Other tech giants, including Microsoft and Google, are pushing for similar changes.
Narrow Focus on Generative AI
The AI Act has largely narrowed its focus on generative AI to the issues of bias and copyright, rather than tackling the greater societal threats of AI agency and autonomy. The act introduces a sophisticated “product safety framework” with strict requirements for market entry and certification of High-Risk AI Systems. These systems include those deployed in biometric identification, critical infrastructure, educational and vocational settings, essential services, law enforcement, migration and border control, as well as the administration of justice and democratic processes.
Challenges of Training Data
One of the challenges in regulating AI is handling copyright and bias problems within the training data. Foundation models and other generative AI systems cannot simply forget their training data, which raises questions about the responsibility of companies using these models. The draft AI Act proposes that the provider of a foundation model must take reasonable steps to mitigate bias and copyright risks, and providers may be held liable if bias or copyright infringement occurs despite those steps.
Transparency and Penalties
The AI Act emphasizes transparency, requiring companies to disclose more information about their training data so that its impact on citizens can be evaluated. Penalties for breaches are still under debate, and it is possible that companies may simply accept fines as a cost of doing business in the EU.
Setting a Global Standard
The EU’s approach to AI regulation could set the standard globally, with countries like China, Brazil, Canada, and the United States also considering similar regulations. The tiered structure of the proposed legislation, based on risk levels, is seen as a model worth emulating.
Striking a Balance
The final impact of the EU AI Act remains to be seen, but its success will depend on effectively balancing innovation and safety. Regulation can be implemented without stifling innovation, and the EU is optimistic about achieving this balance.