The European Union’s artificial intelligence (AI) regulation comes into force today, but Germany still faces the crucial question of which authority will oversee compliance with the new rules.
Today marks the official start of the European AI regulation, also known as the “AI Act,” following lengthy negotiations among the EU institutions. This comprehensive legislation prohibits practices such as social scoring, the evaluation of social behavior through AI. It also classifies certain applications as high-risk, imposing strict requirements in areas like recruitment, justice, border control, and education.
The regulations will be implemented gradually, with full enforcement expected by 2027. However, member states face more immediate deadlines. Within one year, they must designate national authorities to supervise the implementation of these rules – a process that has sparked discussions in Germany.
Key responsibilities and potential oversight bodies
The national AI oversight body will have three main tasks:
- Appointing independent testing centers to evaluate high-risk AI systems
- Monitoring the market and serving as a point of contact for AI providers who discover errors in their systems
- Promoting innovation and competition
Several existing German institutions could potentially share these responsibilities, including the Federal Network Agency (Bundesnetzagentur), data protection authorities, or a newly established agency.
Data protection authorities make their case
In May, both the federal and state data protection commissioners expressed their willingness to take on the role of national market surveillance for AI systems. They proposed a division of responsibilities under which the federal authority would oversee AI products offered nationwide, while the state authorities would generally be responsible for AI applications used by companies and public agencies.
Louisa Specht-Riemenschneider, the incoming Federal Commissioner for Data Protection, supports this approach. She argues that data protection authorities are “excellently suited” for the task, citing their existing expertise and the potential for cost-effective oversight.
Experts advocate for centralized oversight
However, some experts, like Mario Martini from the German Research Institute for Public Administration, argue for a more centralized approach. Martini emphasizes the importance of technical expertise and uniform rule interpretation in AI oversight. He suggests that a federal structure, similar to data protection, may not be ideal for AI regulation.
Martini proposes that while various agencies could contribute expertise on specific AI applications, a single federal authority should ultimately make decisions. He expresses skepticism about data protection authorities’ ability to promote innovation effectively.
According to Martini, the Federal Network Agency (Bundesnetzagentur) could be the most suitable choice. He suggests that this agency, which currently oversees mobile and internet providers, could be expanded into a comprehensive digital authority. This would require ensuring its independence and freedom from ministerial instructions.
Industry concerns and time pressure
While other countries such as Austria have already established their national AI oversight bodies, German digital companies are growing concerned about potentially unclear responsibilities. The VATM telecommunications association warns against Germany’s federal structure becoming an obstacle, while the AI Federal Association advocates for a central authority, preferably the Bundesnetzagentur, provided it is given adequate resources and AI expertise.
With only twelve months remaining to establish AI oversight, creating an entirely new agency may be too time-consuming. As Martini notes, “In the worst case, the AI supervisory authority would be primarily occupied with itself.” Regardless of the final decision, the debate surrounding AI regulation implementation is far from over, even as the EU AI Act takes effect.