Artificial Intelligence as a Servant and a Potential Threat: Setting Boundaries
The European Union's Artificial Intelligence Act (EU AI Act) has reached the final stretch of the legislative process after a three-day negotiation marathon. The world's first comprehensive set of rules for regulating artificial intelligence, it classifies AI systems by the risk they pose, emphasising security and transparency. While the final compromise text has not yet been published, we can already draw some initial insights.
Words by Adam Hanka. Image AI-generated by Midjourney.
This article was originally written for Hospodarske noviny.
Why is it Necessary?
The European economy is likely to become increasingly dependent on the effective use of artificial intelligence. Combining data, algorithms, and computational power, this technology is poised to play a crucial role. As mathematicians and programmers race to build ever smarter and more accurate systems, equal attention must be paid to the technology's impact on our daily lives.
How do we establish conditions for AI usage that benefit the majority while minimising its unintended consequences on society? At this moment, envisioning the scope of future applications is challenging, but it seems almost anything one can imagine may soon become a reality. From healthcare advancements and improved diagnoses to enhanced efficiency in scientific research addressing climate change, the possibilities are vast. Personal educational assistants, individual trainers or therapists, and even automated customer support are on the horizon.
Identifying Risks
The broad and still undefined range of applications raises concerns, especially where they touch sensitive and intimate aspects of our lives. Poorly designed or malicious AI can cause real harm: it can reach into our homes through social media feeds and influence what we wear, whom we admire, how we feel, or where we spend our holidays.
So, why is Brussels addressing this? No European country is individually powerful enough to enforce its own AI regulation against global tech giants such as Meta (Facebook), Microsoft, or Alphabet (Google). Effective regulation aims to minimise the technology's risks while preserving companies' innovative potential and the benefits to individuals. Smart restrictions where necessary, and nowhere else, are essential.
The New Regulatory Framework
The EU AI Act classifies AI systems into four levels of risk: unacceptable, high, limited, and minimal. The regulation clearly defines its scope and applies only within the EU's jurisdiction.
Systems deemed unacceptable, and therefore banned in the EU, include AI that manipulates human behaviour, the mass scraping of people's photos from the internet, and emotion recognition in workplaces or educational institutions. Predictive policing, such as AI estimating the likelihood that a person will commit a crime, is also prohibited. High-risk applications include those in education, critical infrastructure, public services, border control (including the biometric checks now common at airports), and the field of justice and law enforcement.
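To make the four-tier structure more tangible, here is a minimal Python sketch that maps the example use cases above to illustrative tiers. The tier names follow the Act's categories, but the specific assignments are a simplified reading of this article, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations only"
    MINIMAL = "largely unregulated"

# Illustrative mapping of use cases to tiers, loosely based on the
# examples in this article; not a legal classification.
EXAMPLE_USE_CASES = {
    "AI manipulating human behaviour": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "predictive policing": RiskTier.UNACCEPTABLE,
    "biometric border-control system": RiskTier.HIGH,
    "AI in education or critical infrastructure": RiskTier.HIGH,
    "customer-support chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

if __name__ == "__main__":
    for use_case, tier in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {tier.name} ({tier.value})")
```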
Before high-risk systems can be deployed on the European market, they must be demonstrably designed to prevent discrimination and to respect fundamental human rights. Providers must maintain documentation and regularly train their staff, and the AI Act requires human oversight of these systems as an additional safeguard. Limited-risk systems face only light transparency requirements, ensuring users know when they are interacting with an AI system, while minimal-risk systems remain largely unregulated.
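As a rough sketch of what these obligations might look like as an internal checklist, the following Python snippet models the high-risk requirements named above as boolean fields. The field names are hypothetical shorthand, not terms from the regulation.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Hypothetical compliance checklist for a high-risk AI system,
    paraphrasing the obligations described above."""
    designed_against_discrimination: bool = False
    respects_fundamental_rights: bool = False
    documentation_maintained: bool = False
    staff_regularly_trained: bool = False
    human_oversight_in_place: bool = False

def missing_obligations(checklist: HighRiskChecklist) -> list[str]:
    """Return the obligations not yet satisfied before deployment."""
    return [f.name for f in fields(checklist)
            if not getattr(checklist, f.name)]

# Example: a provider with documentation and oversight in place,
# but with the remaining obligations still open.
system = HighRiskChecklist(documentation_maintained=True,
                           human_oversight_in_place=True)
print("Still missing:", missing_obligations(system))
```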
Consequences of Regulation
The implications can be significant but are not universally disruptive. Companies that have invested in technologies that are now prohibited will need to adapt their strategies swiftly and explore alternative market segments. For most firms, however, the requirements pose no major risk: ideally, AI operators are already behaving responsibly and monitoring risky systems adequately. Where companies do need to adjust, the added administrative burden, staff training, and associated costs should be repaid in higher credibility and safety of AI in Europe.
*** This article was not generated by artificial intelligence.
About the author and further references: Adam Hanka is Head of BigData & AI at Creative Dock, the largest independent corporate venture builder in Europe and the MENA region. In 2023, Creative Dock underwent an intensive AI transformation to become the first AI-powered venture builder in its category. For more on Creative Dock's transformation into an AI-driven company, see also: AI has increased the efficiency of the Creative Dock tech department by a third! Wondering how? or Investing in Artificial Intelligence is a Matter of Our Sovereignty.
Keep up to date with the latest news in corporate venture building and subscribe to our monthly newsletter. Would you like to collaborate with us on broader use of AI in your company, or to increase your business value through new revenue from your existing assets? We look forward to hearing from you!