Data and analytics leaders know artificial intelligence (AI) is key to getting and acting on the insights needed to thrive in our “new normal.” Despite the benefits AI can provide for predictive and prescriptive analytics and automation, AI has also become a topic of concern around the potential for biased, harmful outcomes. To address these concerns, the European Union has drafted a new regulation, the Artificial Intelligence Act (EU AIA). Like the GDPR, this act will have broad implications for businesses, and it might seem as if there is plenty of time to prepare, much as in the pre-regulatory phase of the GDPR.
Prepare for AI Regulations Now
From my perspective, waiting could be a costly mistake. The time to start preparing is NOW. Here’s what to know:
There will be financial penalties for violating the Act. In its draft stage, the penalties are painful. Fines of up to €30M, or 6 percent of worldwide annual turnover (whichever is higher), can be assessed for non-compliance. And fines of up to €10M or 2 percent of turnover are possible if organizations provide incorrect, incomplete, or misleading information to the regulatory bodies.
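To make the scale of exposure concrete, the draft's "whichever is higher" rule can be sketched as a simple calculation. This is purely illustrative arithmetic based on the draft figures above; actual fines are set by regulators case by case, and the function name and figures here are assumptions for the sketch.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float,
                 cap_eur: float = 30_000_000,
                 turnover_pct: float = 0.06) -> float:
    """Upper bound on a non-compliance fine under the draft Act:
    the higher of a fixed cap or a percentage of worldwide annual turnover."""
    return max(cap_eur, turnover_pct * worldwide_annual_turnover_eur)

# For a company with €1B in turnover, 6 percent (€60M) exceeds the €30M cap.
print(max_fine_eur(1_000_000_000))  # 60000000.0
# For a company with €100M in turnover, the €30M cap is the higher figure.
print(max_fine_eur(100_000_000))    # 30000000.0
```

Note that for any organization with turnover above €500M, the percentage term dominates, so exposure grows with revenue rather than stopping at the cap.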
The Act will likely apply even if a company is not doing business in the EU. Penalties can apply extraterritorially to providers and users outside of the EU when the output of the AI system is used in the EU.
High-risk AI systems as defined by the Act will likely be in focus. Such systems include (but are not limited to) those where AI is part of critical infrastructure or a safety component, determines access to education or training, affects employment, or governs access to financial services. Government use of AI in law enforcement, border control, and the administration of justice will also be under scrutiny.
How to Start Your AI Preparations
Are you already using AI in your business? Here’s what to do:
Raise awareness about AI risk mitigation with your executive leadership. Compared to other forms of risk management, AI risk management is new, and your executive team may not yet have prioritized it. Share information about AI risk and bias on a regular cadence; Google search interest in these topics is already rising, so expect executive attention to grow with it.
Start a working group, and consider a Decision Observer Team. Identify stakeholders for an “AI compliance” working group, and assemble a separate Decision Observer Team that includes Legal and Analytics stakeholders as well as Data Management and Data Science leaders. A diverse team of Decision Observers is a best practice: managing AI algorithmic risk should be a group effort, and given the risks, the team must reflect the community that AI will impact.
Identify current or planned projects that use AI technologies and potential EU impacts. Review internally developed tools, third-party tools, and the work of services groups or contractors. Once identified, review the technologies in line with the Act’s classifications. You should also determine whether there are EU citizens who may be affected by your use of AI.
Projects that merit scrutiny include:
- Machine Learning for Pattern Recognition
- Edge Anomaly Detection and Root Cause Analysis
- Dynamic Pricing
- Customer Engagement
- Digital Twins to Improve Yield
- Production Surveillance & Condition-Based Maintenance
- Fraud and Risk Management
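An inventory-and-classification pass like the one described above can be sketched as a small script. The area names, the high-risk mapping, and the `needs_review` rule below are illustrative assumptions, not the Act's actual classification scheme; the final text of the Act will govern which systems count as high-risk.

```python
from dataclasses import dataclass

# Areas the draft Act flags as high-risk (simplified, illustrative labels).
HIGH_RISK_AREAS = {
    "critical_infrastructure", "education", "employment",
    "financial_services", "law_enforcement",
}

@dataclass
class AIProject:
    name: str
    area: str                # business area the system affects (assumed label)
    output_used_in_eu: bool  # the Act's extraterritorial scope trigger

def needs_review(project: AIProject) -> bool:
    """Flag projects whose output is used in the EU in a high-risk area."""
    return project.output_used_in_eu and project.area in HIGH_RISK_AREAS

# Hypothetical inventory drawn from the project list above.
inventory = [
    AIProject("Dynamic Pricing", "pricing", True),
    AIProject("Fraud and Risk Management", "financial_services", True),
    AIProject("Digital Twins to Improve Yield", "manufacturing", False),
]
flagged = [p.name for p in inventory if needs_review(p)]
print(flagged)  # ['Fraud and Risk Management']
```

Even a rough pass like this gives the working group a shared starting list; refining the area labels against the Act's actual annexes is where the Legal stakeholders come in.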
What Can You Expect?
At this time, the Act has been assigned to EU Parliament committees for review. There will be a two-year implementation period after the Act is passed. While a two-year-plus timeline seems generous, the potential impact of fines and the broad use of AI across modern enterprises mean ANY use of AI should drive preparation now.
Interested in discovering more about artificial intelligence and machine learning for your business? Learn more here.