The EU AI Act: How to Adapt Quickly and Safely for Profit by Maryrose Lyons
- The EU AI Act is designed to create a comprehensive legal framework for the development, deployment and use of AI systems.
- Its full text was published in the EU's Official Journal in July 2024 and entered into force on 1 August 2024.
- Rules on prohibited AI systems and AI literacy will apply from February 2025, giving you (just) enough time to adjust and adapt.
- Implementation will be overseen by a European Artificial Intelligence Board, with national supervisory authorities in each member state.
- Non-compliance penalties can reach up to €35 million or 7% of global annual turnover, whichever is higher.
However, legislators are also very clear that they do not want to leave EU nations at a competitive disadvantage, which is why they have attempted to write the Act so that it still fosters innovation and investment while enhancing governance and legal certainty around the technology.
To achieve this balance, the Act adopts a risk-based approach, placing AI use into four risk categories: (i) Unacceptable; (ii) High; (iii) Limited; and (iv) Minimal.
Each category requires a different approach from owners and users of the systems, ranging from a complete ban on some activities to no new obligations for others. This new complexity makes it important for everyone, not least accountants, to understand which activities fall into which category and the broad implications of each.
i. Unacceptable risks
These will be prohibited under the Act and, it is hoped, will help reassure us that our legal rights remain intact. The following activities become illegal from February 2025:
- Using AI for subliminal, manipulative or deceptive techniques to distort human behaviour in ways that cause, or are likely to cause, significant harm.
- Exploiting vulnerabilities of specific groups – such as children, disabled people or those in economic distress – to negatively distort their behaviour with AI.
- Social scoring: evaluating or classifying individuals based on their social behaviour or personal characteristics, where the resulting score leads to detrimental or unfavourable treatment.
- Identifying individuals in ‘real time’ using remote biometric systems, such as facial recognition, in public for law enforcement, although narrow exceptions do apply.
ii. High risks
Despite being deemed high risk, these activities will continue to be permitted. However, they will be subject to strict legal obligations. Systems which fall into this area include those used for:
- Recruitment, promotion, task allocation and other worker management tasks,
- Safety components of products covered by existing legislation, such as medical devices, toys and machinery,
- Access to essential services, such as credit scoring, healthcare and emergency services; and
- Permitted forms of biometric identification and categorisation.
If you or your clients use AI which falls into the high-risk category, the duties include:
- The ability to demonstrate a high level of robustness, accuracy and security,
- Application of appropriate human oversight to ensure the system is operating as intended,
- Clear and adequate information for the system user, such that they know how to operate it successfully,
- Risk assessment and mitigation systems, which must be implemented ahead of time,
- High quality datasets to train the AI,
- The logging of all activity, to ensure traceability of each output (a minimal sketch of this follows the list); and
- Creation and maintenance of detailed documentation, ahead of any request from authorities.
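The logging duty in particular lends itself to a simple technical pattern. Below is a minimal sketch, in Python, of one way a practice might wrap calls to an AI system so that every prompt and output lands in an append-only audit trail. The `call_model` placeholder, the log file name and the record fields are all assumptions for illustration; the Act does not prescribe any particular format.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed location; one JSON record per line


def call_model(prompt: str) -> str:
    """Placeholder for whichever AI system the practice actually uses."""
    return "model output goes here"


def logged_call(prompt: str, user: str, purpose: str) -> str:
    """Call the model and record enough detail to trace each output later."""
    output = call_model(prompt)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,          # who ran the system
        "purpose": purpose,    # e.g. "credit assessment"
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "prompt": prompt,
        "output": output,
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return output
```

Storing a hash of the prompt and output alongside the raw text makes later tampering detectable without changing how the model is used.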
iii. Limited risks
In order to avoid unnecessary burden and to allow for competition, activities thought by legislators to pose only ‘limited’ risk will continue unabated, with one caveat: users must be made aware they are interacting with AI.
ChatGPT, Copilot and other LLMs and generative AI tools fall into this category, as do emotion recognition systems. Content creators will also have to disclose when text, audio, images or videos have been generated or manipulated by AI, particularly when the content relates to matters of public interest.
iv. Minimal risks
Some tools, by contrast, will have no need to declare their use of AI and so, in practical terms, are unaffected by the Act. They tend to be narrow-purpose applications: AI embedded in video games, spam filters, spell-checking in writing apps or shopping recommendations on websites.
So what does the Act mean in practice for certified accountants? Five areas stand out.
1. Risk Assessment and Compliance:
As a certified accountant and business owner, you’ll need to assess whether any AI systems you use, or plan to implement, fall under the high-risk category. This is particularly relevant if you’re using AI for:
- Credit scoring or loan approval processes,
- HR management and recruitment,
- Fraud detection and prevention; and
- Automated financial reporting and analysis.
If your AI systems are classified as high-risk, you’ll need to ensure compliance with the stringent requirements outlined in the Act.
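As a first step, it can help to keep a simple inventory of your AI use cases mapped to the Act's four categories. The Python sketch below shows one hypothetical way to do this; the systems listed and the categories assigned to them are illustrative assumptions, not legal determinations, and should always be checked against the Act itself.

```python
from enum import Enum


class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations apply"
    LIMITED = "transparency duties apply"
    MINIMAL = "no new obligations"


# Illustrative inventory for a practice. These classifications are
# assumptions to be checked against the Act's annexes, not legal advice.
AI_INVENTORY = {
    "credit scoring model": Risk.HIGH,
    "CV screening tool": Risk.HIGH,
    "fraud detection system": Risk.HIGH,
    "client-facing chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}


def compliance_summary(inventory: dict) -> None:
    """Print the inventory grouped by risk category."""
    for system, risk in sorted(inventory.items(), key=lambda kv: kv[1].name):
        print(f"{system:24} -> {risk.name}: {risk.value}")


compliance_summary(AI_INVENTORY)
```

Keeping the inventory in one place makes it easy to see at a glance which systems attract the heaviest duties when the rules change.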
2. Data Quality and Management:
The new law places significant emphasis on the quality of data used to train AI systems. As certified accountants often deal with sensitive financial data, you’ll need to:
- Implement robust data collection and preprocessing methods,
- Ensure all data is accurate, complete and representative,
- Regularly audit and update your datasets (see the sketch after this list); and
- Implement strong data protection and privacy measures in line with GDPR requirements.
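As a hypothetical starting point for the auditing duty, the short Python sketch below runs some basic completeness and balance checks over a training dataset using pandas. The column names and sample data are assumptions for illustration only.

```python
import pandas as pd


def audit_dataset(df: pd.DataFrame, label_col: str = "outcome") -> dict:
    """Run basic completeness and balance checks on a training dataset."""
    report = {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
    }
    if label_col in df.columns:
        # Representativeness: expose badly skewed outcome classes.
        report["class_balance"] = df[label_col].value_counts(normalize=True).to_dict()
    return report


# Hypothetical credit-decision training data, for illustration only.
sample = pd.DataFrame({
    "income": [42000, 38000, None, 61000],
    "years_trading": [3, 7, 2, 11],
    "outcome": ["approve", "approve", "decline", "approve"],
})
print(audit_dataset(sample))
```

Running a report like this on a schedule, and keeping the results, also feeds directly into the documentation duties described earlier.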
3. Transparency and Client Communication:
When using AI systems in your practice, particularly those that interact directly with clients, you’ll need to:
- Clearly disclose the use of AI to your clients (a minimal example appears after this list),
- Explain how AI-driven decisions are made, especially in high-stakes situations like credit assessments; and
- Provide options for human intervention when requested.
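A standing disclosure notice and a route to human review can be surprisingly lightweight. The sketch below is a hypothetical illustration of how a client-facing tool might attach both to every AI-assisted result; the wording and function names are assumptions, not language prescribed by the Act.

```python
AI_DISCLOSURE = (
    "This assessment was produced with the help of an AI system. "
    "You may request a review of any AI-assisted decision by a human advisor."
)


def deliver_result(result: str, wants_human_review: bool = False) -> str:
    """Attach the disclosure to every AI-assisted result and honour
    requests for human intervention."""
    notice = AI_DISCLOSURE
    if wants_human_review:
        notice = "[Queued for review by a human advisor.]\n" + notice
    return f"{result}\n\n{notice}"


print(deliver_result("Provisional credit assessment: approved."))
```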
The AI Institute has created an AI Policy template which covers all of the areas that need to be addressed in order to stay on-side with the law. To get ahead of the crowd, you can download it here.
4. Professional Development:
In order to stay compliant with the Act, all businesses should invest in AI literacy training for themselves and their staff, which is something the AI Institute can help with. And because the field is fast-paced and continually evolving, everyone must stay up to date on AI regulations and best practices. Get in touch with us to chat in more detail.
5. Ethical Considerations:
The EU AI Act emphasises the importance of ethical AI deployment. As trusted financial advisors, certified accountants have a responsibility to ensure that AI systems are used ethically in their practice. In practical terms, this means developing an AI Policy that aligns with your professional values and the requirements of the Act. The policy should address the following areas (a simple checklist sketch follows the list):
- Transparency and explainability
- Fairness and non-discrimination
- Privacy and data protection
- Accountability and liability
- Human oversight and intervention
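One hypothetical way to keep such a policy actionable is to encode its headings as a go-live checklist that every new AI system must pass. The Python sketch below does exactly that; the field names simply mirror the list above and are assumptions for illustration.

```python
from dataclasses import dataclass, fields


@dataclass
class AIPolicyChecklist:
    """One checkbox per policy area in the list above."""
    transparency_and_explainability: bool = False
    fairness_and_non_discrimination: bool = False
    privacy_and_data_protection: bool = False
    accountability_and_liability: bool = False
    human_oversight_and_intervention: bool = False

    def unmet(self) -> list[str]:
        """Policy areas a system has not yet satisfied before go-live."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]


# Example: a system assessed against the policy before deployment.
check = AIPolicyChecklist(
    transparency_and_explainability=True,
    privacy_and_data_protection=True,
)
print("Outstanding policy areas:", check.unmet())
```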
By proactively addressing the regulations and the wider ethical considerations of AI, you can safely leverage the technology to enhance your practice, all the while maintaining the trust and confidence of your team and clients. This matters because the competitive benefits for early adopters are likely to be significant.
But early adoption will not be enough. The AI landscape will continue to evolve, and rapidly: your practice's methods are likely to iterate faster tomorrow than they do today, and almost certainly far quicker than they did yesterday. That rate of change will only accelerate, and it is this that makes your commitment to the principles of ethical AI, as well as the letter of the EU AI Act, vital to future profitability. You will have to adapt not only quickly but also safely.
If certified accountants can collectively position themselves as leaders in the responsible use of AI, they may well end up setting the standard for ethical innovation across the entire financial sector.
https://www.instituteofaistudies.com