How to use AI responsibly – practices and regulations
The power and opportunities of AI are equalled only by the potential threats that can result from its misuse.
Regulators are catching up with a reality already forced into being by the explosion of AI technology, and are looking to clamp down on its unwanted effects.
The EU AI Act – what does it mean for you?
Chief among these efforts is the EU AI Act, which comes into force at the start of August – beginning an urgent countdown for affected businesses to comply.
We caught up with one of QA’s AI experts, Learning consultant Vicky Crockett, to unpack what that means for businesses in the tech industry and beyond.
Compliance deadlines range from six months to three years, but ‘no one should panic’, says Vicky. ‘These deadlines give businesses time to get advice and prepare for any changes they need to make, with the priority being to deal with “prohibited” AI systems before the six-month deadline in February 2025.’
The new EU law will not apply to every business, and may not affect yours. But, you can bet that your legal advisors have been busy checking!
As an employer, you may already have an organisational AI policy in place. Vicky encourages employees to check and make sure they’re following their company’s legal advice.
AI regulation across the world
There are strong similarities between many worldwide legal and regulatory frameworks.
Many countries, including the UK, the United States, and many leading EU nations, are members of the OECD (Organisation for Economic Co-operation and Development) and are committed to its values-based AI Principles:
- Inclusive growth, sustainable development and well-being
- Human rights and democratic values, including fairness and privacy
- Transparency and explainability
- Robustness, security, and safety
- Accountability
AI regulation in the UK
The UK, for example, follows the regulatory approach set out in its March 2023 white paper, ‘AI regulation: a pro-innovation approach’.
This has underpinned a great deal of work by the AI Standards Hub, in collaboration with industry-leading data scientists and AI experts, to develop standards and policies. These have generated extensive discussion and set out the practicalities of using AI responsibly.
Here are the pro-innovation framework’s five principles:
- Safety, security, and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Responsible AI: what’s the takeaway?
In terms of how all this filters down to businesses and their everyday operations, it’s all about readiness for change… and that starts with your people. Vicky explains:
‘From a training point of view, organisations that need to comply with the EU AI Act will likely need support when it comes to Article 4 (AI literacy). Others will simply think it’s a very good idea. We’ll be ready with appropriate learning solutions, of course!’
A people-centred mindset is key. So much so in fact, it’s being adopted at the highest level.
‘Most countries are choosing to define what an AI system is – easier than simply defining “AI”. The added bonus: it makes it easier to think about the people in the process, and the governance and security needed at each stage.’
For instance, in line with the EU law’s role definitions, the ‘people in the process’ include AI providers (those who develop and supply AI systems) and AI deployers (those who use them).
As Vicky advises, ‘an all-staff training approach is key to implementing AI Governance successfully, so that employees are aware whenever a system needs to follow your AI policy and procedures.’
Ready to adopt an all-staff AI skilling approach?