The new Frankenstein: don’t create an AI monster
The potential benefits are huge, with AI promising to free up our time from menial or repetitive tasks, help us be more creative, and even reduce human error in our processes.
But there are already examples of the harm AI can cause when it is poorly used, poorly informed, or poorly regulated.
For example, BuzzFeed came under fire this year for a piece that used AI to generate concepts of ‘Barbie’ from around the world. The resulting images were criticised for being culturally inaccurate, racially insensitive, and offensive to many viewers.
The sins of the AI programme used, Midjourney, included whitewashing, stereotyping, and inaccurately depicted cultural attire.
Horrifying, no? That a machine which seems to think for itself (one that we created, remember) could be… there’s only one word for it: racist. Let’s investigate how that happens.
How does AI go bad?
In another example, the site Civitai was criticised after its image-generating AI models proved to be overwhelmingly skewed towards Asian features and culture. This was simply because many of the models on the site had been trained on databases dominated by Asian imagery.
University of Cambridge research showed that an AI model trained on a database of online news articles was more likely to associate the word ‘man’ with ‘software engineer’ than with ‘nurse’.
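You can see this effect for yourself with off-the-shelf word embeddings. The sketch below is illustrative only (it is not the Cambridge study’s code): it assumes the gensim library and its downloadable GloVe vectors, which were trained on Wikipedia and news text, and uses single-word occupations in place of ‘software engineer’.

```python
# Illustrative sketch: word vectors trained on large text corpora absorb
# occupational gender associations present in that text.
import gensim.downloader as api

# GloVe vectors trained on Wikipedia + Gigaword news text (small 50-dim model).
vectors = api.load("glove-wiki-gigaword-50")

for occupation in ["engineer", "programmer", "nurse"]:
    sim_man = vectors.similarity("man", occupation)
    sim_woman = vectors.similarity("woman", occupation)
    print(f"{occupation:>10}:  man={sim_man:.3f}  woman={sim_woman:.3f}")

# Typically 'engineer' and 'programmer' sit closer to 'man', while 'nurse'
# sits closer to 'woman': the model has simply absorbed patterns in its data.
```

The model was never told anything about gender. It learned those associations purely from how words appear together in the text it was given.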
The commonality between these ‘crimes of AI’ is that we can’t blame the tech. It is only acting on what we have taught it.
So what makes bad AI? There’s a clue not-so-hidden in the stories above.
It’s Data.
We must face up to one uncomfortable truth here: humans are biased, whether consciously or unconsciously. Therefore, the data we produce (and there are oceans of it, by the way) contains biases too.
The trouble with AI is that when we feed biased data into an algorithm, those biases are not just reproduced; they can be amplified.
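Here is a deliberately tiny, hypothetical sketch of that amplification, assuming scikit-learn and NumPy. The ‘historical hiring data’ is invented purely for illustration; the point is what the model does with a 70/30 skew.

```python
# Toy sketch (invented data) of how a model can amplify a skew in its training set.
# Gender is the only feature here, so the fitted model turns a tendency into a rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)                      # 0 = female, 1 = male (toy encoding)
# Biased historical outcomes: ~70% hire rate for men, ~30% for women.
hired = rng.random(n) < np.where(gender == 1, 0.7, 0.3)

model = LogisticRegression().fit(gender.reshape(-1, 1), hired)

print("Historical hire rate, men:   ", hired[gender == 1].mean())   # ~0.70
print("Historical hire rate, women: ", hired[gender == 0].mean())   # ~0.30
print("Model prediction for a man:  ", model.predict([[1]])[0])     # True
print("Model prediction for a woman:", model.predict([[0]])[0])     # False

# A 70/30 tendency in the data becomes a 100/0 rule in the model's hard
# predictions: the bias has not just been learned, it has been amplified.
```

Real systems are more subtle than this, of course, but the direction of travel is the same: the model optimises for the patterns it is shown, warts and all.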
Who will save us?
To safeguard against the consequences of biased AI, we must be discerning when identifying the data with which to train a model.
Ask yourself: is this data complete? Is it clean? Is there any sprout of bias here that, when watered by the power of AI, could grow into a mighty tree of inequity?
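In practice, that discernment can start with a quick audit before any model sees the data. The sketch below is a rough first pass rather than a full fairness review, and it assumes a hypothetical training_data.csv with gender, ethnicity and outcome columns, plus the pandas library.

```python
# Rough first-pass data audit (hypothetical file and column names).
import pandas as pd

df = pd.read_csv("training_data.csv")

# Completeness: what fraction of each column is missing?
print(df.isna().mean().sort_values(ascending=False))

# Representation: does any one group dominate the training set?
for col in ["gender", "ethnicity"]:
    print(df[col].value_counts(normalize=True))

# Outcome balance per group: an early warning of bias a model could amplify.
print(df.groupby("gender")["outcome"].mean())
```

None of this guarantees a fair model, but it surfaces the obvious warning signs before they are baked into an algorithm.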
Who are the heroes who will create a generation of AI tools we can trust? The bringers of an AI future where all are equal, and bias can be squashed instead of unleashed?
They are Data Scientists.
These are the professionals who discover patterns in your data, collecting, cleaning, and analysing it to make sure it’s reliable. Data science, therefore, is the foundation of responsible AI.
If your business is looking to harness the power of AI, whether that’s to improve customer experience, assist management, or automate operations (all of which, by the way, are strong moves in the right direction), you’ll first want to ensure that your data science team are up to scratch.
Diverse teams separate the good AI from the bad
Coming all the way back to those inauthentic ‘Barbies from around the world’, you’ll want to ensure your data scientists are not just conscious of their responsibilities, but diverse too.
We already know many of the stats around the benefits of diversity in business. It correlates with more innovation, increased revenue, and better culture. But it also leads directly to superior products and services.
Unconscious bias is a big player here. If your team is largely the same demographic, for instance male, Caucasian, and university-educated, you run the risk of homogeneity and groupthink, due to a lack of different perspectives. A homogeneous team is more likely to develop a product that works well, but only for people like them, because that’s the angle from which they view the product and look for problems.
Diverse teams, on the other hand, will catch issues from all different points of view and help you to fine-tune a customer experience, product or service that works for everyone.
And guess what? That’s a one-way ticket to success.
If you’re ready to start equipping your business with AI-conquering talent today, check out our data courses here. QA are committed to equitably sourcing diverse talent, enabling you to create AI miracles, not an AI monster.