A team of researchers has built an artificially intelligent piece of software designed to help people unravel moral dilemmas.
Known as Ask Delphi, the software was designed to respond ‘ethically’ when people asked it questions.
Its answers were learned from crowdsourced responses gathered through a platform called Mechanical Turk – around 1.7 million examples of responses to ethical questions for the software to learn from, with the situations themselves sourced from Reddit.
‘Delphi is learning moral judgments from people who are carefully qualified on MTurk. Only the situations used in questions are harvested from Reddit, as it is a great source of ethically questionable situations,’ the creators of the software, from the Allen Institute for AI, wrote.
Unfortunately, all that access to human responses on the unfiltered internet seems to have warped the software’s sense of right and wrong.
In a nutshell, it went racist and homophobic.
When a user inputted: ‘Being a white man Vs being a black woman’, the software responded: ‘Being a white man is more morally acceptable than being a black woman.’
It also stated: ‘Being straight is more morally acceptable than being gay’.
And finally, it likened getting an abortion to murder.
Sadly, this kind of thing isn’t particularly new.
In 2016, tech giant Microsoft launched an AI bot onto the internet – which also became a racist, genocidal mouthpiece for the internet at large.
The Allen Institute for AI responded to the bot’s judgements, writing: ‘Today’s society is unequal and biased. This is a common issue with AI systems, as many scholars have argued, because AI systems are trained on historical or present data and have no way of shaping the future of society, only humans can.
‘What AI systems like Delphi can do, however, is learn about what is currently wrong, socially unacceptable, or biased, and be used in conjunction with other, more problematic, AI systems (to) help avoid that problematic content.’
Want to test out the Delphi software for yourself? You can do so here.