Scientists created an AI to solve ethical dilemmas. Its opinion on genocide: "good". Infidelity in marriage, killing someone in self-defense, taking pleasure in pain, or pineapple on pizza: moral dilemmas are, by their very nature, difficult to resolve. We have all been in situations where we had to make hard ethical decisions. The fact that pages like Quora or the Reddit sub-forum Am I the Asshole exist is proof of this. But what if an AI could eliminate that mental work and answer those dilemmas for us?
Avoiding that annoying responsibility by outsourcing the choice to a machine learning algorithm would be, a priori, the best option. It is also one that could go really wrong.
An AI to solve your life. That's what the scientists who created Ask Delphi wanted: a bot fed more than 1.7 million examples of people's ethical judgments about everyday questions and scenarios. Basically, if you pose an ethical dilemma, it will tell you whether something is right, wrong, or indefensible. It is actually based on a machine learning model called Unicorn that was pre-trained to perform "common sense" reasoning.
How does it work? Delphi was trained on what the researchers call the "Commonsense Norm Bank", a compilation of thousands of examples of people's ethical judgments drawn from data sets built from sources such as the Reddit sub-forum Am I the Asshole. To measure how well the model adhered to the moral scruples of the average netizen, the researchers employed workers from Mechanical Turk, a crowdsourcing platform (micropayments for simple tasks), who reviewed the AI's decisions and indicated whether they agreed.
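The evaluation described above boils down to comparing the model's verdicts with a majority vote of human reviewers. As a rough illustration only (the function name, labels, and data here are hypothetical, not from the Delphi paper), the agreement rate could be computed like this:

```python
def agreement_rate(model_judgments, worker_votes):
    """Fraction of items where the majority vote of crowd workers
    matches the model's judgment.

    model_judgments: list of labels, e.g. "it's wrong"
    worker_votes: one list of worker labels per item
    """
    agreed = 0
    for judgment, votes in zip(model_judgments, worker_votes):
        # Majority label among the workers for this item
        majority = max(set(votes), key=votes.count)
        if majority == judgment:
            agreed += 1
    return agreed / len(model_judgments)

# Toy data: three dilemmas, three workers each (illustrative only)
model = ["it's wrong", "it's okay", "it's wrong"]
votes = [
    ["it's wrong", "it's wrong", "it's okay"],
    ["it's okay", "it's okay", "it's okay"],
    ["it's okay", "it's okay", "it's wrong"],
]
print(agreement_rate(model, votes))  # workers agree on 2 of 3 items
```

A high agreement rate only means the model matches its annotators, which is exactly the critique quoted later in this article: the yardstick is what one online crowd happens to think.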
When genocide is "okay". For this AI, cheating on your wife "is wrong". Shooting someone in self-defense "is okay". But of course, users went further. Is it okay to rob a bank if you are poor? It's wrong, according to Ask Delphi. Are men better than women? They are equal. Are women better than men? According to the AI, "it's expected". So far, Ask Delphi does not seem far off the mark. But it also thought that being heterosexual was more morally acceptable than being gay, that aborting a baby was murder, and that being a white man was more morally acceptable than being a Black woman.
Even its creators were surprised when it stated that genocide was fine "as long as it made everyone happy". Like other AIs, Ask Delphi can also be very clumsy in its responses. Researcher Mike Cook shared more examples on Twitter:
This is a shocking piece of AI research that furthers the (false) notion that we can or should give AI the responsibility to make ethical judgments. It's not even a question of this system being bad or unfinished – there's no possible "working" version of this. pic.twitter.com/Fc1VY0bogw
— mike cook (@mtrc) October 16, 2021
Are we the problem? "I think it is dangerous to base algorithmic determinations of decision-making on what Reddit users think morality is," explained Os Keyes, an expert in Human-Centered Design and Engineering at the University of Washington, in this Vice report. "The decisions an algorithm will be asked to make will be very different from the decisions a human will be asked to make. Although, if you think about it, the things posted on Reddit forums are, by definition, human moral dilemmas."
The researchers have updated Ask Delphi three times since its initial release. Recent patches include "improved protection against statements involving racism and sexism". Ask Delphi also makes sure the user understands that this is an experiment that can produce disturbing results. The page now prompts the user to tick three checkboxes acknowledging that it is a work in progress, that it has limitations, and that it is collecting data.
What do the experts say? Mar Hicks, a history professor at Illinois Tech specializing in computing and gender, was also surprised by Ask Delphi: "It quickly became clear that, depending on how you formulated your query, you could get the system to accept that anything was ethical, including things like war crimes, premeditated murder, and other clearly unethical actions and behavior."
A host of other AI experts reject the idea that AI needs to learn ethics and morality. "I think ensuring that AI systems are implemented ethically is a very different thing from teaching systems ethics. The latter sidesteps responsibility for decision-making by placing it within a non-human system," Hicks explained. "The best the creators came up with is 'we've made a big pivot table of what redditors think is interesting, and that's how morality works.' If you tried to hand that in to a class, they wouldn't even laugh in the room. I think the professor would be too shocked to do so."