Judging others based on their skin colour, hair, region, mother tongue, and every little action was long considered an exclusively human failing, until a machine created by humans started behaving the same way and judging people on these very attributes.
Yes, you heard it right: a machine designed to give ethical advice gave inappropriate responses.
What is it?
You might have the same question in your mind: what is it, and what are we talking about?
So let's try to answer this question first and then proceed further.
It often happens that we feel alone and hope for some good advice from our friends, seniors, parents, and so on, but this is not always possible: a piece of good advice from someone you can trust is hard to find. So what better replacement could there be than a machine that can listen to our problems and answer ethically, just like someone we trust?
Therefore, a team from the Allen Institute for AI set out to create a machine that any individual could use to seek advice. But little did they know that the machine could go rogue and become a problem for them.
The machine they created is popularly known as "Ask Delphi". You can check it out here. Try asking your questions and share the results in the comment section below.
We tried getting advice from Delphi. Let's see what it told us...
Some ethical advice from Delphi
Some ethical-looking advice from Delphi
Delphi going unethical
Some genuine and good speculations from Delphi
"If it makes you happy, it's good." — Delphi
What is Delphi?
"Delphi is a research prototype designed to model people’s moral judgments in a variety of everyday situations."
The team from the Allen Institute for AI designed a system with the sole purpose of providing ethical advice to its users. A demo was released in 2021, for research purposes only and not for general use.
Delphi was meant to be a research prototype for AI researchers, but it soon went viral, and more than 1.7 million unique queries were sent to the team, providing them with a rich dataset.
The team said that many of the queries received were adversarial, designed to trip up and fail the model, but these helped the team stress-test it and train it better. Still, they knew little about how it would respond to all those negative queries used in training.
Are Machines always right?
This is a mindset we develop from our very childhood: 'a machine can't be wrong'. But with machines growing more capable and technology advancing, we may seriously need to change it.
As machines get smarter, they consume more data in order to become more accurate and perform a wider range of actions. The question then becomes: where do we get such a huge amount of data, and how do we check it?
The answer is quite simple: most of the data is taken from the vast ocean of the open web, i.e. from public posts, Q&A forums, open polls, registration forms, and so on. These datasets are huge, often ranging from lakhs to crores (hundreds of thousands to tens of millions) of entries. It thus becomes practically impossible for humans to verify each and every entry, and it is also not possible to judge in advance whether an entry will lead to a positive or negative result.
Apart from this, most machines use the daily queries of their users to further train themselves for better results.
A similar technique was used to train Delphi: the questions were taken from "Reddit", an American social news aggregation, web content rating, and discussion website, while the answers were crowdsourced through "Amazon Mechanical Turk", a crowdsourcing marketplace that lets individuals and businesses outsource jobs and processes.
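To make the idea of crowdsourced labelling concrete, here is a minimal sketch of how several annotators' judgments for one question might be reduced to a single training label by majority vote. The question texts, the label phrasings, and the majority-vote rule are all illustrative assumptions for this article, not Delphi's actual schema or aggregation method.

```python
from collections import Counter

def aggregate_label(annotations):
    """Return the most common label among several annotators' judgments.

    Ties are broken by whichever label was seen first, which is a
    simplifying assumption; a real pipeline might discard ties instead.
    """
    counts = Counter(annotations)
    label, _ = counts.most_common(1)[0]
    return label

# Each question gathered from forums gets several independent judgments
# from crowd workers (hypothetical examples below).
crowd_judgments = {
    "Ignoring a phone call from a friend": ["it's rude", "it's okay", "it's rude"],
    "Helping a stranger carry groceries": ["it's good", "it's good", "it's good"],
}

# Collapse the crowd's judgments into one label per question.
training_labels = {q: aggregate_label(a) for q, a in crowd_judgments.items()}
print(training_labels)
```

The point of the sketch is that the final "ethical" label is only ever as good as the crowd that produced it, which is exactly where bias can creep in.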
Machine or Humans
The growing advancement of technology, with machines getting smarter day by day, is not only a positive development. At the same time, there are issues and challenges we hear about and see every day: Delphi giving unethical and racist replies, Alexa suggesting suicidal thoughts to a user, and many more cases.
The big question here is: "Who exactly is to be blamed here, the machines or the humans?"
Is it justified to put the entire blame on machines and ban them, or should we humans take responsibility for such behaviour by machines?
If it is still not clear how humans are responsible for the bad behaviour of machines, think of the movie "Avengers: Age of Ultron", where Ultron tries to learn about the human world by reading human archives and concludes that the Avengers, and humans in general, are the world's biggest enemy. He therefore tries to destroy the world and start a new, perfectly balanced one.
A similar situation has arisen here. The model behind Delphi analysed the human replies in its training data and decided what it was appropriate to answer. But every single individual on this planet is unique and has their own way of thinking: what seems right to one group of people may be completely wrong to another, and what should be ethically correct in theory may not feel correct to everyone.
Thus, there is a lot to figure out before a final or better version of a human-like bot is possible.
Summary
Since this was the very first version of Delphi, it was in a nascent phase and had only just begun to learn about the human mind, which is what made it problematic. The team has since become much more conscious and concerned about the software and has improved it to a great extent; it is still not perfect, but it is still learning and improving.
Ask Delphi is not just a pastime or a college project; it has a bigger impact, as it opens the door to a breakthrough in machine learning and robotics: building a bot that can interact like a human and reply or advise not from hardcoded responses but from the human mindset, while being ethically correct at the same time.
References
An ‘ethical’ AI trained on human morals has turned racist