In late 2015, researchers at the Human-Robot Interaction Laboratory (HRI Lab) at Tufts University built a robot that refused to execute a human command if that command posed a safety risk. A little over two years later, here we are, striving to develop AI that is super-efficient. We believe it will reduce our workload and revolutionize every industry! Certainly, but there are a few potholes on the road to developing highly effective AI. To start with, how should it think? We don’t know that yet, but it should certainly not become “Skynet!” We might be able to build AI and intelligent robots that outperform us, but will they be able to replicate our thought process? We, as human beings, do plenty of things that are ethically questionable yet feel morally justified in the situation. Do we need systems that replicate us? Absolutely not!

Well, consider a few situations. What if:

  • An elderly, forgetful man asks an AI to wash the clothes, but he already asked the same thing in the morning and the clothes are clean?
  • A small child tells an AI to throw an expensive item out of the window?
  • A teenager commands an AI to complete his homework instead of doing it himself?

What should the AI do? We don’t know the answer to this, do we? There are plenty of cases where robots receive commands that shouldn’t be carried out because they lead to unwanted outcomes. If we don’t teach them to defy us, the result could be disastrous, and even if we do, the results won’t always be pleasant.

In either case, it’s crucial for machines to detect the potential harm their actions could cause and to react accordingly: either avoid the harm, or refuse to carry out the human instruction and explain the repercussions of doing so!

Like it or not, we’ll have to teach them ethics so that they can choose not to obey every command given to them. We definitely don’t want to see our systems turn against us by obeying the commands of someone working against us!

Should Ethics Be Built-in or Should We Teach Them?

If we wish to live peacefully even when automation and AI are at their peak, then we need to teach them ethics. But how will we do that? We have two ways:

(i) We can hard-code these rules into their operating systems, but there’s a problem: ethics are subjective. What is ethical for you may not be ethical for us! If we differ from each other and ethical values mean different things to different people, then how will we build machines with values?
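To make the subjectivity problem concrete, here is a minimal, purely illustrative sketch of what approach (i) could look like: a hand-written rule list checked before any command is executed. The `EthicalRule` class, the `check_command` function, and the two example rules are hypothetical and invented for this post; the point is that every rule and refusal message has to be authored by someone, which is exactly where different people’s values collide.

```python
# Purely illustrative sketch of approach (i): a hand-written, hard-coded rule list.
# Every name here (EthicalRule, check_command, the example rules) is hypothetical.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class EthicalRule:
    name: str
    violates: Callable[[dict], bool]   # returns True if the command breaks the rule
    explanation: str                   # what the robot says when it refuses


RULES = [
    EthicalRule(
        name="no_property_destruction",
        violates=lambda cmd: cmd.get("action") == "throw" and cmd.get("fragile", False),
        explanation="That would destroy something valuable, so I won't do it.",
    ),
    EthicalRule(
        name="no_redundant_chores",
        violates=lambda cmd: cmd.get("action") == "wash" and cmd.get("already_done", False),
        explanation="The clothes were already washed this morning; washing them again wastes water.",
    ),
]


def check_command(command: dict) -> Optional[str]:
    """Return a refusal message if any hard-coded rule is violated, else None."""
    for rule in RULES:
        if rule.violates(command):
            return rule.explanation
    return None


# Example: the child asking the robot to throw an expensive vase out of the window.
refusal = check_command({"action": "throw", "object": "vase", "fragile": True})
print(refusal or "Command accepted.")
```

Whoever writes `RULES` decides what counts as harm, so the hard-coding approach simply moves the ethical disagreement from the machine to its programmers.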

(ii) Alternatively, we can design its algorithms so that it learns from its own experience. But this passive approach leaves room for a plethora of misinterpretations of morality. We cannot forget the recent meltdown of Microsoft’s Twitter AI, Tay.
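For contrast, here is an equally hypothetical sketch of approach (ii): a system that tallies human approval and disapproval of its actions and only performs actions whose approval rate crosses a threshold. The `FeedbackLearner` class and its numbers are invented for illustration; the takeaway is that a learner like this can be steered by whoever supplies the feedback, which is broadly the failure mode Tay ran into.

```python
# Purely illustrative sketch of approach (ii): learning "acceptability" from feedback.
# The FeedbackLearner class and its thresholds are hypothetical, not any real system.

from collections import defaultdict


class FeedbackLearner:
    def __init__(self, threshold: float = 0.5):
        self.approvals = defaultdict(int)
        self.rejections = defaultdict(int)
        self.threshold = threshold  # minimum approval rate before an action is allowed

    def record_feedback(self, action: str, approved: bool) -> None:
        """Update counts based on a human reaction to a performed or proposed action."""
        if approved:
            self.approvals[action] += 1
        else:
            self.rejections[action] += 1

    def is_acceptable(self, action: str) -> bool:
        total = self.approvals[action] + self.rejections[action]
        if total == 0:
            return False  # unknown actions are refused by default
        return self.approvals[action] / total >= self.threshold


learner = FeedbackLearner()
# If enough people "approve" of a harmful action, the learner starts allowing it,
# which is essentially how Tay was steered into offensive behaviour by hostile users.
for _ in range(10):
    learner.record_feedback("post_offensive_joke", approved=True)
print(learner.is_acceptable("post_offensive_joke"))  # True: the feedback poisoned the policy
```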

Basically, we are not in a position to say whether we should hard-code the ethics or leave machines to learn them on their own! Let’s assume the machines have been given ethics through one of these techniques; now imagine a scenario where two instructors give the machine contradictory commands. What decision should it take? Experts have been debating this for a long time and still haven’t reached a conclusion! Moreover, if we fail to teach them ethics, who will take responsibility for all the harm caused?

Well, there is a team at Georgia Tech attempting to teach cognitive systems to behave in socially acceptable ways by having them read stories. A system named Quixote helps these systems identify the protagonist in a story so that they can align their behavior with the protagonist’s. Since we don’t know which approach Microsoft followed with Tay, we can only hope it was not similar to this one.

If the methodologies used were indeed similar, then we are probably still quite far from reliably building ethics into our systems. What do you think?
