Tuesday 19 February 2019

The morality of a machine

It is 2019, and everybody is gung-ho about Artificial Intelligence and Machine Learning. If we look around, many systems that exist today incorporate AI/ML in some shape or form. Most systems in production consist of typical classifiers or regression models that provide information to support a particular decision. The question of the morality of a machine has not been a top consideration because, until now, AI/ML has been producing output that a human uses to make the actual decision.
When the Google Photos image-tagging algorithm misclassifies an image and tags it with somebody else's name, it is not a serious problem, because I can personally go ahead and reclassify the photo with the correct name. The algorithm still correctly classifies more than 90% of images and reduces my workload. When Gmail misclassifies an important email containing a notice from the income tax department, it is a significant hassle, but still something I can easily fix. The primary point here is that all the output created by AI is brought to the notice of a human, who looks at all the available data before making any decision.
But the day is not far off when we will reach a situation where machines directly handle responses and humans are either not in the loop or have become so unmindful of what the machines are doing for them that they simply ignore those actions. For example, it is not inconceivable to expect Gmail to automatically respond to a notice received from the income tax department by looking at your financial data stored on Google Drive. It does not have that capability today, but the technology exists with which this capability could easily be built.
The problem becomes more compounded when the actions taken by machines may result in serious hazards to humans. Look at all the excitement around self-driving cars. When humans drive cars, they constantly make decisions that are driven by morality rather than plain logic. Take an example where you are driving your car and on one side there is a pedestrian or a cyclist, while on the other side there is another car. Most humans would err on the side of hitting the car, because that causes only material damage. Our morality expects us to value human life above material things. If we extend this problem, let's say the self-driving car's AI realizes that it cannot avoid an accident and has to choose between a child and an aged person. Suddenly the decision becomes extremely difficult. Humans make these decisions instinctively, and many are haunted by those decisions for years down the line.
Should you value the life of a child higher than that of a senior citizen? What about when you have to choose between a parent and a child? What about choosing between a head of state and an ordinary person? These are complex decisions. We can write code that makes these decisions, but the rules have to be defined and agreed upon by society and governments. If every car manufacturer starts making its own moral decisions, the result may be anarchy. Governments, societies, and courts need to define a set of morality rules that every autonomous AI has to follow. I think the time has come for Asimov's three laws to be expanded into something like Asimov's three laws and other morality principles. These principles need to define how a machine can evaluate different outcomes in a given situation and choose an outcome that is acceptable to courts, societies, and governments. Until that happens, AI should be relegated to a decision support system and should not be given any actual control.
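The idea that the rules must come before the code can be illustrated with a toy sketch. Everything below — the outcome names, the harm scale, the weights — is a hypothetical illustration, not any real standard; the hard part is precisely that society and governments, not the programmer, would have to supply those weights.

```python
# A toy sketch of rule-based outcome evaluation for an autonomous system.
# All names, scales, and weights are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    human_harm: float       # 0.0 (no harm) to 1.0 (fatal) -- assumed scale
    property_damage: float  # 0.0 (none) to 1.0 (total loss) -- assumed scale

# Hypothetical society-defined weights: human life valued far above property.
HUMAN_WEIGHT = 1000.0
PROPERTY_WEIGHT = 1.0

def harm_score(outcome: Outcome) -> float:
    """Lower is better; encodes the rule 'value human life above material things'."""
    return (HUMAN_WEIGHT * outcome.human_harm
            + PROPERTY_WEIGHT * outcome.property_damage)

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the smallest weighted harm."""
    return min(outcomes, key=harm_score)

hit_car = Outcome("swerve into the parked car", human_harm=0.0, property_damage=0.9)
hit_cyclist = Outcome("swerve toward the cyclist", human_harm=0.8, property_damage=0.1)

print(choose([hit_car, hit_cyclist]).description)  # the parked car wins
```

The mechanism is trivial; the controversy lives entirely in the weights. Change HUMAN_WEIGHT per category of person (child, senior citizen, head of state) and the code happily makes the choice — which is exactly why those numbers need to be defined and agreed upon publicly, not picked by each manufacturer.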


  2. Self-driving cars are a luxury to relieve rich and smart people from mundane tasks. Safety tech is smart enough now that self-driving cars may never be in an at-fault accident. Every accident will be due to someone breaking a law, and the moral responsibility should be on that person. If that person happens to be a child, then it will fall on his/her parent(s) or guardian.

    1. That is the current state, but in a future where there are truly autonomous vehicles, maybe cars owned by Ubers, the fault for decisions would eventually lie with the manufacturers. I am just saying that it is complex decision-making which would require morality to be programmed, not just a difference engine.


