Philosophy and Morals of Technology - Digital Ethics


By Jon Salisbury

I recently read a few articles that are starting to educate the market on digital ethics and morals in technology. I also believe that incredibly bright groups such as MIT are working on this in ways that are completely wrong and scary at best. In my opinion, they are doing the very thing we must never do.

Morals date back to the very beginning of recorded history, with philosophers such as Aristotle, Plato, and those who came before them. Today we are working to put in place structures to govern our technology, and miscalculating the consequences could lead to negative outcomes.

Never should we put in place "a non-static value on human life," as the long-term implications are incredibly scary.

This means I propose, from a technological lens, that all human life is always valued at the highest level at all times. I believe we should move forward with this with no exceptions, other than considering humanity as a whole, and in saying this I am taking a utilitarian approach to how technology should work. The reasoning is that cars are not people and should not act like people. People make moral judgments; cars and technology will not and should not move in that direction.
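
To make the distinction concrete, here is a minimal sketch in Python with entirely hypothetical types and weights (none of this is anyone's real system), contrasting a "non-static" valuation of human life with the constant, highest-level valuation proposed above:

```python
# Hypothetical sketch: a "non-static" value on human life vs. a constant one.
from dataclasses import dataclass


@dataclass
class Person:
    age: int
    occupation: str


def non_static_value(person: Person) -> float:
    """The approach to avoid: a computed, comparative value per person."""
    score = 1.0
    if person.occupation == "surgeon":        # arbitrary, made-up weighting
        score += 0.5
    score -= 0.01 * max(person.age - 60, 0)   # arbitrary, made-up age penalty
    return score


def constant_value(person: Person) -> float:
    """The proposed rule: every human life carries the same, highest value."""
    return float("inf")  # no person is ever ranked above another


if __name__ == "__main__":
    a, b = Person(30, "surgeon"), Person(75, "retired")
    print(non_static_value(a) > non_static_value(b))  # True: this system ranks lives
    print(constant_value(a) == constant_value(b))     # True: this one never does
```

The point of the second function is not that it is clever; it is that it leaves nothing to compute, so there is no mechanism by which one life can ever be weighed against another.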

The problem is deciding which ethics we should put in place for technology and how we should apply them.

Brief Overview of Ethics

[Image: Ethics application, meta-theory applied]

Some Ethical Structure Options

  1. Contractarianism
    [Image: Fundamental elements of contractarianism]

  2. Act Utilitarianism
    [Image: Slide on J.S. Mill's utilitarianism]

  3. Rule Utilitarianism

Since technology is really a utility, I think a rule-based utilitarian approach can be put in place with the proper overarching structure.
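
As a rough illustration of what I mean by "rule-based," the sketch below (hypothetical rules and action names, not a real driving stack) shows the difference between an act-utilitarian planner, which scores outcomes case by case, and a rule-utilitarian planner, where fixed, pre-agreed rules filter the options before any utility math runs:

```python
# Hypothetical sketch: act utilitarianism vs. rule utilitarianism in a planner.
from typing import Callable, List

Action = str


def act_utilitarian(actions: List[Action], utility: Callable[[Action], float]) -> Action:
    """Pick whichever action maximizes a per-case utility estimate."""
    return max(actions, key=utility)


def rule_utilitarian(actions: List[Action], rules: List[Callable[[Action], bool]]) -> List[Action]:
    """Keep only the actions that satisfy every fixed, pre-agreed rule."""
    return [a for a in actions if all(rule(a) for rule in rules)]


if __name__ == "__main__":
    actions = ["brake hard", "swerve toward pedestrian", "swerve toward wall"]

    # Made-up utility numbers: a case-by-case calculation can pick any option.
    fake_utility = {"brake hard": 0.4, "swerve toward pedestrian": 0.9, "swerve toward wall": 0.6}
    print(act_utilitarian(actions, fake_utility.get))           # 'swerve toward pedestrian'

    # A fixed rule removes that option before any scoring happens.
    never_target_a_person = lambda a: "pedestrian" not in a
    print(rule_utilitarian(actions, [never_target_a_person]))   # ['brake hard', 'swerve toward wall']
```

In this framing, the overarching structure lives in the rule list: it is agreed on ahead of time, applies to every case the same way, and never asks the machine to rank people.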

Some of the people who have put the most thought into this topic are sci-fi writers who dreamed of a future in which technology would be put in the position of making decisions that could have negative consequences for human life.

Asimov's Three Laws of Robotics are a perfect example:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Later, Asimov added a fourth, or zeroth, law that preceded the others in terms of priority:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
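
The laws themselves can be read as a priority-ordered rule structure. Here is a minimal sketch (hypothetical action fields and predicates, not anything from Asimov's stories) of that ordering, where an earlier law always preempts a later one when they conflict:

```python
# Hypothetical sketch: Asimov's laws as a priority-ordered rule list.
from typing import Callable, Dict, List, Tuple

Action = Dict[str, bool]

LAWS: List[Tuple[str, Callable[[Action], bool]]] = [
    ("Zeroth: do not harm humanity", lambda a: not a.get("harms_humanity", False)),
    ("First: do not harm a human",   lambda a: not a.get("harms_human", False)),
    ("Second: obey human orders",    lambda a: a.get("obeys_order", True)),
    ("Third: protect own existence", lambda a: not a.get("destroys_self", False)),
]


def first_violation(action: Action) -> int:
    """Index of the highest-priority law this action breaks (len(LAWS) if none)."""
    for i, (_, check) in enumerate(LAWS):
        if not check(action):
            return i
    return len(LAWS)


def choose(actions: List[Action]) -> Action:
    """Prefer the action whose worst violation is the lowest-priority law."""
    return max(actions, key=first_violation)


if __name__ == "__main__":
    obey_harmful_order = {"obeys_order": True, "harms_human": True}
    refuse_the_order = {"obeys_order": False, "harms_human": False}
    # Refusing only breaks the Second Law; obeying breaks the First, so refusal wins.
    print(choose([obey_harmful_order, refuse_the_order]) is refuse_the_order)  # True
```

The hard part, of course, is not the ordering but the predicates: deciding what counts as "harm" is exactly the undefined human morality the rest of this article worries about.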

Read more about the problems with a four-law society here:
https://io9.gizmodo.com/why-asimovs-three-laws-of-robotics-cant-protect-us-1553665410

So why talk about all this and care enough to write an article? The world of AI and driverless cars is coming fast, and people (such as those at MIT) are trying to create systems that mimic human morality, which truly is not 100% defined. I believe that if we think we can make machines just like humans, we invite many possible negative consequences. We will be pushed into an eventual system that holds multiple values of human life, calculated to pick the human deemed more valuable to save.

My call is for the philosophers of our day to create the ethos needed to sit above the noise and provide clarity, so that our children's future is one in which all human life is always valued at the highest level at all times.

I plan to come back to this topic with some thought experiments I have been working on, which clearly show the flaws in my model and in others. The answer will need to come from someone much smarter than I am.

If you haven't visited MIT's Moral Machine, please take a look here:
http://moralmachine.mit.edu/

Author - Jon Salisbury - CEO @ smartLINK
