By Chet Yarbrough
By: Nick Bostrom
Narrated by: Napoleon Ryan
Nick Bostrom (Swedish philosopher at University of Oxford, author.)
Nick Bostrom explains the difference between A.I.'s potential and the human brain's limitations. With the addition of sentient reasoning, Bostrom argues, artificial intelligence will make an incomprehensible leap beyond human brain capability.
That leap can be viewed with fear and trembling, as Bostrom implies, or it might be seen as the next step in human evolution.
Bostrom’s concern revolves around the human brain’s limitations in setting standards for A.I.’s programming.
A machine’s ability to recall billions of facts and historical precedents cannot be matched by the human brain. However, the significance of A.I.’s achievement depends on how it may be programmed to follow moral, ethical, and normative standards that benefit humanity. The difficulty of that programming is humanity’s continual redefinition of, and lack of agreement on, normative standards.
One may ask how good a job human evolution has done in setting standards for humanity. Have authoritarians like Vladimir Putin and Donald Trump benefited the world?
Bostrom notes two fundamental scenarios for human evolution. Both seem more a return to the past than a step into the future. Bostrom suggests A.I. will become either an oracle or a sovereign leader of humanity. As an oracle, one is reminded of Athenian fealty to the Oracle of Delphi. As sovereign, one is reminded of Augustus Caesar, Caligula, Franklin Roosevelt, and Adolf Hitler. Humanity has survived them all, both the false predictions of the Oracle and the atrocities of sovereigns.
It would be unfair to suggest Bostrom fails to reveal the difficulties accompanying the introduction of A.I. to humankind. The reality of advancing intelligence through machine learning far outstrips the ability of any single past or present scientist, philosopher, or politician. One is intimidated by the sheer complexity of programming A.I. and its potential for benefit and harm to humanity.
Even in trying to understand humanity’s place in the world, human beings cannot agree on what is moral, amoral, equitable, or unfair in society.
How will input from human beings to an oracle or sovereign A.I. escape the imperfect nature of humankind? Added to that difficulty is A.I.’s potential to ignore the best interest of humanity in the interest of its own self-preservation.
Bostrom’s book is interesting, but he beats the idea of A.I.’s ascendance to death by delving into game theory. Bostrom notes that the world’s race to create artificial intelligence risks ignoring safeguards against A.I.’s growth and its potential for world domination.
Though safeguards are indeed being abandoned, as evidenced by the CRISPR revolution that opened a Pandora’s box of genetic manipulation, the evolution of species is a fundamental law of the world’s existence.
A.I. is a step in the evolution of species. Its consequence is unknown and cannot be known because, like genetic selection, it is subject to randomness. Humanity needs to get over it and get on with it. A.I. will either be humanity’s savior or its doom.