RISK/REWARD

AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate change, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

Books of Interest
 Website: chetyarbrough.blog

A Brief History of Artificial Intelligence (What It Is, Where We Are, and Where We Are Going)

By: Michael Wooldridge

Narrated By: Glen McCready

Michael Wooldridge (Author, British professor of Computer Science, and Senior Research Fellow at Hertford College, University of Oxford.)

Wooldridge served as President of the International Joint Conferences on Artificial Intelligence from 2015 to 2017 and President of the European Association for Artificial Intelligence from 2014 to 2016. He received a number of A.I.-related service awards in his career.

Alan Turing (1912-1954, Mathematician, computer scientist, cryptanalyst, philosopher, and theoretical biologist.)

Wooldridge’s history of A.I. begins with Alan Turing, who holds the honorific title of “father of theoretical computer science and artificial intelligence.” Turing is best known for his part in breaking the German Enigma code in WWII, aided by the electromechanical “Bombe” machines he helped design. He went on to propose the Turing test, which evaluates a machine’s ability to answer questions in a way indistinguishable from human behavior. Sadly, he is equally well known as a publicly persecuted homosexual who committed suicide in 1954 at the age of 41.

Wooldridge explains A.I. has had a roller-coaster history of highs and lows with new highs in this century.

Breaking the Enigma code is widely acknowledged as a game changer in WWII; it shortened the war and gave the Allied powers a strategic advantage. However, Wooldridge notes that enthusiasm for A.I. declined in the 70s and 80s because applications relied on laboriously hand-coded rules that introduced biases, ethical concerns, and prediction errors. Expectations of A.I.’s predictive power proved exaggerated.

The idea of a neuronal connection system was first proposed in 1943 by Warren McCulloch and Walter Pitts.

In 1958, Frank Rosenblatt developed the “Perceptron,” a program based on McCulloch and Pitts’s idea that made computers capable of learning. However, it required a cumbersome training process that failed to give consistent results. After the 80s, machine learning became more usefully predictive with Geoffrey Hinton’s development of backpropagation, i.e., an algorithm that propagates a network’s prediction errors backward through its layers and corrects its weights, improving A.I. predictions. Hinton went on to develop a neural network in 1986 that worked like the synapse structure of the brain, though with far fewer connections. That limited neural network gave computers a capability for reading text and collating information.
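Rosenblatt’s learning rule is simple enough to sketch in a few lines. The following toy example is this reviewer’s illustration, not code from Wooldridge’s book: a single perceptron nudges its weights toward the right answers on the logical AND function, which is the flavor of learning Rosenblatt demonstrated.

```python
# A toy perceptron learning the logical AND function.
# Illustrative only -- this reviewer's sketch, not from Wooldridge's book.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # one weight per input
    b = 0.0          # bias (the threshold, in effect)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Predict: fire (1) if the weighted sum crosses the threshold.
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction
            # Rosenblatt's rule: nudge weights toward the correct answer.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# AND truth table: output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for (x1, x2), target in data:
    out = 1 if (weights[0] * x1 + weights[1] * x2 + bias) > 0 else 0
    print(f"{x1} AND {x2} -> {out} (expected {target})")
```

Backpropagation generalizes this idea: instead of correcting one layer of weights, the error signal is pushed backward through many layers at once.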

Geoffrey Hinton (the “Godfather of AI,” winner of the 2018 Turing Award.)

Then, in 2006, Hinton developed the Deep Belief Network, which led to deep learning with a type of generative neural network. Larger neural networks offered more connections that improved image recognition, speech processing, and natural language understanding. In the 2010s, Google acquired deep-learning companies and applied the technology to its crawling and indexing of the internet. Fact-based decision-making and the accumulation of data paved the way for better A.I. utility and predictive capability.

Face recognition capability.

What seems lost in this history is the fact that all of these innovations were products of human cognition and creativity.

Many highly educated and inventive people, such as Elon Musk, Stephen Hawking, Bill Gates, Geoffrey Hinton, and Yuval Harari, believe the risks of AI are a threat to humanity. Musk calls AI a big existential threat and compares it to summoning a demon. Hawking felt AI could evolve beyond human control. Gates expressed concern that job displacement would have long-term negative consequences, with ethical implications that would harm society. Hinton believed AI would outthink humans and pose unforeseen risks. Harari believed AI would manipulate human behavior, reshape global power structures, and undermine governments.

All fears about AI have some basis for concern.

However, how good a job has society done throughout history without AI? AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate change, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

PATTERN ME

One may conclude from Hawkins’s research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or self-destruction.

Books of Interest
 Website: chetyarbrough.blog

On Intelligence

By: Jeff Hawkins, Sandra Blakeslee

Narrated By: Jeff Hawkins, Stefan Rudnicki

Jeff Hawkins (Author, co-founder of Palm Computing and Handspring, and co-creator of the PalmPilot and Treo.)

Hawkins and Blakeslee have produced a fascinating book that flatly disagrees with the belief that computers can or will ever think.

Hawkins develops a compelling argument that A.I. computers will never be thinking organisms. Artificial Intelligence may mislead humanity, but only as a tool of thinking human beings. This is not to say A.I. is not a threat to society, but it is “human use” of A.I. that is the threat.

Hawkins explains that A.I. in computers is a laborious process of one-and-zero switches that must be flipped for information to be revealed or action to happen.

In contrast to the mechanics of computers and A.I., human minds use pattern memory for action. Hawkins explains that human memory arises from six layers of neuronal activity in the brain’s neocortex. Pattern memory provides responses that come from living and experiencing life, while A.I. must flip a multitude of switches to recall information or take a single physical action. The human brain instantaneously records images of experience in those six layers of neuronal tissue; A.I. must meticulously and precisely flip individual switches to record information for which it has been programmed. A.I. does not think. It only processes information it is programmed to recall and act upon. If it is not programmed for a specific action, it does not think, let alone act. A.I. acts only in the way it is programmed by the minds of human beings.
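A toy sketch makes the contrast concrete. This is this reviewer’s illustration, not Hawkins’s code: a rule-based program responds only to events a programmer anticipated and draws a blank on everything else.

```python
# A toy rule-based responder -- this reviewer's illustration, not Hawkins's code.
# It "acts" only on inputs a programmer anticipated; everything else draws a blank.

RULES = {
    "light switch on": "turn on lamp",
    "light switch off": "turn off lamp",
}

def respond(event):
    # No rule, no action: the program cannot improvise from experience
    # the way a pattern-matching brain does.
    return RULES.get(event, "no programmed response")

print(respond("light switch on"))   # turn on lamp
print(respond("room feels dark"))   # no programmed response
```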

So, what keeps A.I. from being programmed to think in patterns like human beings? Hawkins explains that human patterning is a natural process that cannot be duplicated in A.I. because of the multi-layered nature of the brain’s neuronal process. When a human takes an action based on patterning, it requires no programming, only the experience of living. For A.I., patterned responses are not possible because programming is too rigid, based on ones and zeros rather than on imprecise pictures of reality.

What makes Jeff Hawkins so interesting is his broad experience as both a computer scientist and a neuroscientist. That experience lends credibility to the belief that A.I. is only a tool of humanity. Like any tool, whether it is an atom bomb or a programmed killing machine, human patterning is the determinant of world peace or destruction.

A brilliant example Hawkins gives of the difference between computers and the human brain is like having six business cards in one’s hand. Each card represents a complex amount of information about a person who is part of a business. With six cards, like six layers of neuronal receptors, a single card can stand for a multitude of information about six entirely different things. No “one and zero” switches are needed in a brain because each neuronal layer automatically forms a model of what each card represents. Adding to that complexity are an estimated 100 billion neurons in the human brain conducting basic motor functions, complex thoughts, and emotions.

There are an estimated 100 trillion synaptic connections in the human brain.

The largest computer in the world may have a quintillion yes-and-no answers programmed into its memory, but that pales beside a brain’s ability to model existence and then think and act in response to the unknown.

This reminds one of Sir Arthur Conan Doyle’s brilliant rendering of Sherlock Holmes’s mind. Holmes’s prodigious memory is based on recall of images recorded in the rooms of his “mind palace.”

Hawkins explains that computers do not “think” because human thought is based on modeling one’s experience of life in the world. A six-layered system of image modeling is beyond the foreseeable capabilities of computers. This is not to suggest A.I. is not a danger to the world, but that the danger remains in the hands and minds of human beings.

What remains troubling about Hawkins’s view of how the brain works is the human brain’s tendency to add what is not there to its models of the world.

The many eyewitness accounts of crime that have convicted innocent people illustrate this weakness: people use models of experience to remember events. Human minds’ patterning of reality can manufacture inaccurate models of truth because we want our personal understanding to make sense, which is not necessarily the same as truth.

Hawkins explains that the complexity of the six layers of neuronal receptors lies in their sending signals to different parts of the human body when models of experience are formed.

That is why, in some cases, we have a fight-or-flight response to what we see, hear, or feel. It also explains why recall differs among people whose neuronal layers operate better than others’. It is like the difference between a Sherlock Holmes and a Dr. Watson in Doyle’s fiction. It is also the difference between the limited knowledge of this reviewer and Hawkins’s scientific insight. What one hopes science comes up with is a way to equalize the function of our neuronal layers to make us smarter and, hopefully, wiser.

One may conclude from Hawkins’s research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or self-destruction.