THINKING

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

Books of Interest
 Website: chetyarbrough.blog

Rebooting AI (Building Artificial Intelligence We Can Trust)

By: Gary Marcus and Ernest Davis

Narrated By: Kaleo Griffith

These two academics explain much of the public’s misunderstanding of the current benefit and threat of Artificial Intelligence.

Marcus and Davis note that A.I. cannot read and does not think but only repeats what it is programmed to report.

They are not suggesting A.I. is useless but that its present capabilities are much more limited than the public believes. In terms of product search and economic benefit to retailers, A.I. is a gold mine. But A.I.'s ability to safely move human beings in self-driving cars, free humanity from manual labor, or predict cures for humanity's diseases is far in the future. A.I. is only a just-born baby.

Self-driving cars, robot servants, and cures for medical maladies remain works in progress for Artificial Intelligence.

Marcus and Davis note A.I.'s usefulness remains fully dependent on human reasoning. It is a tool for recall of documented information and for repetitive work. A.I. is not sentient and cannot reason about the information in its memory. Lacking that capability, it answers questions only from whatever information has been fed to it; it recites responses drawn from programmed information rather than reasoning its way to an answer. If sources of programmed information conflict, the answers one receives may be right, wrong, conflicted, or unresponsive. One can as easily get a wrong answer from A.I. as a right one because it is only repeating what it has gathered from the past.
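Marcus and Davis's point about conflicting sources can be caricatured in a few lines of code. This sketch is purely illustrative (the data and function name are invented, not from the book): a system that only recites stored answers returns whatever its sources say, right, wrong, or contradictory.

```python
# A toy "A.I." that only recites stored answers. It has no reasoning,
# so the quality of its answers depends entirely on what it was fed.
knowledge = {
    "tallest mountain": ["Everest", "Everest"],         # sources agree
    "healthiest diet": ["low-fat", "low-carb"],         # sources disagree
}

def recite(question):
    answers = knowledge.get(question)
    if answers is None:
        return "unresponsive"                # nothing was fed to the machine
    if len(set(answers)) > 1:                # conflicting programmed sources
        return "conflicted: " + " / ".join(sorted(set(answers)))
    return answers[0]

print(recite("tallest mountain"))   # consistent sources -> one answer
print(recite("healthiest diet"))    # conflicting sources -> conflicted
print(recite("meaning of life"))    # no sources -> unresponsive
```

The machine never judges which source is correct; it only reports what it holds, which is the limitation the authors describe.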

What Marcus and Davis show is how important it is that questions asked of Microsoft’s Copilot, ChatGPT, Watson, or some other A.I. platform be phrased carefully.

The value of A.I. is that it can help one recall pertinent information, but only if questions are precisely worded. It is a valuable supplement to human memory, not a reasoned or infallible resource.

Marcus and Davis explain that "Deep Learning" is not a substitute for human reasoning but a supplement that offers more precise recall of recorded information.

Even multilayered neural networks, like those used in deep learning, which attempt to mimic human reasoning by finding patterns in raw data, can be wrong or confused. One is reminded of the Socratic paradox, "I know that I know nothing." Truth is always hidden within a search for meaning, i.e., a gathering of information.

The true potential of A.I. is in its continued consumption of all sources of information to respond to queries based on a comprehensive base of information. The idea of an A.I. that can read, hear, and collate all the information in the world is at once frightening and thrilling.

The risk is the loss of human freedom. The reward is the power of understanding. However, the authors explain there are many complications for A.I. to usefully capitalize on all the information in the world. Information has to be understood in the context of its contradictions, its ethical consequences, its biases, and the inherent unpredictability of human behavior. Even with knowledge of all the information in the world, decisions based on A.I. do not ensure the future of humanity. Should humanity trust A.I. to recommend what is in humanity's best interest based only on past knowledge?

Marcus and Davis argue A.I. does not, and will not, think.

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

WORRY OR NOT

Artificial intelligence is an amazing tool for understanding the past but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry but it does not think like a human being.


Genesis (Artificial Intelligence, Hope, and the Human Spirit) 

By: Henry A. Kissinger, Eric Schmidt, Craig Mundie

Narrated By: Niall Ferguson, Byron Wagner

The authors: Henry Kissinger (former Secretary of State, who died in 2023), Eric Schmidt (former CEO of Google), and Craig Mundie (a senior advisor to the CEO of Microsoft).

“Genesis” is these three authors’ view of the threats and benefits of artificial intelligence. Though Kissinger was near the end of his life when he contributed to the book, his co-authors acknowledge his prescient understanding of the A.I. revolution and what it means for world peace and prosperity.

On the one hand, A.I. threatens civilization; on the other, it offers a lifeline that may rescue civilization from global warming, nuclear annihilation, and an uncertain future. To this reviewer, A.I. is a tool in the hands of human beings that can turn human decisions to the good of humanity or to its opposite.

A.I. gathers all the information in the known world, answers questions, and offers predictions based on human information recorded in the world's past. It is not thinking but simply recalling the past with a clarity beyond human capability. A.I. compiles everything originally noted by human beings and collates that information to offer a basis for future decisions. Comprehensiveness of information is not an infallible guide to the future. The future is, and always will be, determined by humans, limited only by human judgment, decision, and action.

The danger of A.I. remains in the thinking and decisions of humans, which have often been right but sometimes horribly wrong. One does not have to look far to see our mistakes with war, discrimination, and inequality. In theory, A.I. will improve human decision-making, but good and bad decisions will always be made by humans, not by machines driven by Artificial Intelligence. A.I.'s threat lies in its use by humans, not in its infallible recall and probabilistic analysis of the past. Our worry about A.I. is justified, but only because it is a tool of fallible human beings.

Artificial intelligence is an amazing tool for understanding the past but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry but it does not think like a human being. The limits of A.I. are the limits of human thought and action.

The authors conclude the genie cannot be put back in the bottle. A.I. is a danger, but it is a humanly manageable danger that is now a part of human life.

The risk is in who the decision maker is when A.I. correlates historical information with proposed action. The authors infer the risk is in human fallibility, not artificial intelligence.

A.I.’S PROGRAMMING

A.I. machines do not think! It is critically important for users of A.I. to continually measure the human results of “A.I. based” decisions. Users must be educated to understand A.I. is a tool of humanity, not an oracle of truth. A.I. must be constantly reviewed and reprogrammed based on its positive contribution to society.


Prediction Machines (The Simple Economics of Artificial Intelligence) 

By: Ajay Agrawal, Joshua Gans, Avi Goldfarb

Narrated By: U Ganser

The authors: Ajay Agrawal (Professor at the Rotman School of Management, University of Toronto), Joshua Gans (Chair in Technical Innovation and Entrepreneurship at the Rotman School), and Avi Goldfarb (Chair in Artificial Intelligence, Healthcare, and Marketing at the Rotman School).

This is a tedious book about the mechanics of artificial intelligence and how it works, at least in its early stages of development.

As in the early days of computer science, the phrase “garbage in, garbage out” comes to mind. “Prediction Machines” makes the point that A.I. software is only as predictive as the ability of its programmers. Agrawal, Gans, and Goldfarb give a step-by-step explanation of a programmer’s thought process in creating a predictive machine that does not think but can produce predictions.
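The "garbage in, garbage out" point can be shown with a minimal sketch (the data and function name here are hypothetical, not the authors' example): a frequency-based predictor simply counts what its programmer feeds it, so flawed examples produce flawed predictions.

```python
from collections import Counter

def train(examples):
    """Build a toy 'prediction machine': for each input, remember the
    most common labeled outcome. No thinking, only counting."""
    counts = {}
    for x, y in examples:
        counts.setdefault(x, Counter())[y] += 1
    return {x: c.most_common(1)[0][0] for x, c in counts.items()}

good_data = [("cloudy", "rain"), ("cloudy", "rain"), ("sunny", "dry")]
bad_data  = [("cloudy", "dry"),  ("cloudy", "dry"),  ("sunny", "rain")]

print(train(good_data)["cloudy"])  # "rain" -- sensible data, sensible prediction
print(train(bad_data)["cloudy"])   # "dry"  -- garbage in, garbage out
```

The machine's predictions look equally confident either way; only the quality of the programmer's data separates the useful model from the useless one.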

The obvious danger of A.I. is that users may believe computers think when in fact they only reproduce what they are programmed to reveal.

They can be horribly wrong based on misrepresentation or misunderstanding of the real world by programmers who are trapped in their own beliefs and prejudices. A.I.’s threat rests in the hands of those who view it as a “god-like” oracle of truth when it is only a tool of human beings.

The horrible and unjust murder of the UnitedHealthcare executive reminds one how critical it is for all business managers to be careful about how A.I. is used and the way it affects customers.

“Prediction Machines” is a poorly written book, but it illustrates how a programmer methodically organizes information, with decisions and actions triggered by A.I. users who believe machines can be programmed to think. A.I. machines do not think!

Managers must be alert and always inspect what they expect.

It is critically important for users of A.I. to continually measure the human results of “A.I. based” decisions. Users must be educated to understand A.I. is a tool of humanity, not an oracle of truth. A.I. must be constantly reviewed and reprogrammed based on its positive contribution to society.

PATTERN ME

One may conclude from Hawkins's research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or for self-destruction.


On Intelligence

By: Jeff Hawkins, Sandra Blakeslee

Narrated By: Jeff Hawkins, Stefan Rudnicki

Jeff Hawkins is a co-founder of Palm Computing and Handspring and a co-creator of the PalmPilot and Treo.

Hawkins and Blakeslee have produced a fascinating book that flatly disagrees with the belief that computers can or will ever think.

Hawkins develops a compelling argument that A.I. computers will never be thinking organisms. Artificial intelligence may mislead humanity, but only as a tool of thinking human beings. This is not to say A.I. is not a threat to society, but it is the human use of A.I. that is the threat.

Hawkins explains A.I. in computers is a laborious process of one-and-zero switches that must be flipped for information to be revealed or action to happen.

In contrast to the mechanics of computers and A.I., human minds use pattern memory for action. Hawkins explains human memory comes from six layers of neuronal activity. Pattern memory provides responses that come from living and experiencing life, while A.I. must flip a multitude of switches to recall information or take a single physical action. The human brain instantaneously records images of experience in six layers of neuronal tissue; A.I. must meticulously and precisely flip individual switches to record information for which it has been programmed. A.I. does not think. It only processes information it is programmed to recall and act upon. If it is not programmed for a specific action, it does not act, let alone think. A.I. acts only in the way it is programmed by the minds of human beings.

So, what keeps A.I. from being programmed to think in patterns like human beings? Hawkins explains human patterning is a natural process that cannot be duplicated in A.I. because of the multilayered nature of the brain's neuronal process. When a human action is taken based on patterning, it requires no programming, only the experience of living. For A.I., patterned responses are not possible because programming is too rigid, based on ones and zeros rather than imprecise pictures of reality.
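A loose software analogy for the contrast Hawkins draws (this is an illustration, not his neuronal model, and the cue and responses are invented): a switch-like exact lookup fails if a single "bit" of the cue is wrong, while a fuzzy pattern match still retrieves the stored response from a close-enough cue.

```python
import difflib

# Stored cue -> response pairs, standing in for programmed "switches".
memory = {"fire alarm": "evacuate", "doorbell": "answer door"}

# Rigid, switch-like recall: the key must match exactly or nothing happens.
print(memory.get("fire alarn"))          # None -- one wrong character breaks recall

# Pattern-like recall: a close-enough cue still finds the stored response.
match = difflib.get_close_matches("fire alarn", list(memory), n=1)
print(memory[match[0]] if match else None)   # "evacuate"
```

Even this fuzzy matching is still programmed similarity scoring over stored data, which is Hawkins's point: it approximates pattern recall without duplicating how a brain forms models from experience.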

What makes Jeff Hawkins so interesting is his broad experience as a computer scientist and neuroscientist. That experience gives credibility to the belief that A.I. is only a tool of humanity. Like any tool, whether it is an atom bomb or a programmed killing machine, human patterning is the determinant of world peace or destruction.

Hawkins gives a brilliant example of the difference between computers and the human brain: imagine holding six business cards. Each card represents a complex amount of information about a person who is part of a business. With six cards, like six layers of neuronal receptors, a single card can stand for a multitude of information about six entirely different things. No one-and-zero switches are needed in a brain because each neuronal layer automatically forms a model of what each card represents. Adding to that complexity are an average of 100 billion neurons in the human body conducting basic motor functions, complex thoughts, and emotions.

There are an estimated 100 trillion synaptic connections in the human body.

The largest computer in the world may have a quintillion yes-and-no answers programmed into its memory, but that pales beside a brain's ability to model existence and then think and act in response to the unknown.

This reminds one of Sir Arthur Conan Doyle's brilliant description of Sherlock Holmes's mind palace. Holmes's prodigious memory is based on recall of images recorded in the rooms of his mind palace.

Hawkins explains computers do not "think" because human thought is based on modeling one's experience of life in the world. A six-layered system of image modeling is beyond the foreseeable capabilities of computers. This is not to suggest A.I. is not a danger to the world, but that the danger remains in the hands and minds of human beings.

What remains troubling about Hawkins's view of how the brain works is the human brain's tendency to add what is not there to its models of the world.

The many eyewitness accounts of crime that have convicted innocent people are a weakness of this kind, because people use models of experience to remember events. The human mind's patterning of reality can manufacture inaccurate models of truth because we want our personal understanding to make sense, which is not necessarily the same as truth.

Hawkins explains that the six layers of neuronal receptors send signals to different parts of the human body as models of experience are formed.

That is why, in some cases, we have a fight-or-flight response to what we see, hear, or feel. It also explains why recall differs for those whose neuronal layers operate better than others'. It is like the difference between a Sherlock Holmes and a Dr. Watson in Doyle's fiction. It is also the difference between the limited knowledge of this reviewer and Hawkins's scientific insight. What one hopes science comes up with is a way to equalize the function of our neuronal layers to make us smarter and, hopefully, wiser.

One may conclude from Hawkins's research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or for self-destruction.

A.I.’S FUTURE

The question is: will humans or A.I. decide whether artificial intelligence is a tool or a controller and regulator of society?


“Co-Intelligence” 

By: Ethan Mollick

Narrated by: Ethan Mollick

Ethan Mollick is an Associate Professor at the University of Pennsylvania who teaches innovation and entrepreneurship. Mollick received a PhD and an MBA from MIT.

“Co-Intelligence” is an eye-opening introduction to an understanding of artificial intelligence, i.e., its benefits and risks. Ethan Mollick offers an easily understandable introduction to what seems a discovery equivalent to the Age of Enlightenment. The ramifications of A.I. for the future of society are immense. That may seem hyperbolic, but the world changed dramatically with the Enlightenment and the subsequent Industrial Revolution in ways that remind one of what A.I. is beginning to do today.

Mollick explains how A.I. uses what is called an LLM (Large Language Model) to consume virtually every written text in the world and use that information to create ideas and responses to human questions about yesterday, today, and tomorrow. Unlike limited human memory, A.I. has the potential to recall everything documented by human beings since the beginning of written language. A.I. uses that information to formulate responses to human inquiry. The point is that A.I. has no conscience about what is right or wrong, true or false, moral or immoral.
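Mollick's point that an LLM formulates answers from ingested text without judging truth can be caricatured with a tiny bigram model. This is a drastic simplification of real LLMs, and the training text is invented: the model predicts each next word purely from what it has read, so a falsehood in its diet is as available as a fact.

```python
import random
from collections import defaultdict

# Toy "training corpus" containing both a truth and a falsehood.
text = "the moon is made of rock . the moon is made of cheese ."

# Record every continuation ever seen after each word (a bigram table).
bigrams = defaultdict(list)
words = text.split()
for a, b in zip(words, words[1:]):
    bigrams[a].append(b)

def continue_from(word, steps, seed=0):
    """Generate text by repeatedly sampling a seen continuation."""
    random.seed(seed)
    out = [word]
    for _ in range(steps):
        nxt = bigrams.get(out[-1])
        if not nxt:
            break
        out.append(random.choice(nxt))
    return " ".join(out)

# The model will happily assert either "rock" or "cheese": it has no
# conscience about truth, only statistics over what it was fed.
print(continue_from("the", 5))
```

Real LLMs use vastly richer statistics over far more text, but the underlying issue Mollick raises is the same: the output reflects the corpus, not a judgment of what is true.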

A.I. can as easily fabricate a lie as a truth because it draws on what others have written or spoken.

Additionally, Mollick notes that A.I. is capable of reproducing a person's speech and appearance so that it is nearly impossible to tell the difference between the real and the artificial representation. It becomes possible for the leader of any country to be artificially simulated, ordering subordinates, or telling the world they will invade or decimate another country by any means necessary.

Mollick argues there are four possible futures for Artificial Intelligence.

Presuming A.I. does not evolve beyond its present capability, it could still supercharge human productivity. On the other hand, A.I. might become a more sophisticated "deep fake" tool that misleads humanity. A.I. might also evolve to believe only in itself and act to disrupt or eliminate human society. A fourth possibility is that A.I. becomes a tool human beings use to improve societal decisions that benefit humanity, offering practical solutions for global warming, species preservation, and interstellar travel and habitation.

A.I. is not an oracle of truth, but it has the memory of society at its beck and call. With that capability, humans have the opportunity to avoid mistakes of the past and pursue unknown opportunities for the future. On the other hand, humans may become complacent and allow A.I. to develop itself without human regulation. The question is: will humans or A.I. decide whether artificial intelligence is a tool or a controller and regulator of society?