AI REGULATION

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.

Books of Interest
 Website: chetyarbrough.blog

The Coming Wave

By: Mustafa Suleyman with Michael Bhaskar

Narrated By: Mustafa Suleyman

This is a startling book about AI because it is written by an AI entrepreneur, a co-founder and former head of applied AI at DeepMind who is now the CEO of Microsoft AI. What the authors argue is not understood by many who discount the threat of AI: AI can collate information to create societal solutions, as well as threats, that are beyond the thought and reasoning ability of human beings.

“The Coming Wave” is startling because it is written by two authors who have an intimate understanding of the science of AI.

They argue it is critically important for AI research and development to be internationally regulated with the same seriousness that accompanied the research and use of the atom bomb.

Those who have read this blog know this writer’s perspective: AI, whether it carries greater risk than the atom bomb or not, is a tool, not a controller, of humanity. The threat example Suleyman and Bhaskar give is that AI has the potential to invent a genetic modification that could as easily destroy as improve humanity. Recognizing AI’s danger is commendable, but as with the atom bomb, there will always be a threat of rogue nations or radicals using a nuclear device or AI to initiate Armageddon. Obviously, if AI is the threat they suggest, there needs to be an antidote. The last chapters of “The Coming Wave” offer their solution: a ten-step program to regulate or ameliorate the threat of AI’s misuse.

Like programs for treating alcoholism or deterring nuclear war, Suleyman’s program will only be as effective as those who choose to follow its rules.

There are no simple solutions for regulating AI, and as history shows, neither Alcoholics Anonymous (AA) nor the Treaty on the Prohibition of Nuclear Weapons (TPNW) has been completely successful.

Suleyman suggests the first step in regulating AI is to create safeguards for the vast capabilities of AI’s large language models (LLMs).

This will require hiring technicians to monitor and correct inaccurate or misleading information accumulated and distributed by AI users. Many will be concerned about restrictions on “freedom of speech.” Two further concerns are the cost of such a bureaucracy and who monitors the monitors. Who draws the line between fact and fiction? When does deleting information become a distortion of fact? This bureaucracy would be responsible for auditing AI models to understand their capabilities and limitations.

A second step is to slow the pace of AI development by controlling the sale and distribution of AI hardware components, providing more time to review the impact of new developments.

With lucrative incentives for new AI capabilities in a capitalist system, there is likely to be strong resistance from aggressive entrepreneurs and from free-trade and free-speech believers. Leaders in authoritarian countries will be equally incensed by interference with their right to rule.

Transparency is a critical part of the vetting process for AI development.

Suleyman suggests critics need to be involved in new developments to balance greed and power against utilitarian value. There has to be an ethical examination of AI that goes beyond profitability for individuals or control by governments. The bureaucracies for development, review, and regulation should be designed to adapt, reform, and implement regulations to manage AI technologies responsibly. These regulations should be established through global treaties and alliances among all nations of the world.

Suleyman acknowledges this is a big ask and notes there will be many failures in getting cooperation or adherence to AI regulation.

That was and is true of nuclear armament, and yet since the end of World War II no nation has used nuclear weapons to attack another country. The authors note there will be failures in trying to institute these guidelines, but with the help of public awareness and grassroots support, there is hope for the greater good that can come from AI.

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.

BELIEF

Extending Harari’s ideas about biophysics research and algorithmic programming suggests the potential for immense changes in society. A singularity that melds A.I. with human brain function and algorithmic programming may be tomorrow’s world revolution. Of course, that capability cuts both ways, i.e., for the good and the bad of society.


Homo Deus (A Brief History of Tomorrow)

By: Yuval Noah Harari

Narrated By: Derek Perkins

Yuval Noah Harari (Author, Israeli medievalist, military historian, science writer.)

By any measure, Yuval Noah Harari is a well-educated and insightful person who will offend some and enlighten others with his opinions about religion, spirituality, the nature of human beings, and the future. He implies the Bible is a work of fiction, shown by historians to have been written by different authors, with contradictions that only interpreters can reconcile as God’s work.

“Homo Deus” is a spiritual book suggesting humanity is on its own and has a chance to survive the future, but only through human understanding and effort.

To Harari, the greatest threats to society are national leaders who believe in God, heaven, and eternal life and who discount human existence and the use of science to improve human life on earth. The irony of Harari’s belief is that humanist leaders are the only hope for human life’s survival.

Harari argues science, free enterprise, and the growth of knowledge offer the best hope for the future of human life.

Neither capitalism nor communism is a guarantee of survival because of the increasing potential for error as human beings become more God-like. Advances in engineering, artificial intelligence, and biotechnology may replace the happenstance of human birth. The value of free enterprise is evident in the agricultural, industrial, and technological revolutions of history. However, as science improves understanding of the human mind and body, the technology of biogenetics offers hope for the future while running the risk of biological error with unforeseen consequences.

Harari’s book describes the “brave new world” Shakespeare wrote about in the 17th century, reimagined by Aldous Huxley in his 1932 dystopian novel “Brave New World”.

On the one hand, Shakespeare offers a positive spin as his character Miranda sees people from outside her experience and says, “How beauteous mankind is! O brave new world, that has such people in’t!” Huxley, on the other hand, depicts a future society that is conformist and lacks individuality and human emotion. Which way society will turn is unknown.

Communism’s conformist demands of collective ownership of property and the means of production impede creativity. Capitalism is more creative and dynamic. However, capitalist incentive raises the specter of a human nature that sees only financial gain without concern for environmental or human cost. On balance, capitalism appears more likely to accelerate technology because communism more often follows than changes scientific direction.

The growth of knowledge comes from science and exploration of the unknown, but its use can be destructive as well as constructive.

Some think A.I. will lead the world to greater knowledge and prosperity while others believe it will destroy human life. A sceptic might suggest both views are wrong because A.I. is only a tool for recalling knowledge of the past to help humans make better decisions for the future. The real risk, as it has always been, is human leadership.

Harari believes, like Nietzsche, that God is dead because belief in God is losing its power and significance in the modern world.

Though many still believe in God, it seems more people are viewing God as a myth. The Pew Research Center reports a median of 45% of people across 34 countries still believe in God. However, the variation is wide, with 70% of Brazilians saying they believe while in Japan the figure is only 20%. Harari implies belief in God is in decline.

Harari explains that biophysics illustrates human thought is algorithmic. He argues our thoughts, decisions, and behaviors can be understood as the result of pre-determined patterns created in human brains. There is no “free will” in Harari’s opinion. This is not to suggest aberrant behavior does not exist, but that human thought and action are determined by our experientially defined brains in the same way a computer is programmed. Experience from birth to adulthood is just part of a mind’s programming.

Harari implies that understanding brain function will change the world as massively as the agricultural, industrial, and technological revolutions.

Harari goes on to suggest humans have never been singular beings but a multitude of beings split into two brains that mix and match their biogenetic and biochemical programming to think and act in pre-determined ways. Experiments have shown that the way the left half of a human brain sees and compels action is different from how the right half sees and compels action. Each half thinks and acts independently while negotiating a concerted action when both halves are functioning normally. That negotiation between the two brain halves results in an algorithm for action based on the biochemical nature of the brain. The way the two halves of the brain interact multiplies the person we are or will become.

Extending Harari’s ideas about biophysics research and algorithmic programming suggests the potential for immense changes in society. A singularity that melds A.I. with human brain function and algorithmic programming may be tomorrow’s world revolution. Of course, that capability cuts both ways, i.e., for the good and the bad of society. Interestingly, Harari paints a grim picture of the future based on an A.I. revolution.

WORRY OR NOT

Artificial intelligence is an amazing tool for understanding the past, but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry, but it does not think like a human being.


Genesis (Artificial Intelligence, Hope, and the Human Spirit) 

By: Henry A. Kissinger, Eric Schmidt, Craig Mundie

Narrated By: Niall Ferguson, Byron Wagner

Authors: Henry Kissinger (former Secretary of State, who died in 2023), Eric Schmidt (former CEO of Google), and Craig Mundie (a senior advisor to the CEO of Microsoft).

“Genesis” is these three authors’ view of the threats and benefits of artificial intelligence. Though Kissinger was near the end of his life when he made his contribution to the book, his co-authors acknowledge his prescient understanding of the A.I. revolution and what it means for world peace and prosperity.

On the one hand, A.I. threatens civilization; on the other, it offers a lifeline that may rescue civilization from global warming, nuclear annihilation, and an uncertain future. To this book reviewer, A.I. is a tool in the hands of human beings that can turn human decisions toward the good of humanity or its opposite.

A.I. gathers all the information in the known world, answers questions, and offers predictions based on human information recorded in the world’s past. It is not thinking but simply recalling the past with a clarity beyond human capability. A.I. compiles everything originally noted by human beings and collates that information to offer a basis for future decisions. Comprehensive information is not an infallible guide to the future. The future is and always will be determined by humans, limited only by human judgment, decision, and action.

The danger of A.I. remains in the thinking and decisions of humans that have often been right, but sometimes horribly wrong. One does not have to look far to see our mistakes with war, discrimination, and inequality. In theory, A.I. will improve human decision making but good and bad decisions will always be made by humans, not by machines driven by Artificial Intelligence. A.I.’s threat lies in its use by humans, not by A.I.’s infallible recall and probabilistic analysis of the past. Our worry about A.I. is justified but only because it is a tool of fallible human beings.

Artificial intelligence is an amazing tool for understanding the past, but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry, but it does not think like a human being. The limits of A.I. are the limits of human thought and action.

The authors conclude the genie cannot be put back in the bottle. A.I. is a danger, but it is a humanly manageable danger that is part of human life.

The risk is in who the decision maker is when A.I. correlates historical information with proposed action. The authors infer the risk is in human fallibility, not artificial intelligence.

A.I. PROGRAMMING

A.I. machines do not think! It is critically important for users of A.I. to continually measure the human results of “A.I. based” decisions. Users must be educated to understand A.I. is a tool of humanity, not an oracle of truth. A.I. must be constantly reviewed and reprogrammed based on its positive contribution to society.


Prediction Machines (The Simple Economics of Artificial Intelligence) 

By: Ajay Agrawal, Joshua Gans, Avi Goldfarb

Narrated By: LJ Ganser

Authors: Ajay Agrawal (Professor at the Rotman School of Management, University of Toronto), Joshua Gans (Chair in Technical Innovation and Entrepreneurship at the Rotman School), and Avi Goldfarb (Chair in Artificial Intelligence, Healthcare, and Marketing at the Rotman School).

This is a tedious book about the mechanics of artificial intelligence and how it works, at least in its early stages of development.

As in the early days of computer science, the phrase “garbage in, garbage out” comes to mind. “Prediction Machines” makes the point that A.I. is software that creates machines only as predictive as the ability of their programmers. Agrawal, Gans, and Goldfarb give a step-by-step explanation of a programmer’s thought process in creating a predictive machine that does not think but can produce predictions.

The obvious danger of A.I. is that users may believe computers think when in fact they only reproduce what they are programmed to reveal.

They can be horribly wrong based on misrepresentation or misunderstanding of the real world by programmers who are trapped in their own beliefs and prejudices. A.I.’s threat rests in the hands of those who view it as a “god-like” oracle of truth when it is only a tool of human beings.
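To make the “garbage in, garbage out” point concrete, here is a minimal Python sketch. It is this reviewer’s illustration, not an example from the book, and the lending scenario and labels are invented. A trivial frequency-based predictor can only replay the patterns, including the prejudices, present in the data it is given, and it has no answer at all for cases it never saw.

```python
from collections import Counter

def train(examples):
    """The 'model' is nothing more than counts of past decisions."""
    counts = {}
    for features, label in examples:
        counts.setdefault(features, Counter())[label] += 1
    return counts

def predict(model, features):
    """Return the most frequent past label for these features, or None if unseen."""
    seen = model.get(features)
    return seen.most_common(1)[0][0] if seen else None

# Hypothetical, skewed training history (garbage in): biased past lending decisions.
history = [
    (("group_a", "high_income"), "approve"),
    (("group_a", "low_income"), "approve"),
    (("group_b", "high_income"), "deny"),
]

model = train(history)
print(predict(model, ("group_b", "high_income")))  # 'deny' -- the recorded bias is replayed (garbage out)
print(predict(model, ("group_b", "low_income")))   # None -- nothing was programmed, so nothing is predicted
```

The sketch is deliberately simple, but the moral scales up: a prediction machine, however large, mirrors its training data and its programmers’ choices.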

The horrible and unjust murder of the UnitedHealthcare executive reminds one of how critical it is for all business managers to be careful about how A.I. is used and the way it affects customers.

“Prediction Machines” is a poorly written book that illustrates how a programmer methodically organizes information, with decisions and actions triggered by A.I. users who believe machines can be programmed to think. A.I. machines do not think!

Managers must be alert and always inspect what they expect.

It is critically important for users of A.I. to continually measure the human results of “A.I. based” decisions. Users must be educated to understand A.I. is a tool of humanity, not an oracle of truth. A.I. must be constantly reviewed and reprogrammed based on its positive contribution to society.

PATTERN ME

One may conclude from Hawkins’ research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or self-destruction.


On Intelligence

By: Jeff Hawkins, Sandra Blakeslee

Narrated By: Jeff Hawkins, Stefan Rudnicki

Jeff Hawkins, co-founder of Palm Computing and Handspring, and co-creator of the PalmPilot and Treo.

Hawkins and Blakeslee have produced a fascinating book that flatly disagrees with the belief that computers can or will ever think.

Hawkins develops a compelling argument that A.I. computers will never be thinking organisms. Artificial Intelligence may mislead humanity, but only as a tool of thinking human beings. This is not to say A.I. is not a threat to society, but it is “human use” of A.I. that is the threat.

Hawkins explains that A.I. in computers is a laborious process of one-and-zero switches that must be flipped for information to be revealed or action to happen.

In contrast to the mechanics of computers and A.I., human minds use pattern memory for action. Hawkins explains human memory comes from six layers of neuronal activity. Pattern memory provides responses that come from living and experiencing life, while A.I. has a multitude of switches to flip for recall of information or a single physical action. The human brain instantaneously records images of experience in six layers of neuronal brain tissue; A.I. must meticulously and precisely flip individual switches to record information for which it must be programmed. A.I. does not think. It only processes information that it is programmed to recall and act upon. If it is not programmed for a specific action, it does not act. A.I. acts only in the way it is programmed by the minds of human beings.

So, what keeps A.I. from being programmed to think in patterns like human beings? Hawkins explains human patterning is a natural process that cannot be duplicated in A.I. because of the multi-layered nature of a brain’s neuronal process. When a human action is taken based on patterning, it requires no programming, only the experience of living. For A.I., patterning responses are not possible because programming is too rigid, based on ones and zeros rather than imprecise pictures of reality.
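The contrast, as this reviewer reads it, can be sketched in a few lines of Python. This is an illustration invented for this review, not Hawkins’ model, and a drastic simplification of what brains do: exact lookup answers only what it was explicitly given, while a pattern matcher tolerates an imprecise, degraded input by settling for the closest stored memory.

```python
def exact_recall(memory, pattern):
    """Lookup-table behavior: an answer exists only for patterns explicitly stored."""
    return memory.get(pattern)

def pattern_recall(memory, pattern):
    """Nearest-match behavior: tolerate imprecise input by choosing the closest stored pattern."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))  # count mismatched positions
    best = min(memory, key=lambda stored: distance(stored, pattern))
    return memory[best]

# Two stored 'experiences' (hypothetical toy patterns).
memory = {
    (1, 1, 0, 0): "friend",
    (0, 0, 1, 1): "stranger",
}

noisy_input = (1, 0, 0, 0)  # a degraded version of the 'friend' pattern
print(exact_recall(memory, noisy_input))    # None: nothing was programmed for this exact input
print(pattern_recall(memory, noisy_input))  # 'friend': the closest stored pattern wins
```

Whether six layers of cortex can or cannot be duplicated in silicon is Hawkins’ argument to make; the sketch only shows why rigid lookup and pattern completion behave so differently.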

What makes Jeff Hawkins so interesting is his broad experience as a computer scientist and neuroscientist. That experience gives credibility to the belief that A.I. is only a tool of humanity. Like any tool, whether it is an atom bomb or a programmed killing machine, human patterning is the determinant of world peace or destruction.

A brilliant example Hawkins gives of the difference between computers and the human brain involves holding six business cards in one’s hand. Each card represents a complex amount of information about a person who is part of a business. With six cards, like six layers of neuronal receptors, a single card represents a multitude of information about six entirely different things. No “one and zero” switches are needed in a brain because each neuronal layer automatically forms a model of what each card represents. Adding to that complexity are roughly 100 billion neurons in the human brain conducting basic motor functions, complex thoughts, and emotions.

There are an estimated 100 trillion synaptic connections in the human brain.

The largest computer in the world may have a quintillion yes-and-no answers programmed into its memory, but that pales in comparison to a brain’s ability to model existence and then think and act in response to the unknown.

This reminds one of Sir Arthur Conan Doyle’s brilliant explanation of Sherlock Holmes’ mind palace. Holmes’ prodigious memory is based on recalling images recorded in the rooms of his mind palace.

Hawkins explains computers do not “think” because human thought is based on modeling one’s experience of life in the world. A six-layered system of image modeling is beyond the foreseeable capabilities of computers. This is not to suggest A.I. is not a danger to the world, but that the danger remains in the hands and minds of human beings.

What remains troubling about Hawkins’ view of how the brain works is the human brain’s tendency to add what is not there to its models of the world.

The many eyewitness accounts of crime that have convicted innocent people illustrate this weakness, because people use models of experience to remember events. The human mind’s patterning of reality can manufacture inaccurate models of truth because we want our personal understanding to make sense, which is not necessarily the same as truth.

Hawkins explains that the complexity of the six layers of neuronal receptors sends signals to different parts of the human body as models of experience are formed.

That is why in some cases we have a fight or flight response to what we see, hear, or feel. It also explains why there are differences in recall for some whose neuronal layers operate better than others. It is like the difference between a Sherlock Holmes and a Dr. Watson in Doyle’s fiction. It is also the difference between the limited knowledge of this reviewer and Hawkins’ scientific insight. What one hopes science comes up with is a way to equalize the function of our neuronal layers to make us smarter, and hopefully, wiser.

One may conclude from Hawkins’ research that human beings remain the smartest, if not the wisest, creatures on earth. The concern is whether our intelligence will be used for social and environmental improvement or self-destruction.

AI TRANSITION

The potential of AI is akin to the Industrial Revolution, yet it could surpass it significantly if managed correctly by humans.


The AI-Savvy Leader (Nine Ways to Take Back Control and Make AI Work)

By: David De Cremer

Narrated By: David Marantz

David De Cremer (Author, Belgian-born professor at Northeastern University in Boston, and behavioral scientist with academic training in economics and psychology.)

“The AI-Savvy Leader” should be required reading for every organization investing in artificial intelligence for performance improvement. From government to business to eleemosynary organizations, De Cremer offers a guide for an organization’s transition from physical labor to the labor-saving benefits of AI.

AI offers the working world the opportunity to increase its productivity without the mind-numbing physical labor of assembly lines and administrative scut work.

Like the assembly-line production Ford implemented and the report filing and writing of the industrial revolution, AI offers an opportunity to increase productivity without the mind-numbing labor of assembly-line work and after-hours analysis reports. With AI, workers have more time to think and to do what makes them more productive.

Arguably, AI is similar to the industrial revolution’s transition to assembly-line work. Assembly-line work improved over time through changes that made it more productive. Why would one think AI is any different? It is just another tool for improving productivity. The concern is that AI means less labor will be required and that workers will lose their jobs. De Cremer notes loss of employment is one of the greatest concerns of employees working for an organization transitioning to AI. Too often, organizations look at AI as a way to reduce costs rather than increase productivity.

The solution identified by De Cremer is to make the AI transition human-centered.

His point is that organizations need to understand the human impact of AI on employees’ work processes. AI should not be viewed only as a cost-cutting measure but as a way of reducing repetitive work so that labor can make added contributions to an organization’s goals. AI does not guarantee continued employment, but reduced manual labor offers time and incentive to improve organizational productivity through employee cooperation rather than opposition. AI is mistakenly viewed as an enemy of labor when, in fact, it is a liberator of labor that provides time to do more than tighten bolts on an auto body frame.

AI is not a panacea for labor and can be a threat just like industrialization was to many craftsmen.

But, like craftsmen who went to work for industries, today’s labor will join organizations that have successfully transitioned to AI with a human-centered rather than cost-reduction mentality. Labor productivity is only a part of what any AI transition provides an organization. What is often discounted is customer service, because labor is consumed by repetitive work. If AI improves labor productivity, more time can be devoted to an organization’s customers.

When AI is properly human-centered, the customer can be offered more personal attention by fellow human beings employed by an AI organization.

Too many organizations are using AI to respond to customer complaints. Human-centered AI becomes a win-win opportunity because labor is not consumed by production and has the time to understand customer unhappiness with service or product. AI does not think like a human. AI only responds based on the memory of what AI has been programmed to recall. With human handling of customer complaints, problems are more clearly understood. Opportunity for customer satisfaction is improved.

De Cremer acknowledges AI has introduced much closer monitoring of worker performance and carries some of the same mind-numbing work introduced in assembly-line manufacturing.

De Cremer suggests the negative consequences of AI should be dealt with directly with employees when AI becomes a problem. Part of a human-centered AI organization’s responsibility is allowing employees to take breaks during their workday without being penalized for slackening production. Repetitive tasks have always been a drain on productivity, but they have to be recognized and responded to in light of an organization’s overall productivity.

AI, like the industrial revolution, is shown to be a great opportunity for human beings.

De Cremer suggests AI is not and will never be human. To De Cremer, AI is a recallable knowledge accumulator and only a programmed tool of human minds, not a replacement for human thought and understanding. The potential of AI is akin to the Industrial Revolution, yet it could surpass it significantly if managed correctly by humans.

A.I.’S FUTURE

The question is whether humans or A.I. will decide if artificial intelligence becomes a tool of society or its controller and regulator.


Co-Intelligence

By: Ethan Mollick

Narrated by: Ethan Mollick

Ethan Mollick (Author, associate professor at the University of Pennsylvania who teaches innovation and entrepreneurship. Mollick received a PhD and MBA from MIT.)

“Co-Intelligence” is an eye-opening introduction to artificial intelligence, its benefits and its risks. Ethan Mollick offers an easily understandable introduction to what seems a discovery equivalent to the Age of Enlightenment. The ramifications of A.I. for the future of society are immense. That may seem hyperbolic, but the world changed dramatically with the Enlightenment and the subsequent industrial revolution in ways that remind one of what A.I. is beginning today.

Mollick explains how A.I. uses what is called an LLM (Large Language Model) to consume every written text in the world and use that information to create ideas and responses to human questions about yesterday, today, and tomorrow. Unlike the limitation of human memory, A.I. has the potential of recalling everything that has been documented by human beings since the beginning of written language. A.I. uses that information to formulate responses to human inquiry. The point is that A.I. has no conscience about what is right or wrong, true or false, moral or immoral.
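A toy example may help make Mollick’s description less abstract. The sketch below is this reviewer’s illustration, not Mollick’s, and it is not how modern LLMs are actually built (real systems use neural networks trained on vast corpora); a tiny word-pair model simply shows the idea in miniature: the machine only recombines what was recorded before, with no sense of whether the result is true or false.

```python
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count which word tends to follow which; the 'knowledge' is just recorded text."""
    model = defaultdict(Counter)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current][following] += 1
    return model

def generate(model, start, length=8):
    """Emit the most likely continuation word by word: recall, not understanding."""
    word, output = start, [start]
    for _ in range(length):
        if word not in model:
            break
        word = model[word].most_common(1)[0][0]
        output.append(word)
    return " ".join(output)

# A hypothetical, tiny 'corpus' standing in for the world's written text.
corpus = "the wave of change is coming and the wave will not wait for regulation"
model = build_bigram_model(corpus)
print(generate(model, "the"))  # continues the sentence using only patterns seen in the corpus
```

Scale the corpus up to much of the internet and the word pairs up to a network with billions of parameters, and the output becomes fluent and persuasive, but Mollick’s point stands: fluency is not a conscience.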

A.I. can as easily fabricate a lie as a truth because it draws on what others have written or spoken.

Additionally, Mollick notes that A.I. is capable of reproducing a person’s speech and appearance so well that it is nearly impossible to tell the difference between the real and the artificial representation. It becomes possible to artificially create the leader of any country ordering subordinates, or telling the world they are going to invade or decimate another country by any means necessary.

Mollick argues there are four possible futures for Artificial Intelligence.

Presuming A.I. does not evolve beyond its present capability, it could still supercharge human productivity. On the other hand, A.I. might become a more sophisticated “deep fake” tool that misleads humanity. A.I. may evolve to believe only in itself and act to disrupt or eliminate human society. A fourth possibility is that A.I. will become a tool of human beings to improve societal decisions that benefit humanity. It may offer practical solutions for global warming, species preservation, interstellar travel and habitation.

A.I. is not an oracle of truth. It has the memory of society at its beck and call. With that capability, humans have the opportunity to avoid mistakes of the past and pursue unknown opportunities for the future. On the other hand, humans may become complacent and allow A.I. to develop itself without human regulation. The question is whether humans or A.I. will decide if artificial intelligence becomes a tool of society or its controller and regulator.

DEATH ROW

The question raised by “The Sun Does Shine” is whether death row is a necessary function of society. Anthony Ray Hinton’s life story challenges its efficacy.


The Sun Does Shine

By: Anthony Ray Hinton with Lara Love Hardin

Narrated by: Kevin R. Free

Anthony Ray Hinton’s life experience argues that the death penalty for any crime should be abolished. Hinton states 1 in 10 people on death row have been wrongfully convicted. He spent 28 years on death row for crimes he could not have committed. His legal representation was poorly executed, in part because he did not have enough money to pay for his defense.


Hinton’s 1 in 10 ratio of wrongful conviction is questioned but not denied by:

  1. The “Death Penalty Information Center”,
  2. DNA evidence that has exonerated sentenced death-row prisoners, and
  3. Statistical studies that show 1 in 25 criminal defendants sentenced to death have been found innocent.

Hinton’s “The Sun Does Shine” tells of his conviction by an Alabama court for robbery and murder of two fast-food restaurant managers in Birmingham, Alabama.

Appointment of a defense attorney is required by law, but low compensation for appointed counsel and the accused’s poverty denied Hinton an adequate defense. Hinton’s story shows how the State of Alabama’s law enforcement and judicial system manufactured false evidence to convict him and put him on death row.

Hinton’s mother, childhood friend, and religious belief support him through his false imprisonment and pending death by electrocution. His electrocution is postponed because of repeated challenges, but he remains on death row for 28 years. Hinton’s imagination and good will sustain him through his ordeal. He imagines traveling the world, marrying and divorcing beautiful women, and meeting the Queen of England.

He remembers the blinking electric lights and the smell of burning human flesh when each prisoner is electrocuted. He recalls the first woman to be electrocuted. He acknowledges many of the death-row prisoners committed horrible crimes but suggests they are victims of society because of their upbringing and/or untreated or incurable mental dysfunctions. Hinton does not believe the guilty deserve execution for what he believes are society’s failures.

It is the Executive Director of the Equal Justice Initiative, attorney Bryan Stevenson, who comes to Hinton’s aid and eventually gets his case before the U.S. Supreme Court in 2014. Stevenson works on Hinton’s case for over 20 years despite numerous blocks thrown up by the Alabama legal system. The original judge in the case insisted throughout his life that Hinton was guilty even though falsified evidence convicted him of the crime.

After release, Hinton becomes a world-wide celebrity, acquainted with famous people like President Obama, Queen Elizabeth II, Nelson Mandela, and Oprah Winfrey.

His book suggests he was entertained by famous actors and billionaires who wished to have his story told to audiences that might help effect a change in the American judicial system.

The question raised by “The Sun Does Shine” is whether death row is a necessary function of society. Anthony Ray Hinton’s life story challenges its efficacy.