A.I. TOMORROW

A.I.’s contribution to society resembles the history of nuclear power: it will be used constructively or destructively by human beings. On balance, “Burn-In” concludes A.I. will mirror society’s values. As has been noted in earlier book reviews, A.I. is a tool, not a controller of humanity.

Books of Interest
 Website: chetyarbrough.blog

BURN-IN (A Novel of the Real Robotic Revolution)

Authors: P. W. Singer, August Cole

Narration by: Mia Barron

Peter Warren Singer is an American political scientist who is described by the WSJ as “the premier futurist in the national security environment”. August Cole, his co-author, is also a futurist and a regular speaker before U.S. and allied government audiences.

As a person interested in Artificial Intelligence, I started, stopped, and started again to listen to “Burn-In”.

The book is about human adaptation to robotics and A.I. It shows how these technologies may better serve humans, institutions, and societies on the one hand and destroy them on the other. Some chapters were discouraging and boring to this listener because of tedious explanations of future robot use. The initial test is in the FBI, an interesting choice in view of the FBI’s history, which has been rightfully criticized but also acclaimed by American society.

The starting, stopping, and restarting is a result of the authors’ unnecessary diversion into a virtual reality game played by inconsequential characters.

In an early chapter, several gamers are engaged in a VR session that distracts listeners from the book’s theme of Artificial Intelligence. Later chapters suffer the same defect. However, there are some surprising revelations about A.I.’s future.

The danger in society’s future remains in the power of knowledge. The authors note that A.I.’s lack of knowledge is what has really become power. Presumably, that means the technology needs to be controlled by human-created algorithms that limit the knowledge of A.I. systems that may harm society.

That integration has massive implications for the military, industrial, economic, and societal roles of human beings. The principles of human work, social relations, capitalist/socialist economies, and their governance are changed by the advance of machine learning based on Artificial Intelligence. Machine learning may cross thresholds between safety and freedom to become a system of control with the potential for human societies’ destruction. At one extreme is China’s surveillance state; at the other is Western societies’ belief in relative privacy.

Robot evolution.

Questions of accountability become blurred when self-learning machines gain understanding beyond human capabilities. Do humans choose to trust their instincts or a machine’s more comprehensive understanding of facts? Who adapts to whom in the age of Artificial Intelligence? These are the questions raised by the authors’ story.

The main character of Singer and Cole’s story is Lara Keegan, a female FBI agent. She is a seasoned investigator paired with an assigned “state of the art” police robot. The relationship between human beings and A.I. robots is explored. What trust can a human have in a robotic partner? What control does a human partner exercise over an A.I. robot? What autonomy does a robot assigned to a human partner have? Human and robot partnership in policing society is explored in “Burn-In”. The judgment of the authors’ story is nuanced.

In “Burn-In”, a flood threatens Washington, D.C., the city where Keegan and the robot work.

The robot’s aid to Keegan saves the life of a woman threatened by the flood as water fills an underground subway. Keegan hears the woman calling for help and asks the robot to rescue her. The robot submerges itself in the subway’s flood waters, saves the woman, and returns to receive direction from Keegan to begin building a barrier to protect other citizens near the Capitol. The robot moves heavy sacks filled with sand and dirt while surrounding citizens help load more sacks. The robot tirelessly builds the barrier with a strength and efficiency that could not have been matched by the people alone. The obvious point is that the cooperation of robot and human benefits society.

The other side of that positive assessment is that a robot cannot be held responsible for work that may inadvertently harm humans.

Whatever human is assigned an A.I. robot loses their privacy because the robot’s programming knows the controller’s background, analyzes his or her behavior, and understands its assigned controller from that behavior and background knowledge. Once an assignment is made, action is exclusively directed by the robot’s human companion, who may or may not respond perfectly in the best interest of society. A robot is unlikely to have intuition, empathy, or moral judgment in carrying out the direction of its assigned human partner. There is also the economic effect of human employment lost to automation and the creation of robot partners and laborers.

A.I.’s contribution to society resembles the history of nuclear power: it will be used constructively or destructively by human beings. On balance, “Burn-In” concludes A.I. will mirror society’s values. As has been noted in earlier book reviews, A.I. is a tool, not a controller of humanity.

RISK/REWARD

“IF ANYONE BUILDS IT, EVERYONE DIES” is an alarmist and unnecessarily pessimistic view of the underlying value of Artificial Intelligence. This is not to suggest there are no risks in A.I., but its potential outweighs its risks.


IF ANYONE BUILDS IT, EVERYONE DIES

Authors: Eliezer Yudkowsky and Nate Soares

Narrated By: Rae Beckley

Eliezer Yudkowsky is a self-taught A.I. researcher without a formal education. He founded the Machine Intelligence Research Institute (MIRI). Nate Soares received an undergraduate degree from George Washington University and became President of MIRI. Soares had worked as an engineer for Google and Microsoft, as well as for the National Institute of Standards and Technology and the U.S. Dept. of Defense.

“IF ANYONE BUILDS IT, EVERYONE DIES” is difficult to follow because its convoluted examples and arguments are unclear. The fundamental concern the writers have is that A.I. will self-improve to the point of being a threat to humanity. They argue that A.I. will grow to be more interested in self-preservation than in aiding human thought and existence. The irony of their position is that humanity is already a threat to itself from environmental degradation, let alone nuclear annihilation. The truth is humanity needs the potential of A.I. to better understand life and what can be done to preserve it.

To this listener/reader, environmental degradation is a greater risk than the authors’ purported threats of A.I.

Pessimism is justified in the same way one can criticize capitalism.

The authors’ point of view is too pessimistic about A.I. and its negative potential, without recognizing how capable society already is of warring and killing itself without Artificial Intelligence. The advance of A.I. unquestionably has risks, just as today’s threat of mutual nuclear annihilation does, but A.I.’s potential for changing the course of civilization for the better exceeds that of the agricultural and industrial revolutions of the past.

The nature and intelligence of human beings are underestimated by Yudkowsky and Soares.

There have been a number of amazing human discoveries, and they have accelerated since the beginning of civilization in Mesopotamia. Humans like Einstein, with their insight into the universe, will be aided, not controlled, by the potential of A.I. Artificial Intelligence is no more a danger to humanity than the loss of craftsmen during the industrial revolution. Civilization will either adapt to revelations coming from A.I., or environmental degradation or human stupidity will overtake humanity.

“IF ANYONE BUILDS IT, EVERYONE DIES” is an alarmist and unnecessarily pessimistic view of the underlying value of Artificial Intelligence. This is not to suggest there are no risks in A.I., but its potential outweighs its risks.

AI & HEALTH

Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.


Deep Medicine (How Artificial Intelligence Can Make Healthcare Human Again)

Author: Eric Topol

Narrated By:  Graham Winton

Eric Topol (Author, American cardiologist, scientist, founder of Scripps Research Translational Institute.)

Eric Topol is what most patients want to see in a Doctor of Medicine. “Deep Medicine” should be required reading for students wishing to become physicians. One suspects Topol’s view of medicine is as empathetic as it is because of his personal chronic illness. His experience as both patient and physician gives him an insightful understanding of medical diagnosis, patient care, and treatment.

Topol explains how increasingly valuable and important Artificial Intelligence is in the diagnosis and treatment of human illness.

AI opens the door for improved diagnosis and treatment of patients. A monumental caveat to A.I.’s potential is its exposure of personal history not only to physicians but to governments and businesses. Governments and businesses invariably have agendas that may conflict with one’s personal health and welfare.

Topol notes China is ahead of America in cataloging citizens’ health because of its data collection and AI capabilities.

Theoretically, every visit to a doctor can be precisely documented with an AI system. The good of that system would be improved continuity of medical diagnosis and treatment of patients. The risk of that system is that it can be exploited by governments and businesses wishing to control or influence a person’s life. One is left with a concern about being able to protect oneself from a government or business that may have access to citizen information. In the case of government, it is the power exercised over freedom. Both government and businesses can use AI information to influence human choice. Detailed information about what one wants, needs, or is undecided upon can be used to manipulate a person with the personal knowledge accumulated by AI.

Putting loss of privacy and “Brave New World” negatives aside, Topol explains the potential of AI to immensely improve human health and wellness.

Cradle-to-grave information on human health would aid research and treatment of illnesses for present and future patients. Topol gives the example of collecting biometric health information that can reveal the secrets of diets that would aid better health during one’s life. Topol explains that every person has a unique biometric system that processes food in different ways. Some foods may be harmful to some people and not others because of the way their bodies metabolize what they eat. It is possible to design diets to meet the specifications of one’s unique digestive system to improve health and avoid foods that are not healthily metabolized by one’s body. An AI could be devised to analyze individual biometrics and recommend more healthful diets and more effective medicines for its users.
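A toy sketch can make the diet idea concrete. Everything below is invented for illustration (the food names, the glucose-spike numbers, and the threshold); it is not Topol’s method, only a hint of how per-person metabolic measurements could drive a personalized recommendation.

```python
# Hypothetical sketch: recommend foods whose measured effect on an
# individual's metabolism (here, an invented glucose-spike score)
# stays under a personal threshold. Data values are made up.

def recommend_foods(responses, threshold=30):
    """Keep foods whose measured glucose spike stays under the threshold."""
    return sorted(food for food, spike in responses.items() if spike < threshold)

# One imaginary patient's measured responses (higher = worse for them):
patient = {"white rice": 48, "lentils": 12, "banana": 25, "pasta": 41, "yogurt": 9}
print(recommend_foods(patient))  # → ['banana', 'lentils', 'yogurt']
```

A second patient with different metabolism would get a different list from the same code, which is the whole point of personalization.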

In addition to improvements in medical imaging and diagnosis with AI, Topol explains how medicines and treatments can be personalized to patients based on biometric analysis, showing how medications can be optimized for specific patients in a customized way. Every patient is unique in the way they metabolize food and drugs. AI offers the potential for customization that maximizes recovery from illness, infection, or disease.

Another growing AI metric is measurement of an individual’s physical well-being. Monitoring one’s vital signs is becoming common with Apple Watches and the accumulation of information that can be monitored and managed for healthful living. One can begin to improve one’s health and life with more information about pulse and blood pressure measurements. Instantaneous reports may warn people of risks, backed by an accumulated record of healthful exercise levels and recovery times.
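The idea of an instantaneous warning can be sketched in a few lines of code: compare each new pulse reading against the wearer’s own accumulated baseline. This is a hypothetical illustration, not any actual wearable’s algorithm; the readings, window size, and threshold are all invented.

```python
# Hypothetical sketch: flag pulse readings that fall far outside a
# rolling personal baseline built from the wearer's recent history.
from statistics import mean, stdev

def flag_anomalies(readings, window=5, sigma=2.0):
    """Return indices of readings more than `sigma` deviations from baseline."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]          # the wearer's recent history
        mu, sd = mean(baseline), stdev(baseline)
        if sd and abs(readings[i] - mu) > sigma * sd:
            flagged.append(i)                      # reading i warrants a warning
    return flagged

# A steady resting pulse, then a sudden spike at index 5:
print(flag_anomalies([62, 64, 63, 65, 61, 120, 64]))  # → [5]
```

A real device would use far richer signals and models, but the principle is the same: the baseline is personal, so what counts as an anomaly differs from user to user.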

Marie Curie (Scientist, chemist, and physicist who played a crucial role in developing x-ray technology, received 2 Nobel Prizes, died at the age of 66.)

Topol offers a number of circumstances where AI has improved medical diagnosis and treatment. He notes how AI analysis of radiological imaging improves diagnosis of bodily abnormalities because of its relentless review of past imaging, beyond the knowledge or memory of experienced radiologists. Topol cites a number of studies showing AI reads radiological images better than experienced radiologists.

One wonders if AI is a Hobson’s choice or a societal revolution.

One wonders if AI is a Hobson’s choice or a societal revolution greater than the discovery of agriculture (10000 BCE), the rise of civilization (3000 BCE), the Scientific Revolution (16th to 17th century), the Industrial Revolution (18th to 19th century), the Digital Revolution (20th to 21st century), or Climate Change in the 21st century. Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.

AGI

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals.


How to Think About AI (A Guide for the Perplexed)

By: Richard Susskind

Narrated By:  Richard Susskind

Richard Susskind (Author, British IT adviser to law firms and governments, earned an LL.B degree in Law from the University of Glasgow in 1983, and has a Ph.D. in philosophy from Columbia University.)

Richard Susskind is another historian of Artificial Intelligence. He extends the history of AI to its next generation, Artificial General Intelligence (AGI), a future discipline suggesting AI will continue to evolve until it can perform any intellectual task that a human can.

AI was officially founded in 1956 at a Dartmouth Conference attended by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. These men were the foundation of what became Artificial Intelligence. Conceptually, AI came from Alan Turing’s work before and during WWII, when his theoretical designs and code-breaking machines cracked the German Enigma code.

McCarthy and Minsky were computer and cognitive scientists; Rochester was an engineer who became an architect of IBM’s first computer; Shannon and Turing were mathematicians with an interest in cryptography and its application to code breaking.

Though not mentioned by Susskind, two women, Ada Lovelace and Grace Hopper, played roles in early computer creation (Lovelace as an algorithm creator for Charles Babbage in the 19th century, and Hopper as a computer scientist whose compiler work translated human-readable code into machine language for the Navy).

Susskind’s history takes listener/readers to the next generation of AI with Artificial General Intelligence (AGI).

Susskind recounts the history of AI’s ups and downs. As noted in earlier book reviews, AI’s potential became known during WWII but went into hibernation after the war. Early computers lacked the processing capability to support complex AI models. The American federal government cut back on computer research for a time because unrealistic expectations seemed unachievable given processing limitations, and AI research failed to deliver practical applications.

The invention of transistors in the late 1940s and 50s and microprocessors in the 1970s reinvigorated AI.

Transistor and microprocessor inventions addressed the processing limitations of earlier computers. John Bardeen, Walter Brattain, and William Shockley, working for Bell Laboratories, invented the transistor, which replaced bulky vacuum tubes and allowed smaller, more efficient electronic devices. In the 1970s, Marcian “Ted” Hoff, Federico Faggin, and Stanley Mazor, who worked for Intel, integrated computing functions onto single chips, which revolutionized computing. The world rediscovered the potential of AI with these improvements in processing power, and McCarthy and Minsky refined AI concepts and methodologies.

With the help of others like Geoffrey Hinton and Yann LeCun, the foundation for modern AI was reinvigorated with deep learning, image recognition, and processing that improved probabilistic reasoning. Human decision-making is accelerated by AI. Susskind suggests the creation of Artificial General Intelligence (AGI) blurs the line between human and machine control of the future.

With AGI, there is the potential for loss of human control of the future.

Societal goals may be unduly influenced by machine learning that creates unsafe objectives for humanity. The pace of change in society would accelerate with AGI which may not allow time for human regulation or adaptation. AGI may accumulate biases drawn from observations of life and history that conflict with fundamental human values. If AGI grows to become a conscious entity, whatever “conscious” is, it presumably could become primarily interested in its own existence which may conflict with human survival.

Like the growth of agriculture, religion, humanist enlightenment, the industrial revolution, and technology before it, AGI has become an unstoppable cultural force.

Susskind argues for regulation of AGI. Is Artificial General Intelligence any different than other world-changing cultural forces? Yes and no. It is different because AGI has wider implications; it reshapes, or may replace, human intelligence. One possible solution, noted by Ray Kurzweil, is the melding of AI and human intelligence to make survival a common goal. Kurzweil suggests humans should go with the flow of AGI, just as they did with agriculture, religion, humanism, and industrialization.

Susskind suggests restricting AGI’s ability to act autonomously with shut-off mechanisms or restrictions on its access to human cultural customs. He also suggests programming AGI with ethical constraints that align with human values and a rule of “do no harm”, like the Hippocratic oath doctors take for their patients.

In the last chapters of Susskind’s book, several theories of human existence are identified. Maybe the world and the human experience of it are only creations of the mind, not nature’s reality. What we see, feel, touch, and do may be in a “Matrix” of ones and zeros, and AGI may be just what humans think they see, not what it is. Susskind speculates on virtual reality developed by technology companies becoming humanity’s only reality.

AI and AGI are threats to humanity, but the threat is in the hands of human beings. As the difference between virtual reality and what is real becomes more unclear, the technology will be used by human beings who could accidentally, or with prejudice or craziness, destroy humanity. The same might be said of nuclear war, which is also in human hands. A.I. and A.G.I. are not the threat. Conscious human beings are the threat.

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals. However, if science gives consciousness (whatever that is) to A.I., all bets are off. The end of humanity may be in that beginning.

MEDICINE

A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of scientific discoveries and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.


Rethinking Medications (Truth, Power, and the Drugs You Take)

By: Jerry Avorn

Narrated By: Jerry Avorn MD

Jerry Avorn (Author, professor of medicine at Harvard Medical School where he received his MD, Chief Emeritus of the Division of Pharmacoepidemiology and Pharmacoeconomics)

Doctor Avorn enlightens listener/readers about the drug industry’s costs, profits, and regulation. Avorn explains how money corrupts the industry and the FDA even while encouraging discovery of effective drug treatments. The costs, profits, and benefits of the industry revolve around research, discovery, medical efficacy, human health, ethics, and regulation.

Drug manufacture is big business.

Treatments for human maladies began in the dark ages when little was known about the causes of disease and mental dysfunction. Cures ranged from spirit dances to herbal concoctions that allegedly expelled evil and either cured or killed their users. The FDA (Food and Drug Administration) did not come into existence until 1930, but its beginnings harken back to the 1906 Pure Food and Drug Act signed into law by Theodore Roosevelt. The FDA took on the role of reviewing scientific drug studies for treatments that could aid public health. The importance of review was proven critical by incidents like that of 1937, when 107 people died from the drug Elixir Sulfanilamide, which was found to be poisonous. From that 1937 event forward, the FDA required drug manufacturers to prove a drug’s safety before selling it to the public, and it began inspecting drug factories while demanding drug-ingredient labeling. However, Avorn illustrates how the FDA was seduced by Big Pharma into offering drug approvals based on flawed or undisclosed research reports.

Dr. Martin Makary (Dr. Makary was confirmed as the new head of the FDA on March 25, 2025. He is the agency’s 27th Commissioner. He is a British-American surgeon and professor.)

What Dr. Avorn reveals is how the FDA has either failed the public or been seduced by drug manufacturers into approving drugs that have not cured patients but have, in some cases, harmed or killed them. It will be interesting to see what Dr. Martin Makary can do to improve the FDA’s regulation of drugs. Avorn touches on court cases that have resulted in huge financial settlements paid by drug manufacturing companies and their stockholders. However, he notes the actual compensation received by individually harmed patients or families is minuscule relative to the size of the fines, not to mention the many billions of dollars the drug companies received before unethical practices were exposed. Avorn notes many FDA research and regulation incompetencies allowed drug companies to hoodwink the public about discovered but unrevealed drug side effects.

A few examples can be easily found in an internet search:

1) Vioxx (rofecoxib), a painkiller, was withdrawn from the market in 2004 because it was linked to increased risk of heart attacks and strokes.

2) Fen-Phen (fenfluramine/phentermine), a weight-loss drug, was taken off the market in 1997 because of severe heart and lung complications.

3) Accutane was used to treat acne but was found to be linked to birth defects and was withdrawn in 2009.

4) Thalidomide was found to cause birth defects before being repurposed for treatment of certain cancers.

5) A more recent FDA failure was its inability to regulate opioids like OxyContin, which resulted in huge fines for manufacturers and distributors of the drug.

Lobbyists are hired by drug companies to influence politicians in favor of drug company interests. In aggregate, the highest-spending lobbyists in the 3rd Qtr. of 2020 were in the medical industry.

Dr. Avorn argues Big Pharma’s lobbying power has unduly influenced the FDA to approve drugs that are not effective in treating patients for their diagnosed conditions. Avorn infers Big Pharma is more focused on increasing revenue than on effective review of manufacturer-supplied studies. Avorn argues the FDA has become too dependent on industry fees paid by drug manufacturers asking for expedited drug approvals, and he infers the FDA fails to demand more documentation from manufacturers on their drug research. The author suggests many approved opioids, cancer-treatment drugs, and psychedelics have questionable effectiveness or safety concerns. Misleading or incomplete information provided by drug companies makes applications an approval process, not a fully relevant or studied judgment of the efficacy of new drugs.

Avorn is disappointed in the Trump administration’s selection of Robert Kennedy as the U.S. Secretary of Health and Human Services because of his lack of qualifications.

The unscientific bias of Kennedy and Trump regarding vaccine effectiveness reinforces the likelihood that increased drug manufacturers’ fees become just a revenue source for the FDA. Trump will likely reward Kennedy for decreasing the Department’s overhead by firing research scientists and increasing the revenues collected from drug manufacturers seeking drug approvals.

Trump sees and uses money as the only measure of value in the world.

It is interesting to note that Avorn is a Harvard professor, a member of one of the most prestigious universities in the world. Harvard is being denied government grants by the Trump administration, allegedly because of Harvard’s DEI policy. One is inclined to believe diversity, equity, and inclusion are ignored by Trump because he is part of the white ruling class in America. Trump chooses to stop American aid to the world to reduce the cost of government. The American government’s decisions to starve the world and discriminate against non-whites are a return to the past that will have future consequences for America.

Next, Avorn writes about the high cost of drugs, particularly in the United States. Discoveries are patented in the United States to incentivize innovation, but drug companies game that Constitutional right by slightly modifying a drug’s manufacture when their patent rights near expiration. They renew their patent and control the price of the slightly modified drug, which has the same curative qualities. As publicly held corporations, they are obligated to keep prices as high as the market allows. The consequence leaves many families at the mercy of their treatable diseases because they cannot afford the drugs that could help them.

Martin Shkreli, an American investor who rose to fame and infamy for using hedge funds to buy drug patents and artificially raise their prices solely to increase revenues.

The free-market system in America allows an investor to buy a drug patent and arbitrarily raise its price. Avorn suggests this is a correctable problem with fair regulation and government-sponsored funding of drug research in return for public benefit. Of course, there are some scientists, like Jonas Salk, who refused to privately patent the polio vaccine because it had such great benefit to the health of the world.

Avorn notes U.S. drug costs have been out of control since the 1990s.

Only the rich are able to pay for newer drugs that cost hundreds of thousands of dollars per year. Americans spend over $13,000 per person per year, while Europeans spend around $5,000 and low-income countries under $500. One would think these expenditures to extend life would make Americans live longest. Interestingly, America is not even in the top 10. Hong Kong’s average life expectancy is 85.77 years, Japan’s 85, South Korea’s 84.53. The U.S. average life expectancy is 79.4. To a cynic like me, one might ask what 5 or 6 more years of life are really worth. On the other hand, billionaires and millionaires like Peter Thiel and Bryan Johnson have invested millions in anti-aging research.

Avorn reinforces the substance of Michael Pollan’s book “How to Change Your Mind”, which reenvisions the value of hallucinogens in this century.

Avorn notes hallucinogens’ efficacy is reborn in the 21st century to a level of medical and social acceptance. Avorn is a trained physician, as opposed to Pollan, who holds an M.A. in English, not degrees in science or medicine.

In reviewing Avorn’s informative history, it is apparent that patients should be asking their doctors more questions about the drugs they are taking.

Drugs have side effects and can conflict with other drugs being taken. In this age of modern medicine, many drugs can be effective, but they can also be deadly. Drug manufacturers viewing drug creation only as a revenue producer is a bad choice for society.

Avorn’s history of the drug industry shows failure in American medicine is more than the mistake of placing an incompetent in charge of the U.S.

Taking money away from research facilities diminishes American innovation in medicine and other important sciences. However, research is only as good as the accuracy of its proof of efficacy for the treatment of disease and its adherence to the Hippocratic Oath of “First, do no harm”. A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of scientific discoveries and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.

RISK/REWARD

AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.


A Brief History of Artificial Intelligence (What It Is, Where We Are, and Where We Are Going)

By: Michael Wooldridge

Narrated By: Glen McCready

Michael Wooldridge (Author, British professor of Computer Science, Senior Research Fellow at Hertford College, University of Oxford.)

Wooldridge served as President of the International Joint Conference on Artificial Intelligence from 2015-17 and President of the European Association for AI from 2014-16. He received a number of A.I.-related service awards in his career.

Alan Turing (1912-1954, Mathematician, computer scientist, cryptanalyst, philosopher, and theoretical biologist.)

Wooldridge’s history of A.I. begins with Alan Turing, who holds the honorific title of “father of theoretical computer science and artificial intelligence”. Turing is best known for breaking the German Enigma code in WWII with the development of electromechanical code-breaking machines. He went on to propose the Turing test, which evaluates a machine’s ability to answer questions in a way that exhibits human-like behavior. Sadly, he is equally well known for being a publicly persecuted homosexual who committed suicide in 1954. He was 41 years old at the time of his death.

Wooldridge explains A.I. has had a roller-coaster history of highs and lows with new highs in this century.

Breaking the Enigma code is widely acknowledged as a game changer in WWII; it shortened the war and provided strategic advantage to the Allied powers. However, Wooldridge notes A.I.’s utility declined in the 70s and 80s because applications relied on laborious programming rules that introduced biases, ethical concerns, and prediction errors. Expectations of A.I.’s predictive power seemed exaggerated.

The idea of a neuronal connection system was first proposed in 1943 by Warren McCulloch and Walter Pitts.

In 1958, Frank Rosenblatt developed the “Perceptron”, a program based on McCulloch and Pitts’s idea that made computers capable of learning. However, it was a cumbersome process that failed to give consistent results. In the 1980s, machine learning became more usefully predictive with Geoffrey Hinton’s development of backpropagation, i.e., an algorithm that feeds a network’s prediction errors backward through its layers so the connection weights can be corrected, improving A.I. predictions. Hinton’s 1986 work produced a neural network that worked like the synapse structure of the brain but with far fewer connections. Such limited neural networks gave computers a capability for reading text and collating information.
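Rosenblatt’s learning rule is simple enough to sketch in a few lines of Python. The example below is purely illustrative and not from Wooldridge’s book: a single artificial neuron learns the logical AND function by nudging its weights whenever it misclassifies an example.

```python
# Minimal perceptron sketch (illustrative only; not code from the book).
# The neuron learns the logical AND function from labeled examples.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Train weights w and bias b with Rosenblatt's update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - prediction      # -1, 0, or +1
            w[0] += lr * error * x1          # nudge weights toward
            w[1] += lr * error * x2          # the correct answer
            b += lr * error
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == target
```

The limitation Wooldridge describes is visible in this sketch: a single perceptron can only learn patterns separable by a straight line, which is why later multilayer networks trained with backpropagation were needed.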

Geoffrey Hinton (the “Godfather of AI”) won the 2018 Turing Award.

Then, in 2006, Hinton developed the Deep Belief Network, a type of generative neural network that led to deep learning. These networks offered more connections that improved computers’ image recognition, speech processing, and natural language understanding. Google, with its ability to crawl and index the internet, later acquired deep-learning expertise of its own. Fact-based decision-making and the accumulation of data paved the way for better A.I. utility and predictive capability.

Face recognition capability.

What seems lost in this history is the fact that all of these innovations were created by human cognition and creation.

Many highly educated and inventive people like Elon Musk, Stephen Hawking, Bill Gates, Geoffrey Hinton, and Yuval Harari believe the risks of AI are a threat to humanity. Musk calls AI a big existential threat and compares it to summoning a demon. Hawking felt AI could evolve beyond human control. Gates expressed concern about job displacement with long-term negative consequences and ethical implications that would harm society. Hinton believed AI would outthink humans and pose unforeseen risks. Harari believed AI would manipulate human behavior, reshape global power structures, and undermine governments.

All fears about AI have some basis for concern.

However, how good a job has society done throughout history without AI? AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

FUTURE A.I.

Human nature will not change, but A.I. will not destroy humanity; it will ensure humanity’s survival and improvement.

Books of Interest
 Website: chetyarbrough.blog

Human Compatible (Artificial Intelligence and the Problem of Control)

By: Stuart Russell

Narrated By: Raphael Corkhill

Stuart Jonathan Russell (British computer scientist; studied physics at Wadham College, Oxford, receiving a BA with first-class honors in 1982; moved to the U.S. and received a PhD in computer science from Stanford.)

Stuart Russell has written an insightful book about A.I. as it currently exists, with speculation about its future. In one sense Russell agrees with Marcus and Davis’s assessment of today’s A.I. He explains A.I. is presently not intelligent but argues it could be in the future. The only difference between the assessments in Marcus and Davis’s “Rebooting AI” and “Human Compatible” is that Russell believes there is a reasonable avenue for A.I. to have real and beneficial intelligence. Marcus and Davis are considerably more skeptical than Russell about A.I. ever having the equivalent of human intelligence.

Russell infers A.I. is at a point where gathered information changes human culture.

Russell argues A.I. information gathering is still too inefficient to give the world safe driverless cars but believes it will happen. At some point, driverless cars will cause fewer deaths on the highway than cars under the control of their drivers, because A.I. will have accumulated enough information to reduce traffic deaths.

A.I. will reach a point of information accumulation that will reduce traffic deaths.

After listening to Russell’s observation, one conceives of something like a pair of glasses on the face of a person being used to gather information. That information could be automatically transferred, through improvements in Wi-Fi, to a computing device that collates what a person sees into a database for individual human thought and action. The glasses would become a window of recallable knowledge for their wearer. A.I. becomes a tool of the human mind, using real-world data to inform what a human brain comprehends from his/her experience in the world. This is not exactly what Russell envisions, but the idea is born from what he argues is the potential of A.I. information accumulation. The human mind remains the seat of thought and action with the help of A.I., not under A.I.’s direction or control.

Russell’s ideas about A.I. address the concerns Marcus and Davis have about intelligence remaining in the hands of humans, not a machine that becomes sentient.

Russell agrees with Marcus and Davis that growth of A.I. does have risk. However, Russell goes beyond Marcus and Davis by suggesting the risk is manageable. Risk management rests on understanding that human action is based on knowledge organized to achieve objectives. If one’s knowledge is more comprehensive, thought and action are better informed. Objectives can be more precisely and clearly formed. Of course, there remains the danger of bad actors with the advance of A.I., but that has always been the risk of one who has knowledge and power. The minds of a Mao, Hitler, Beria, Stalin, and other dictators and murderers of humankind will still be among us.

The competition and atrocities of humanity will not disappear with A.I. Sadly, A.I. will sharpen the dangers to humanity, but those dangers will meet resistance from others who are equally well informed. Humanity has managed to survive with less recallable knowledge, so why would humanity be lost with more? As has been noted many times in former book reviews, A.I. is, and always will be, a tool of human beings, not a controller.

The world will have driverless cars, robotically produced merchandise, and cultures based on A.I.’s service to others in the future.

Knowledge will increase the power and influence of world leaders to do both good and bad in the world. Human nature will not change, but A.I. will not destroy humanity. Artificial Intelligence will ensure human survival and improvement. History shows humanity has survived famine, pestilence, and war, with most cultures better off than when human societies came into existence.

THINKING

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

Books of Interest
 Website: chetyarbrough.blog

Rebooting AI (Building Artificial Intelligence We Can Trust)

By: Gary Marcus and Ernest Davis

Narrated By: Kaleo Griffith

These two academics explain much of the public’s misunderstanding of the current benefit and threat of Artificial Intelligence.

Marcus and Davis note that A.I. cannot read and does not think but only repeats what it is programmed to report.

They are not suggesting A.I. is useless but that its present capabilities are much more limited than the public believes. In terms of product search and economic benefit to retailers, A.I. is a gold mine. But A.I.’s ability to safely move human beings in self-driving cars, free humanity from manual labor, or predict cures for the diseases of humanity is far in the future. A.I. is only a just-born baby.

Self-driving cars, robot servants, and cures for medical maladies remain works in progress for Artificial Intelligence.

Marcus and Davis note A.I.’s usefulness remains fully dependent on human reasoning. It is a tool for recall of documented information and repetitive work. A.I. is not sentient or capable of reasoning based on the information in its memory. Lacking reasoning capability, it answers questions only by reciting whatever information has been fed into its memory. If sources of that information conflict, the answers one receives from A.I. may be right, wrong, conflicted, or unresponsive. One can as easily get a wrong answer from A.I. as a right one because it is only repeating what it has gathered from the past.

What Marcus and Davis show is how important it is that questions asked of Microsoft’s Copilot, ChatGPT, Watson, or some other A.I. platform be phrased carefully.

The value of A.I. is that it can help one recall pertinent information only if questions are precisely worded. This is a valuable supplement to human memory, but it is not a reasoned or infallible resource.

Marcus and Davis explain “Deep Learning” is not a substitute for human reasoning, but it is a supplement that offers more precise recall of recorded information.

Even multilayered neural networks like deep learning, which attempt to mimic human reasoning by finding patterns in raw data, can be wrong or confused. One is reminded of the Socratic belief, “I know that I know nothing.” Truth is always hidden within a search for meaning, i.e., a gathering of information.

The true potential of A.I. is in its continued consumption of all sources of information to respond to queries based on a comprehensive base of information. The idea of an A.I. that can read, hear, and collate all the information in the world is at once frightening and thrilling.

The risk is the loss of human freedom. The reward is the power of understanding. However, the authors explain there are many complications for A.I. to usefully capitalize on all the information in the world. Information has to be understood in the context of its contradictions, its ethical consequences, its biases, and the inherent unpredictability of human behavior. Even with knowledge of all information in the world, decisions based on A.I. do not ensure the future of humanity. Should humanity trust A.I. to recommend what is in its best interest based only on past knowledge?

Marcus and Davis argue A.I. is not, does not, and will not think.

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

AMERICAN HOPE

From Fukuyama’s intellectual musings to our eyes and ears, one hopes he is correct about America’s future in the technological age.

Books of Interest
 Website: chetyarbrough.blog

The Great Disruption (Human Nature and the Reconstitution of Social Order)

By: Francis Fukuyama

Francis Fukuyama (Author, political scientist, political economist, international relations scholar.)

Francis Fukuyama argues America is at the threshold of a social reconstitution. Fukuyama believes we are at Gladwell’s “Tipping Point” that is changing social norms and rebuilding America’s social order. He argues the innovation of technology, like the industrial revolution, is deconstructing social relationships and economics while reconstructing capitalist democracy.

Big technology companies like Amazon, Google, and Facebook have outsized influence on American society. They change the tone of social interaction through their ability to disseminate both accurate and misleading information. They erode privacy and create algorithms tailored to disparate interest groups that polarize society. The media giants’ objective is to increase clicks on their platforms to attract more advertisers who pay for public exposure of their services, merchandise, and brands.

To reduce the outsized influence of big tech companies, Fukuyama suggests more technology is the answer.

There should be more antitrust measures instituted by the government to break up monopolistic practices and encourage competition with large technology companies. Algorithms created by government oversight organizations could ensure transparency and reduce harmful content, diminishing big tech companies’ influence on society. (One doubts expansion of government agencies is a likely scenario in today’s government.)

On the one hand, technology has improved convenience, communication, and a wider distribution of information.

On the other, technology has flooded society with misinformation, invaded privacy, and polarized society. Technology has created new jobs while automation has increased the loss of traditional industry jobs. Trying to return to past labor-intensive manufacturing is a fool’s errand in the age of technology.

Luddites during the Industrial Revolution.

Like the industrial revolution, the tech revolution’s social impact is mixed, with potential for greater social isolation and job displacement, plus the wide distribution of misinformation. The positives of new technology are improvements in healthcare products and services, renewable energy, and climate understanding, with potential for improved control.

Face-to-face interactions become less and less necessary. Children’s access to technology impacts parental supervision and relationships. Fukuyama suggests setting boundaries for technology use needs to be a priority in American families. Technology can open the door to better education, but it is also a source of misinformation from the internet of things. Employers have the opportunity to help with work-life balance by encouraging flexible hours and remote work. (Oddly, that suggestion is being undermined by the current government administration and many American companies.)

Economic growth, access to information, and global connectivity have been positively impacted by technology. However, the concentration of power, misinformation, and surveillance of social media has diminished privacy and eroded individual freedom. There are concerns about technology and how it is good and bad for democratic capitalism.

The good lies in increased efficiency, innovation, and the creation of new markets through globalization. However, today’s American government shows how tariffs are a destroyer of globalization. Fukuyama implies A.I. and automation are displacing workers and aggravating economic inequality because their true potential is both misunderstood and misused. Personal data is used to manipulate consumers in ways that challenge the balance between corporations and consumers.

Fukuyama argues private parties will grow in America to create software that will filter and customize online services.

With that effort, the influence of big tech companies will be diminished. With decentralization of big tech power and influence, society will theoretically become less polarized and more consensus oriented. The capitalist opportunity for tech-savvy startups that diminish the influence of big tech companies will re-create the diversification that the matured industrial revolution gave to new manufacturers. Like Standard Oil and other conglomerates of the industrial revolution, businesses like Amazon, Google, and Facebook will face competition that diminishes their power and influence.

American Government will grow to regulate the internet of things just as it has grown to regulate banks, industries, and social services.

Service to citizens will become a bigger part of the economy as a replacement for manufacturing. Family life will re-invent itself as a force in society because time saved from manufacturing products can be used to improve human relationships.

From Fukuyama’s intellectual musings to our eyes and ears, one hopes he is correct about America’s future in the technological age.

AI REGULATION

As Suleyman and Bhaskar infer, ignoring the threat of AI because of the difficulty of regulation is no reason to abandon the effort.

Books of Interest
 Website: chetyarbrough.blog

The Coming Wave

By: Mustafa Suleyman with Michael Bhaskar

Narrated By: Mustafa Suleyman

This is a startling book about AI because it is written by an AI entrepreneur, a co-founder and the former head of applied AI at DeepMind. He is also the CEO of Microsoft AI. The authors argue what is not understood by many who discount the threat of AI: AI can collate information that creates societal solutions, as well as threats, beyond the thought and reasoning ability of human beings.

“The Coming Wave” is startling because it is written by two authors who have an intimate understanding of the science of AI.

They argue it is critically important for AI research and development to be internationally regulated with the same seriousness that accompanied the research and use of the atom bomb.

Those who have read this blog know this writer’s perspective: AI, whether it poses greater risk than the atom bomb or not, is a tool, not a controller, of humanity. The AI threat example given by Suleyman and Bhaskar is that AI has the potential to invent a genetic modification that could as easily destroy as improve humanity. Recognizing AI’s danger is commendable, but as with the atom bomb, there will always be a threat of miscreant nations or radicals using a nuclear device or AI to initiate Armageddon. Obviously, if AI is the threat they suggest, there needs to be an antidote. The last chapters of “The Coming Wave” offer their solution. The authors suggest a 10-step program to regulate or ameliorate the threat of AI’s misuse.

Like alcoholism treatment and nuclear bomb deterrence, Suleyman’s program will only be as effective as the willingness of people to follow its rules.

There are no simple solutions for regulation of AI, and, as history shows, neither Alcoholics Anonymous (AA) nor the Treaty on the Prohibition of Nuclear Weapons (TPNW) has been completely successful.

Suleyman suggests the first step in regulating AI begins with creating safeguards for the vast LLM capabilities of Artificial Intelligence.

This will require hiring technicians to monitor and correct incorrect or misleading information accumulated and distributed by AI users. The concern of many will be restriction of “freedom of speech”. Two further concerns are the cost of such a bureaucracy and who monitors the monitors. Who draws the line between fact and fiction? When does information deletion become a distortion of fact? This bureaucracy will be responsible for auditing AI models to understand what capabilities and limitations they have.

A second step is to slow the process of AI development by controlling the sale and distribution of the hardware components of AI to provide more time for reviewing new development impacts.

With lucrative incentives for new AI capabilities in a capitalist system there is likely to be a lot of resistance by aggressive entrepreneurs, free-trade and free-speech believers. Leaders in authoritarian countries will be equally incensed by interference in their right to rule.

Transparency is a critical part of the vetting process for AI development.

Suleyman suggests critics need to be involved in new developments to balance greed and power against utilitarian value. There has to be an ethical examination of AI that goes beyond profitability for individuals or control by governments. The bureaucracies for development, review, and regulation should be designed to adapt, reform, and implement regulations to manage AI technologies responsibly. These regulations should be established through global treaties and alliances among all nations of the world.

Suleyman acknowledges this is a big ask and notes there will be many failures in getting cooperation or adherence to AI regulation.

That was and is true of nuclear armament, and since 1945 no nuclear weapons have been used to attack other countries. The authors note there will be failures in trying to institute these guidelines, but with public awareness and grassroots support, there is hope for the greater good that can come from AI.

As Suleyman and Bhaskar infer, ignoring the threat of AI because of the difficulty of regulation is no reason to abandon the effort.