Spinney makes some interesting points that may or may not be the principal origin and evolution of language difference. Her ideas seem plausible, just as Newton’s physics seemed entirely correct until Einstein proved otherwise.
Books of Interest Website: chetyarbrough.blog
Proto (How One Language Went Global)
Author: Laura Spinney
Narrated By: Emma Spurgin-Hussey
Laura Spinney (British science journalist, novelist, and non-fiction writer.)
Laura Spinney has written a challenging book for readers without a background in linguistics. Her book, “Proto”, focuses on a single ancient language, Proto-Indo-European (PIE), that is said to have spread so widely that its descendants are spoken by nearly half the world today. She is not suggesting a new origin theory but argues that languages are synthesized through shared structure and everyday use. She suggests genetics, human cooperative effort, and recurring mythological beliefs are the basis of adopted languages.
What contrasts Spinney’s theory of the spread of a language with others is that hers is based on the wide use of peoples’ words in daily activity rather than dictation by leaders who exercise control over a gathered group of people.
Spinney’s historical view is that language development lies in a people’s events of the day, repeated word use, and changing mythological stories that cultivate and spread a language. The language grows, changes, and spreads through wider adoption by those who communicate daily experiences to others. As inventions like horseback riding and wheeled transport showed their value, descriptions of them spread new words from one person to many in a culture, who in turn communicated their value to others.
As one reads/listens to Spinney’s story, differences in language appear to be based on the timing of ancient cultures’ growth, when one area of the world has been populated longer than another.
Every populated area creates its own mythologies. Mythologies differ because they are created by local events, burial rituals, and the desire to explain the “not understood” to others. Additionally, people live in environmentally different areas of the world. A Native American has little reason to precisely or creatively describe snow, whereas an Eskimo who deals with snow daily uses more precise and creative words to describe snow’s characteristics and its effect on their lives.
Whether true or not, this is an interesting hypothesis on the growth of language.
PIE, of course, is only one family of languages, but her idea of its spread seems applicable to other equally important language families. As in all stories of ancient cultures, there is misrepresentation or misunderstanding because no one was there as the languages formed. Spinney acknowledges the fragmentary evidence for her theory, which makes her conclusions tentative, if not suspect. Human nature is to relate facts that make sense of one’s own beliefs, and research bias means actual experience may not be accurately recalled or reported. The power of leaders is diminished or discounted by Spinney’s theory of the spread of language.
Spinney believes PIE originated among the Yamnaya people, north of the Black Sea in what is now eastern Ukraine and southern Russia.
From there it spread westward into Europe, southward into Anatolia, eastward into Central and South Asia, and into the Tarim Basin in western China. She believes PIE’s expansion was driven primarily by technological innovations like the wheel and the domestication of horses. This is interesting because it suggests the spread of language came not from conflicts among warring regions but from the utility of new technological discoveries.
Will today’s technology bring nations together or reinforce the silos of our differences?
Spinney makes some interesting points that may or may not be the principal origin and evolution of language difference. Her ideas seem plausible, just as Newton’s physics seemed entirely correct until Einstein proved otherwise.
Steven Johnson notes how innovation and societal change do not come from a singular genius. They come from a confluence of geniuses, managers, and consumers.
How We Got to Now (Six Innovations That Made the Modern World)
Author: Steven Johnson
NarratedBy: George Newbern
Steven Johnson (Author, journalist)
Steven Johnson has written a moderately interesting book about innovation. He writes of six discoveries that came from the experience of everyday life. Glass, temperature, sound, health, time, and light are taken for granted in the 21st century. What Johnson explains is how these six elements were the basis of extraordinary human innovation and change in society.
Barovier Art Deco Murano glass pendant.
Glass has been around for millennia, with some of the earliest examples found in Ancient Egypt. Heat fusing desert sands created glass beads that became jewelry in pre-Christian times. As the world industrialized, glass gathered new uses. Glass became mirrors to reflect human images, lenses for eyeglasses, windows, and structural components of buildings. From 15th-century Murano art to Leeuwenhoek’s microscopes to Galileo’s telescopes to stronger, lighter high-rise construction materials to the invention of fiber-optic cables, glass changed society.
Willis Carrier (1876-1950, designed the first modern air conditioning system in 1902.)
The benefit of cold temperatures began with preserving food, and the trade in natural ice carried that benefit to warmer parts of the world. In warmer climates, the experience of food spoilage and the need for shelter from heat incentivized society to invent refrigeration for food and air conditioning for buildings. Public health and food safety improved with refrigeration. The cold preserved blood for future medical use and food for later consumption. The value of extreme cold led to cryogenics that aided fertility treatments by freezing sperm, eggs, and embryos for long-term biological storage.
Hedy Lamarr (1914-2000, Hollywood star who patented a frequency-hopping radio signal device for secret messages during WWII.)
Johnson explains how sound innovation led to everything from the phonograph to sonar to coded messages during the war years. During WWII, secret communications between military strategists were critical. The often-recalled code-breaking story of Alan Turing and the Enigma machine was a breakthrough that let the Allies read German secrets. Interestingly, the famous actress Hedy Lamarr patented a radio signal device for the Allied powers’ secret communications.
As cities formed and people congregated in closer proximity, innovations in sanitation, water, and air purification grew to improve public health.
Johnson notes how light innovation grew from candles to light bulbs to lasers that changed the way humans can communicate and live after dark. Thomas Edison and the invention of the light bulb required the management skill of many to spread light around the world.
Thomas Edison (1847-1931)
An innovator’s timing makes a difference because the lack of a consumer can delay change like it did with Charles Babbage and Ada Lovelace in their 1837 concept of a general-purpose computer.
Charles Babbage (1791-1871), Ada Lovelace (1815-1852)
Ada Lovelace, the daughter of Lord Byron, became the first computer software programmer in history, nearly 100 years before computer programming became important.
To improve human productivity, time became important. Precise timekeeping improved productivity, navigation, industrialization, and global coordination.
Johnson notes how innovation and societal change do not come from a singular genius. They come from a confluence of geniuses, managers, and consumers. He suggests Barovier, Leeuwenhoek, Galileo, Tudor, Carrier, and Lamarr were geniuses in their innovative ideas about glass, cold, and sound, but it is a confluence of ideas, accidents, collaborations, and market desire that made them successful. The same may be said of Edison with light, Jobs with computers, and Musk with electric vehicles.
Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.
Deep Medicine (How Artificial Intelligence Can Make Healthcare Human Again)
Author: Eric Topol
Narrated By: Graham Winton
Eric Topol (Author, American cardiologist, scientist, founder of Scripps Research Translational Institute.)
Eric Topol is what most patients want to see in a Doctor of Medicine. “Deep Medicine” should be required reading for students wishing to become physicians. One suspects Topol’s view of medicine is as empathetic as it is because of his personal chronic illness. His personal experience as a patient and physician gives him an insightful understanding of medical diagnosis, patient care, and treatment.
Topol explains how increasingly valuable and important Artificial Intelligence is in the diagnosis and treatment of human illness.
AI opens the door for improved diagnosis and treatment of patients. A monumental caveat to AI’s potential is its exposure of personal history not only to physicians but to governments and businesses. Governments and businesses inherently have agendas that may conflict with one’s personal health and welfare.
Topol notes China is ahead of America in cataloging citizens’ health because of their data collection and AI’s capabilities.
Theoretically, every visit to a doctor can be precisely documented with an AI system. The good of that system would be improved continuity of medical diagnosis and treatment of patients. The risk is that it can be exploited by governments and businesses wishing to control or influence a person’s life. One is left with a concern about protecting oneself from a government or business that has access to citizen information. In the case of government, it is power exercised over freedom. Both government and businesses can use AI information to influence human choice. With detailed information about what one wants, needs, or is undecided upon, a person can be manipulated through the personal knowledge accumulated by AI.
Putting loss of privacy and “Brave New World” negatives aside, Topol explains the potential of AI to immensely improve human health and wellness.
Cradle-to-grave information on human health would aid research, treatment of illnesses, and cures for present and future patients. Topol gives the example of collecting biometric health information that can reveal the secrets of ideal diets to aid better health during one’s life. Topol explains that every person has a unique biometric system that processes food in different ways; some foods may be harmful to some people and not others because of how their bodies metabolize what they eat. It is possible to design diets to meet the specifications of one’s unique digestive system, improving health and avoiding foods that are not healthily metabolized. An AI could be devised to analyze individual biometrics and recommend more healthful diets and more effective medicines for users of the system.
In addition to improvements in medical imaging and diagnosis with AI, Topol explains how medicine and treatments can be personalized based on biometric analysis showing how medications can be optimized for specific patients in a customized way. Every patient is unique in the way they metabolize food and drugs. AI offers the potential for customization that maximizes recovery from illness, infection, or disease.
Another growing AI metric is measurement of an individual’s physical well-being. Monitoring one’s vital signs is becoming common with devices like the Apple Watch, which accumulate information that can be tracked for healthful living. One can begin to improve one’s health and life with more information about one’s pulse and blood pressure. Instantaneous reports may warn people of risks, alongside an accumulated record of healthful exercise levels and recovery times.
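The kind of instantaneous warning described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual algorithm; the readings, window size, and 20-bpm threshold are invented for the example.

```python
# Minimal sketch: flag pulse readings that deviate sharply from a
# rolling baseline. All numbers and thresholds are illustrative,
# not medical guidance.

def flag_anomalies(pulses, window=5, threshold=20):
    """Return indices of readings that exceed the average of the
    preceding `window` readings by more than `threshold` bpm."""
    flagged = []
    for i in range(window, len(pulses)):
        baseline = sum(pulses[i - window:i]) / window
        if pulses[i] - baseline > threshold:
            flagged.append(i)
    return flagged

readings = [62, 64, 63, 61, 65, 64, 63, 110, 64, 62]
print(flag_anomalies(readings))  # the spike at index 7 is flagged
```

A real device would of course smooth sensor noise and account for exercise context, but the principle — comparing the latest reading against an accumulated personal baseline — is the same.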
Marie Curie (Scientist, chemist, and physicist who played a crucial role in developing x-ray technology, received 2 Nobel Prizes, died at the age of 66.)
Topol offers a number of circumstances where AI has improved medical diagnosis and treatment. He notes how AI analysis of radiological imaging improves diagnosis of bodily abnormalities because of its relentless review of past imaging, a scope beyond the knowledge or memory of any experienced radiologist. Topol notes a number of studies showing AI reads radiological images better than experienced radiologists.
One wonders if AI is a Hobson’s choice or a societal revolution greater than the discovery of agriculture (10,000 BCE), the rise of civilization (3000 BCE), the Scientific Revolution (16th to 17th century), the Industrial Revolution (18th to 19th century), the Digital Revolution (20th to 21st century), or Climate Change in the 21st century. Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.
“Apple in China” is a message to the entire world about the risks of technological relocation solely based on reducing costs of labor in a politically and culturally divided world. This is a book every employer should listen to or read.
Apple in China (The Capture of the World’s Greatest Company)
Author: Patrick McGee
Narrated By: Fred Sanders
Patrick McGee (Author, technology/business journalist, San Francisco Correspondent for “Financial Times”.)
Patrick McGee has written an important book about world trade. He reveals a shocking story about Apple and the risk of basing a corporation’s economic future on a single aspect of its success, i.e., the cost of manufacturing. This is a story of two companies and the world’s labor market. Foxconn and Apple look to China, Taiwan, South Korea, Ireland, and other countries that vie for the role of the cheapest and best labor markets in the world. Foxconn’s, and much of Apple’s, success as tech companies rests on finding the world’s cheapest labor for the manufacture of product. However, McGee explains how that view makes Apple and other international corporations vulnerable to nation-states with a mix of economic and political agendas, and how politics can become a greater cost than benefit to a business enterprise.
The power of political leadership in business enterprise is on display in America today with Donald Trump and his doomed effort to return America to a 20th century manufacturing behemoth.
McGee’s story is about the impact of China’s government on Apple and Foxconn led by Tim Cook and Terry Gou. Tim Cook is the wunderkind hired by Steve Jobs before his death, and Terry Gou is the Taiwanese billionaire who founded Foxconn which is now headed by Young Liu who was educated in Taiwan and the United States.
Tim Cook (CEO of Apple Inc.)
McGee explains why and how Tim Cook became the CEO of Apple. Jobs, who was known as a poor manager of people, needed someone who emulated his drive but understood how to manage an organization so it could grow bigger while remaining profitable. Cook is characterized as someone with a near-photographic memory. His analysis of reports from subordinates could advance company goals or change a subordinate’s understanding of any proposal that was not practicable or goal-focused. What McGee argues is that Tim Cook’s focus on the cost of manufacturing became an Achilles heel when he hired Foxconn to organize Apple’s iPhone manufacturing mostly in one country, China.
To accomplish iPhone manufacture in China, Cook had to transfer thousands of American engineers to train laborers in the assembly of Apple products.
Cook needed a go-between, which became Foxconn, a Taiwanese company that is the largest electronics labor contractor in the world. Foxconn is also China’s largest private-sector employer, with over 800,000 employees. Foxconn employees assemble iPhones, semiconductors, and electronics for some of the largest American technology companies, e.g., Apple, Microsoft, and Dell. Foxconn’s relationship with China is further complicated by the international relationship between Taiwan and China. Foxconn has built a lucrative business in the tech industry because of the industry’s labor intensity and tech companies’ desire to minimize overhead to improve profits.
World trade has made Foxconn the leading international labor subcontractor in the world. They employ an estimated 800,000 employees in China alone.
The desire to bring Taiwan under the control of communist China is a background conflict between Xi and Terry Gou. It may be unlikely that Gou would ever be elected President of Taiwan, but his candidacy is a cloud of suspicion to knowledgeable Chinese, Taiwanese, and American leaders. McGee notes Foxconn’s tax audits and land-use investigations by Chinese authorities that some believe are politically motivated. Foxconn has been criticized for poor working conditions because of incidents of worker protests, suicides, and labor strikes. China’s posture on those working conditions is ambiguous and most American businesses are ignorant or uncaring. A China crackdown on labor conditions would have wide effects on the global tech industry.
For Apple to lower the costs of iPhone assembly, Foxconn contracted Chinese workers at low wages, under conditions that would be considered unfair labor practices in America, to assemble iPhones.
This benefited Apple in the first years of its association with Foxconn in China. However, later in the transition, President Xi spread false reports of poor and unfair warranty practices being offered to Chinese consumers of Apple products. Contrary to Xi’s claims, McGee explains that Apple warranties were the same in China as they were throughout the world.
McGee infers politics were behind Xi’s false claims about iPhone warranties.
China’s economy benefited from Apple’s move for cheaper manufacturing costs. China gained an immense technology boost from the retraining of Chinese citizens by Apple’s experienced engineers. With iPhone manufacturing in China, Apple’s revenues rose from $24 billion in 2007 to $201 billion in 2022. Apple invested an estimated $275 billion in China’s economy over five years. However, with Xi’s lies and vilification of Apple’s warranty, Chinese smartphone giants like Huawei, Xiaomi, Oppo, and Vivo increased sales. One presumes Tesla followed a similar cost-and-benefit path with its labor and technology transfer to China’s electric vehicle manufacturers.
McGee notes the bad publicity for Apple in the Chinese market threatens Apple’s future in three ways.
One, loss of sales in China; two, a significant erosion of its low-cost manufacturing advantage as Chinese labor costs rise; and three, Apple’s technology transfer to Chinese companies. Added to those lost advantages are Apple’s relocation costs to another country for iPhone manufacture.
GENERAL GEORGE C. MARSHALL (1880-1959)
An interesting comparison McGee makes is that Apple’s $275 billion investment in China for iPhone assembly is more than double the amount spent on the Marshall Plan to rebuild Europe after WWII.
McGee notes Apple has a supply chain vulnerability from the Chinese government’s relationship with key suppliers of iPhone components wherever they are assembled. “Apple in China” is a message to the entire world about the risks of technological relocation solely based on reducing costs of labor in a politically and culturally divided world. This is a book every employer should listen to or read.
Musk, like all human beings, is imperfect. His association with a President who feels money is more important than humanity only feeds Musk’s ineptitude as a manager of people.
Hubris Maximus (The Shattering of Elon Musk)
By: Faiz Siddiqui
Narrated By: André Santana
Faiz Siddiqui (Author, technology reporter for The Washington Post)
Faiz Siddiqui exposes the character of Elon Musk as a brilliant entrepreneur whose outsized pride reflects an arrogance that diminishes his genius. Musk’s successes with Tesla and SpaceX equal, and in some ways exceed, the business successes of John D. Rockefeller and Steve Jobs. In wealth, Musk exceeds Rockefeller; in inventiveness, he competes with Jobs.
As brilliant as Musk shows himself to be, his fragile ego diminishes his genius.
Siddiqui reveals how petty Musk can be while balancing that pettiness with his contribution to creative ideas that will live far beyond his mortal life. Musk’s development of space travel and communication satellites for the world with a non-governmental, free enterprise operation is a tribute to the power of capitalism. His next immense contribution, though controversial and a work in progress, will be self-driving transportation.
Elon Musk’s Successful Return of Rockets Launched into Space.
Siddiqui’s picture of Musk’s flawed personality is somewhat balanced by the image of a person driven to succeed. However, that drive does not naturally translate to organizational performance. Musk is not a developer of people and should not be in charge of an organization’s management. Like the Apple employees who kept some of their work undisclosed to Steve Jobs when the mobile phone was being considered, Musk needs to leave management of employees to others. People management is a skill set Musk does not have, as was made quite clear by his acquisition of Twitter and his work with DOGE. DOGE feeds Musk’s managerial weaknesses with President Trump’s mistaken belief that the cost of government is more important than its effectiveness. DOGE is a growing tragedy of American governance.
Musk is right about the value of self-driving vehicles, but he is trying to produce the wrong product to prove his belief.
Self-driving vehicles will reduce traffic accidents, injuries, and deaths, but a different product is needed to achieve that goal. The game of Go is estimated to have 10 to the 172nd power possible positions. Self-driving cars probably face a similarly astronomical number of possible causes of accidents.
Musk, or someone with his creative genius, needs to create a product that can be sold to all vehicle manufacturers.
This newly invented product would use AI to learn and reinforce understanding of vehicular movements, accidents, and incidents. That accumulated information would allow creative play in the same way Go became an unbeatable game for human beings playing against a programmed computer. Musk is putting the cart before the horse by building cars first and then trying to make them safe, self-driving vehicles. The first step is to gather information from as many driven vehicles as possible, collate that information, and use computing power to creatively play with it. That information, like learning the moves of Go, would create self-driving algorithms that reduce self-driving vehicle accidents, injuries, and deaths.
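The gather-then-collate idea above can be sketched in miniature: many vehicles each contribute locally observed statistics, and a shared model is updated from the pooled data. The scenario names, risk numbers, and averaging scheme below are hypothetical simplifications, not anything from the book.

```python
# Toy sketch of the data-first idea: pool per-vehicle risk estimates
# for driving scenarios into one shared picture. Everything here is a
# hypothetical simplification of fleet-wide learning.

def pool_fleet_updates(vehicle_stats):
    """Average each vehicle's estimated risk for every scenario it saw."""
    pooled = {}
    for stats in vehicle_stats:
        for scenario, risk in stats.items():
            pooled.setdefault(scenario, []).append(risk)
    return {scenario: sum(r) / len(r) for scenario, r in pooled.items()}

fleet = [
    {"left_turn_unprotected": 0.30, "merge_highway": 0.10},
    {"left_turn_unprotected": 0.50, "merge_highway": 0.20},
    {"left_turn_unprotected": 0.40},
]
print(pool_fleet_updates(fleet))
# left_turn_unprotected averages to 0.4; merge_highway to 0.15
```

A real system would train on raw sensor data rather than averaging summary numbers, but the design point is the same: the breadth of the pooled fleet data, not any single car, is what the algorithm learns from.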
A sad reveal in “Hubris Maximus” is that an American treasure, Elon Musk, is being vilified for the wrong reasons.
Musk’s contribution to the reduction of air pollution has benefited the world. His vision of interstellar travel may be the next step in human expedition, exploration, and habitation of the universe. Earth’s interconnectedness is vitally enhanced by Musk’s satellite system. The universe is humanity’s next frontier.
Musk, like all human beings, is imperfect. His association with a President who feels money is more important than humanity only feeds Musk’s ineptitude as a manager of people.
Humans will learn to use and adapt to Artificial General Intelligence in the same way they adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals.
How to Think About AI (A Guide for the Perplexed)
By: Richard Susskind
Narrated By: Richard Susskind
Richard Susskind (Author, British IT adviser to law firms and governments; earned an LL.B in Law from the University of Glasgow in 1983 and a doctorate on artificial intelligence and the law at the University of Oxford.)
Richard Susskind is another historian of Artificial Intelligence. He extends that history to AI’s next generation, Artificial General Intelligence (AGI), a future discipline suggesting AI will continue to evolve until it can perform any intellectual task that a human can.
AI was officially founded in 1956 at a Dartmouth conference attended by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. These men were the foundation of what became Artificial Intelligence. Conceptually, AI came from Alan Turing’s work before and during WWII, when the code-breaking machines he helped design cracked Germany’s Enigma cipher.
McCarthy and Minsky were computer and cognitive scientists; Rochester was an engineer who became an architect of IBM’s first commercial computer; Shannon and Turing were mathematicians with an interest in cryptography and its application to code breaking.
Though not mentioned by Susskind, two women, Ada Lovelace and Grace Hopper, played roles in early computing (Lovelace as an algorithm creator for Charles Babbage in the 19th century, and Hopper as a computer scientist whose compilers translated human-readable code into machine language for the Navy).
Susskind’s history takes listener/readers to the next generation of AI with Artificial General Intelligence (AGI).
Susskind recounts the history of AI’s ups and downs. As noted in earlier book reviews, AI’s potential became known during WWII but went into hibernation after the war. Early computers lacked the processing capability to support complex AI models. The American federal government cut back on computer research for a time after unrealistic expectations proved unachievable given those processing limitations, and AI research failed to deliver practical applications.
The invention of the transistor in the late 1940s and the microprocessor in the 1970s reinvigorated AI.
These inventions addressed the processing limitations of earlier computers. John Bardeen, Walter Brattain, and William Shockley, working for Bell Laboratories, invented the transistor, which replaced bulky vacuum tubes and enabled miniaturized, more efficient electronic devices. In the 1970s, Marcian “Ted” Hoff, Federico Faggin, and Stanley Mazor, who worked for Intel, integrated computing functions onto single chips that revolutionized computing. The world rediscovered the potential of AI with these improvements in power, and McCarthy and Minsky refined AI concepts and methodologies.
Geoffrey Hinton (British-Canadian computer scientist), Yann LeCun (French-American computer scientist)
With the help of others like Geoffrey Hinton and Yann LeCun, the foundation for modern AI was reinvigorated with deep learning, image recognition, and processing that improves probabilistic reasoning. Human decision-making is accelerated by AI. Susskind suggests a blurred line is created between human and machine control of the future with the creation of Artificial General Intelligence (AGI).
With AGI, there is the potential for loss of human control of the future.
Societal goals may be unduly influenced by machine learning that creates unsafe objectives for humanity. The pace of change in society would accelerate with AGI which may not allow time for human regulation or adaptation. AGI may accumulate biases drawn from observations of life and history that conflict with fundamental human values. If AGI grows to become a conscious entity, whatever “conscious” is, it presumably could become primarily interested in its own existence which may conflict with human survival.
Like history’s growth of agricultural development, religion, humanist enlightenment, the industrial revolution, and technology, AGI has become an unstoppable cultural force.
Susskind argues for regulation of AGI. Is Artificial General Intelligence any different from other world-changing cultural forces? Yes and no. It is different because AGI has wider implications: it reshapes, or may replace, human intelligence. One possible solution, noted by Ray Kurzweil, is the melding of AI and human intelligence to make survival a common goal. Kurzweil suggests humans should go with the flow of AGI, just as they did with agriculture, religion, humanism, and industrialization.
Susskind suggests restricting AGI’s ability to act autonomously with shut-off mechanisms or accessibility restrictions. He also suggests programming AGI with ethical constraints that align with human values and a rule of “do no harm,” like the Hippocratic oath doctors take for their patients.
In the last chapters of Susskind’s book, several theories of human existence are identified. Maybe the world and the human experience of it are only creations of the mind, not nature’s reality. What we see, feel, touch, and do may be a “Matrix” of ones and zeros, and AGI is just what humans think they see, not what it is. Susskind speculates on virtual reality, developed by technology companies, becoming humanity’s only reality.
AI and AGI are threats to humanity, but the threat is in the hands of human beings. As the difference between virtual reality and what is real becomes less clear, it will be used by human beings who could accidentally, or with prejudice or craziness, destroy humanity. The same might be said of nuclear war, which is also in human hands. AI and AGI are not the threat. Conscious human beings are the threat.
Humans will learn to use and adapt to Artificial General Intelligence in the same way they adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals. However, if science gives consciousness (whatever that is) to AI, all bets are off. The end of humanity may be in that beginning.
The author makes a point in “The Dream Hotel”, but her book is a tedious repetition of the risk of human digitization that is a growing concern in this 21st century world.
The Dream Hotel (A Novel)
By: Laila Lalami
Narrated By: Frankie Corzo, Barton Caplan
Laila Lalami (Moroccan-American novelist, essayist, and professor, earned a PhD in linguistics, finalist for the Pulitzer Prize for “The Moor’s Account”.)
Laila Lalami imagines a “Brave New World” in which algorithms predict the probability of lethal criminal behavior. She creates a nation-state with a monitoring and detention system for every human who might commit a lethal crime. The growing collection of data about human thought and action lends her premise a level of truth and possibility.
Lalami creates a state that monitors, collates, and creates probability algorithms for human behavior.
To a degree, that state already exists. The difference is that in capitalist countries the algorithms are used to get people to buy things, while in authoritarian countries they are used to jail or murder people. One might argue America and most western countries are in the first category while Russia and North Korea are in the second.
Lalami’s detention system, like many bureaucratic organizations, is inefficient and bound by rules that defeat its ideal purpose.
A young mother named Hussein is returning from a business trip. She is detained because of data collected about where she has been, what she did on her trip, her foreign-sounding name, and the kind of relationship she has with her husband and twin children. An algorithm, built from a profile of her life, flags her with a number slightly over the probability threshold for someone who might kill her husband. Of course, this is ridiculous on its face. Whether she would ever murder her husband or not, her detention rests on inherent errors of behavioral prediction and bureaucratic confusion.
Every organization or bureaucracy staffed by human beings has a level of confusion and inefficiency that is compounded by information inaccuracy.
That does not make an organization bad or good, but it does mean that, like today’s American government’s bad decisions on foreign aid and FDA bureaucracy, it can throw the baby out with the bath water. Lalami’s point is that detention because of one’s name, family relationships, and a presumed propensity for murder, based on a digitized life, is absurd. Algorithms cannot predict or explain human behavior. At best, an algorithm offers a level of probability, but life is too complex to be measured by a fictive number created by an algorithm.
The author makes a point in “The Dream Hotel”, but her book is a tedious repetition of the risk of human digitization, a growing concern in this 21st-century world.
A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of scientific discovery and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.
Books of Interest Website: chetyarbrough.blog
Rethinking Medications (Truth, Power, and the Drugs You Take)
By: Jerry Avorn
Narrated By: Jerry Avorn MD
Jerry Avorn (Author, professor of medicine at Harvard Medical School where he received his MD, Chief Emeritus of the Division of Pharmacoepidemiology and Pharmacoeconomics)
Doctor Avorn enlightens listeners and readers about the drug industry’s costs, profits, and regulation. Avorn explains how money corrupts the industry and the FDA while also encouraging discovery of effective drug treatments. The costs, profits, and benefits of the industry revolve around research, discovery, medical efficacy, human health, ethics, and regulation.
Drug manufacture is big business.
Treatments for human maladies began in the dark ages, when little was known about the causes of disease and mental dysfunction. Cures ranged from spirit dances to herbal concoctions that allegedly expelled evil and either cured or killed their users. The FDA (Food and Drug Administration) did not take its present name until 1930, but its beginnings harken back to the 1906 Pure Food and Drug Act signed into law by Theodore Roosevelt. The FDA took on the role of reviewing scientific drug studies for treatments that could aid the public’s health. The importance of review was proven critical by incidents like that of 1937, when 107 people died from a sulfanilamide preparation found to be poisonous. From that 1937 event forward, the FDA required drug manufacturers to prove the safety of a drug before selling it to the public. The FDA began inspecting drug factories while demanding drug-ingredient labeling. However, Avorn illustrates how the FDA was seduced by Big Pharma to offer drug approvals based on flawed or undisclosed research reports.
Dr. Martin Makary (Dr. Makary was confirmed as Commissioner of the FDA on March 25, 2025, the agency’s 27th head. He is a British-American surgeon and professor.)
What Dr. Avorn reveals is how the FDA has either failed the public or been seduced by drug manufacturers into approving drugs that have not cured patients but have, in some cases, harmed or killed them. It will be interesting to see what Dr. Martin Makary can do to improve the FDA’s regulation of drugs. Avorn touches on court cases that have resulted in huge financial settlements paid by drug manufacturing companies and their stockholders. However, he notes the actual compensation received by individually harmed patients or families is minuscule relative to the size of the fines, not to mention the many billions of dollars the drug companies received before unethical practices were exposed. Avorn notes many FDA research and regulation incompetencies allowed drug companies to hoodwink the public about discovered but unrevealed drug side effects.
A few examples can be easily found in an internet search:
1) Vioxx (Rofecoxib), a painkiller, was withdrawn from the market in 2004 because it was linked to increased risk of heart attacks and strokes.
2) Fen-Phen (Fenfluramine/Phentermine), a weight-loss drug, was taken off the market in 1997 because of severe heart and lung complications.
3) Accutane, used to treat acne, was found to be linked to birth defects and was withdrawn from the U.S. market in 2009.
4) Thalidomide was found to cause birth defects; it was later repurposed for treatment of certain cancers.
5) A more recent FDA lapse was its failure to regulate opioids like OxyContin, which resulted in huge fines for manufacturers and distributors of the drug.
Lobbyists are hired by drug companies to influence politicians and gain support for drug company interests. In aggregate, the highest-spending lobbyists in the 3rd Qtr. of 2020 were in the medical industry.
Dr. Avorn argues Big Pharma’s lobbying power has unduly influenced the FDA to approve drugs that are not effective in treating patients for their diagnosed conditions. Avorn implies Big Pharma is more focused on increasing revenue than on effective review of manufacturer-supplied studies. Avorn argues the FDA has become too dependent on industry fees paid by drug manufacturers seeking expedited drug approvals. He suggests the FDA fails to demand more documentation from drug manufacturers about their research. The author suggests many approved opioids, cancer treatment drugs, and psychedelics have questionable effectiveness or safety concerns. Misleading or incomplete information provided by drug companies turns applications into a mere approval process rather than a fully relevant, studied assessment of the efficacy of new drugs.
Avorn is disappointed in the Trump administration’s selection of Robert Kennedy as the U.S. Secretary of Health and Human Services because of his lack of qualification.
The unscientific bias of Kennedy and Trump regarding vaccine effectiveness reinforces the likelihood that drug manufacturers’ fees will increasingly become just a revenue source for the FDA. Trump will likely reward Kennedy for decreasing the Department’s overhead by firing research scientists and increasing the revenues collected from drug manufacturers seeking drug approvals.
Trump sees and uses money as the only measure of value in the world.
It is interesting to note that Avorn is a Harvard professor, a member of one of the most prestigious universities in the world. Harvard is being denied government grants by the Trump administration, allegedly because of Harvard’s DEI policy. One is inclined to believe diversity, equity, and inclusion are ignored by Trump because he is part of the white ruling class in America. Trump chooses to stop American aid to the world to reduce the cost of government. American government decisions to starve the world and discriminate against non-whites are a return to the past that will have future consequences for America.
Next, Avorn writes about the high cost of drugs, particularly in the United States. Discoveries are patented in the United States to incentivize innovation, but drug companies game that Constitutional right by slightly modifying a drug’s formulation as their patent rights near expiration. They renew their patent and control the price of the slightly modified drug, which has the same curative qualities. As publicly held corporations, they are obligated to keep prices as high as the market allows. The consequence leaves many families at the mercy of their treatable diseases because they cannot afford the drugs that could help them.
Martin Shkreli, an American hedge fund manager and pharmaceutical executive who rose to infamy for buying drug rights and artificially raising their prices solely to increase revenues.
The free market system in America allows an investor to buy a drug patent and arbitrarily raise its price. Avorn suggests this is a correctable problem through fair regulation and a bargain in which drug companies accept price limits in return for government-sponsored research funding. Of course, there are some scientists, like Jonas Salk with the polio vaccine, who refused to privately patent a discovery because it had such great benefit to the health of the world.
Avorn notes that since the 1990s, drug costs in the U.S. have been out of control.
Only the rich are able to pay for the newest drugs, which can cost hundreds of thousands of dollars per year. Americans spend over $13,000 per year per person, while Europeans spend around $5,000 and residents of low-income countries under $500. These expenditures are meant to extend life, which one would think should make Americans live longest. Interestingly, America is not even in the top 10. Hong Kong’s average life expectancy is 85.77 years, Japan’s 85, and South Korea’s 84.53. The U.S. average life expectancy is 79.4. To a cynic like me, one might ask what 5 or 6 more years of life are really worth. On the other hand, billionaires and millionaires like Peter Thiel and Bryan Johnson have invested millions in anti-aging research.
Avorn reinforces the substance of Michael Pollan’s book “How to Change Your Mind” which reenvisions the value of hallucinogens in this century.
Avorn notes hallucinogens’ efficacy has been reborn in the 21st century to a level of medical and social acceptance. Avorn is a trained physician, as opposed to Pollan, who holds an M.A. in English rather than degrees in science or medicine.
In reviewing Avorn’s informative history, it is apparent that patients should be asking their doctors more questions about the drugs they are taking.
Drugs have side effects that can conflict with other drugs being taken. In this age of modern medicine, there are many drugs that can be effective, but they can also be deadly. Drug manufacturers treating drug creation as only a revenue producer is a bad choice for society.
Avorn’s history of the drug industry shows the failure in American medicine is more than the mistake of placing an incompetent in charge of U.S. health policy.
Taking money away from research facilities diminishes American innovation in medicine and other important sciences. However, research is only as good as the accuracy of its proof of efficacy for the treatment of disease and its fidelity to the Hippocratic oath of “First, do no harm.” A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of scientific discovery and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.
AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.
Books of Interest Website: chetyarbrough.blog
A Brief History of Artificial Intelligence (What It Is, Where We Are, and Where We Are Going)
By: Michael Wooldridge
Narrated By: Glen McCready
Michael Wooldridge (Author, British professor of Computer Science, Senior Research Fellow at Hertford College University of Oxford.)
Wooldridge served as President of the International Joint Conference on Artificial Intelligence from 2015 to 2017 and President of the European Association for Artificial Intelligence from 2014 to 2016. He received a number of A.I.-related service awards in his career.
Alan Turing (1912-1954, Mathematician, computer scientist, cryptanalyst, philosopher, and theoretical biologist.)
Wooldridge’s history of A.I. begins with Alan Turing, who holds the honorific title of “father of theoretical computer science and artificial intelligence”. Turing is best known for his role in breaking the German Enigma code in WWII; after the war he designed the Automatic Computing Engine, an early stored-program computer. He went on to propose the Turing test, which evaluates a machine’s ability to answer questions in a way that exhibits human-like behavior. Sadly, he is equally well known for being a publicly persecuted homosexual who committed suicide in 1954. He was 41 years old at the time of his death.
Wooldridge explains A.I. has had a roller-coaster history of highs and lows with new highs in this century.
Breaking the Enigma code is widely acknowledged as a game changer in WWII. It shortened the war and provided a strategic advantage to the Allied powers. However, Wooldridge notes enthusiasm for A.I. declined in the 1970s and 1980s because applications relied on laborious programming rules that introduced biases, ethical concerns, and prediction errors. Expectations of A.I.’s predictive power proved exaggerated.
The idea of a neuronal connection system was proposed in 1943 by Warren McCulloch and Walter Pitts.
In 1958, Frank Rosenblatt developed the “Perceptron”, a program based on McCulloch and Pitts’s idea that made computers capable of learning. However, it relied on a cumbersome training process that failed to give consistent results. After the 1980s, machine learning became more usefully predictive with Geoffrey Hinton’s development of backpropagation, i.e., the use of an algorithm to measure a network’s prediction errors and feed corrections back through its connections, improving A.I. predictions. Hinton went on to help develop neural networks in 1986 that worked like the synapse structure of the brain, but with far fewer connections. A limited neural network gave computers a capability for reading text and collating information.
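The gist of backpropagation can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not an example from Wooldridge’s book: a single artificial “neuron” makes a prediction, measures its error, and nudges its one weight in the direction that shrinks that error.

```python
# Minimal sketch of error-driven learning (the core idea behind
# backpropagation), reduced to a single neuron with one weight.

def train(samples, lr=0.1, epochs=100):
    """Fit y = w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x              # forward pass: make a prediction
            error = pred - y          # how wrong was the prediction?
            grad = 2 * error * x      # gradient of error**2 with respect to w
            w -= lr * grad            # correct the weight slightly
    return w

# Learn that y is roughly 3 * x from a few examples.
w = train([(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)])
print(round(w, 2))  # converges close to 3.0
```

In a real network, the same error signal is propagated backward through many layers of weights at once; that chain of corrections is what Hinton’s backpropagation made practical.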
Geoffrey Hinton (the “Godfather of AI” won the 2018 Turing Award.)
Then, in 2006, Hinton developed the Deep Belief Network, which led to deep learning with a type of generative neural network. Deeper networks offered more connections that improved computers’ image recognition, speech processing, and natural language understanding. Google later acquired deep learning expertise to complement its crawling and indexing of the internet. Fact-based decision-making and the accumulation of data paved the way for better A.I. utility and predictive capability.
Face recognition capability.
What seems lost in this history is the fact that all of these innovations were created by human cognition and creativity.
Many highly educated and inventive people, like Elon Musk, Stephen Hawking, Bill Gates, Geoffrey Hinton, and Yuval Harari, believe the risks of AI are a threat to humanity. Musk calls AI a big existential threat and compares it to summoning a demon. Hawking felt AI could evolve beyond human control. Gates has expressed concern about job displacement with long-term negative consequences and ethical implications that would harm society. Hinton believes AI could outthink humans and pose unforeseen risks. Harari believes AI could manipulate human behavior, reshape global power structures, and undermine governments.
All fears about AI have some basis for concern.
However, how good a job has society done throughout history without AI? AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.
Human nature will not change, but A.I. will not destroy humanity; it will ensure humanity’s survival and improvement.
Books of Interest Website: chetyarbrough.blog
Human Compatible (Artificial Intelligence and the Problem of Control)
By: Stuart Russell
Narrated By: Raphael Corkhill
Stuart Jonathan Russell (British computer scientist; studied physics at Wadham College, Oxford, receiving a first-class honors BA in 1982; moved to the U.S. and received a PhD in computer science from Stanford.)
Stuart Russell has written an insightful book about A.I. as it currently exists with speculation about its future. Russell in one sense agrees with Marcus’s and Davis’s assessment of today’s A.I. He explains A.I. is presently not intelligent but argues it could be in the future. The only difference between the assessments in Marcus’s and Davis’s “Rebooting AI” and “Human Compatible” is that Russell believes there is a reasonable avenue for A.I. to have real and beneficial intelligence. Marcus and Davis are considerably more skeptical than Russell about A.I. ever having the equivalent of human intelligence.
Russell suggests A.I. is at a point where gathered information changes human culture.
Russell argues A.I. information gathering is still too inefficient to give the world safe driverless cars but believes it will happen. There will come a point when driverless cars cause fewer highway deaths than cars under the control of their drivers.
A.I. will reach a point of information accumulation that will reduce traffic deaths.
After listening to Russell’s observation, one can conceive of something like a pair of glasses on a person’s face being used to gather information. That information could be automatically transferred, through improvements in Wi-Fi, to a computing device that would collate what a person sees into a database for individual human thought and action. The glasses would become a window of recallable knowledge for the wearer. A.I. becomes a tool of the human mind, which uses real-world data to choose what a human brain comprehends from his or her experience in the world. This is not exactly what Russell envisions, but the idea is born from what he argues is the potential of A.I. information accumulation. The human mind remains the seat of thought and action, helped by A.I., not directed or controlled by it.
Russell’s ideas about A.I. address the concerns Marcus and Davis have about intelligence remaining in the hands of humans, not a machine that becomes sentient.
Russell agrees with Marcus and Davis that the growth of A.I. carries risk. However, Russell goes beyond Marcus and Davis by suggesting the risk is manageable. Risk management rests on understanding that human action is based on knowledge organized to achieve objectives. If one’s knowledge is more comprehensive, thought and action are better informed, and objectives can be more precisely and clearly formed. Of course, there remains the danger of bad actors as A.I. advances, but that has always been the risk of those who have knowledge and power. The minds of a Mao, Hitler, Beria, Stalin, and other dictators and murderers of humankind will still be among us.
The competition and atrocities of humanity will not disappear with A.I. Sadly, A.I. will sharpen the dangers to humanity, but it will face equal resistance from others who are equally well informed. Humanity has managed to survive with less recallable knowledge, so why would humanity be lost with more? As has been noted many times in former book reviews, A.I. is, and always will be, a tool of human beings, not a controller.
The world will have driverless cars, robotically produced merchandise, and cultures based on A.I.’s service to others in the future.
Knowledge will increase the power and influence of world leaders to do both good and bad in the world. Human nature will not change, but A.I. will not destroy humanity. Artificial Intelligence will ensure human survival and improvement. History shows humanity has survived famine, pestilence, and war, with most cultures better off than when human societies came into existence.