AGI

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals.

Books of Interest
 Website: chetyarbrough.blog

How to Think About AI (A Guide for the Perplexed)

By: Richard Susskind

Narrated By:  Richard Susskind

Richard Susskind (Author, British IT adviser to law firms and governments; earned an LL.B. in Law from the University of Glasgow in 1983 and a doctorate in law and computing from Balliol College, Oxford.)

Richard Susskind is another historian of Artificial Intelligence. He extends the history of AI to its suggested next generation, Artificial General Intelligence (AGI), the idea that AI will continue to evolve until it can perform any intellectual task that a human can.

AI was officially founded in 1956 at a Dartmouth conference attended by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon; these men were the foundation of what became Artificial Intelligence. Conceptually, AI came from Alan Turing’s work before and during WWII, when his code-breaking machines cracked Germany’s Enigma cipher.

McCarthy and Minsky were computer and cognitive scientists; Rochester was an engineer who became an architect of IBM’s first commercial computer; Shannon and Turing were both mathematicians with an interest in cryptography and its application to code breaking.

Though not mentioned by Susskind, two women, Ada Lovelace and Grace Hopper, played roles in early computer creation (Lovelace as an algorithm creator for Charles Babbage in the 19th century, and Hopper as a Navy computer scientist whose compilers translated human-readable code into machine language).

Susskind’s history takes listener/readers to the next generation of AI with Artificial General Intelligence (AGI).

Susskind recounts the history of AI’s ups and downs. As noted in earlier book reviews, AI’s potential became known during WWII but went into hibernation after the war. Early computers lacked the processing capability to support complex AI models. The American federal government cut back on computer research for a time because expectations seemed unachievable given processing limitations, and AI research failed to deliver practical applications.

The invention of the transistor in the late 1940s and the microprocessor in the 1970s reinvigorated AI.

The transistor and the microprocessor addressed the processing limitations of earlier computers. John Bardeen, Walter Brattain, and William Shockley, working for Bell Laboratories, invented the transistor, which replaced bulky vacuum tubes and allowed smaller, more efficient electronic devices. In the 1970s Marcian “Ted” Hoff, Federico Faggin, and Stanley Mazor, working for Intel, integrated computing functions onto single chips and revolutionized computing. The world rediscovered the potential of AI with these improvements in power, and McCarthy and Minsky refined AI concepts and methodologies.

With the help of others like Geoffrey Hinton and Yann LeCun, the foundation of modern AI was reinvigorated by deep learning, image recognition, and processing that improved probabilistic reasoning. AI accelerates human decision-making. Susskind suggests the creation of Artificial General Intelligence (AGI) blurs the line between human and machine control of the future.

With AGI, there is the potential for loss of human control of the future.

Societal goals may be unduly influenced by machine learning that creates unsafe objectives for humanity. The pace of change in society would accelerate with AGI which may not allow time for human regulation or adaptation. AGI may accumulate biases drawn from observations of life and history that conflict with fundamental human values. If AGI grows to become a conscious entity, whatever “conscious” is, it presumably could become primarily interested in its own existence which may conflict with human survival.

Like agriculture, religion, humanist enlightenment, the industrial revolution, and technology before it, AGI has become an unstoppable cultural force.

Susskind argues for regulation of AGI. Is Artificial General Intelligence any different from other world-changing cultural forces? Yes and no. It is different because AGI has wider implications: AGI reshapes, and may replace, human intelligence. One possible solution, noted by Ray Kurzweil, is the melding of AI and human intelligence to make survival a common goal. Kurzweil suggests humans should go with the flow of AGI, just as they did with agriculture, religion, humanism, and industrialization.

Susskind suggests restricting AGI’s ability to act autonomously with shut-off mechanisms or restrictions on its access to human cultural institutions. He also suggests programming AGI with ethical constraints that align with human values and a rule of “do no harm,” like the Hippocratic Oath doctors take for their patients.

In the last chapters of Susskind’s book, several theories of human existence are identified. Maybe the world and the human experience of it are only creations of the mind, not nature’s reality. Perhaps what we see, feel, touch, and do exist in a “Matrix” of ones and zeros, and AGI is just what humans think they see, not what it is. Susskind speculates on virtual reality, developed by technology companies, becoming humans’ only reality.

AI and AGI are threats to humanity, but the threat is in the hands of human beings. As the difference between virtual reality and what is real becomes more unclear, the technology will be used by human beings who could, accidentally or out of prejudice or madness, destroy humanity. The same might be said of nuclear war, which is also in the hands of human beings. AI and AGI are not the threat. Conscious human beings are the threat.

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals. However, if science gives consciousness (whatever that is) to AI, all bets are off. The end of humanity may be in that beginning.

MEDICINE

A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of science discoveries and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.

Books of Interest
 Website: chetyarbrough.blog

Rethinking Medications (Truth, Power, and the Drugs You Take)

By: Jerry Avorn

Narrated By: Jerry Avorn MD

Jerry Avorn (Author, professor of medicine at Harvard Medical School where he received his MD, Chief Emeritus of the Division of Pharmacoepidemiology and Pharmacoeconomics)

Doctor Avorn enlightens listener/readers about the drug industry’s costs, profits, and regulation. Avorn explains how money corrupts the industry and the FDA even while it encourages discovery of effective drug treatments. The costs, profits, and benefits of the industry revolve around research, discovery, medical efficacy, human health, ethics, and regulation.

Drug manufacture is big business.

Treatments for human maladies began in the dark ages, when little was known about the causes of disease and mental dysfunction. Cures ranged from spirit dances to herbal concoctions that allegedly expelled evil and either cured or killed their users. The FDA (Food and Drug Administration) did not come into existence until 1930, but its beginnings hark back to the 1906 Pure Food and Drug Act signed into law by Theodore Roosevelt. The FDA took on the role of reviewing scientific drug studies for treatments that could aid the public’s recovery of health. The importance of review was proven critical by incidents like that of 1937, when 107 people died from Elixir Sulfanilamide, a drug found to be poisonous. From that 1937 event forward, the FDA required drug manufacturers to prove the safety of a drug before selling it to the public. The FDA began inspecting drug factories while demanding drug ingredient labeling. However, Avorn illustrates how the FDA was seduced by Big Pharma into offering drug approvals based on flawed or undisclosed research reports.

Dr. Martin Makary (Dr. Makary was confirmed as the new head of the FDA on March 25, 2025. He is the agency’s 27th commissioner. He is a British-American surgeon and professor.)

What Dr. Avorn reveals is how the FDA has either failed the public or been seduced by drug manufacturers into approving drugs that have not cured patients but have, in some cases, harmed or killed them. It will be interesting to see what Dr. Martin Makary can do to improve the FDA’s regulation of drugs. Avorn touches on court cases that have resulted in huge financial settlements paid by drug manufacturing companies and their stockholders. However, he notes the actual compensation received by individually harmed patients or families is minuscule relative to the size of the fines, not to mention the many billions of dollars the drug companies received before unethical practices were exposed. Avorn notes many FDA research and regulation incompetencies allowed drug companies to hoodwink the public about discovered but unrevealed drug side effects.

A few examples can be easily found in an internet search:

1) Vioxx (Rofecoxib), a painkiller, was withdrawn from the market in 2004 because it was linked to increased risk of heart attacks and strokes.

2) Fen-Phen (Fenfluramine/Phentermine), a weight-loss drug, was taken off the market in 1997 because of severe heart and lung complications.

3) Accutane, prescribed to treat acne, was found to be linked to birth defects; the brand was withdrawn in 2009.

4) Thalidomide was found to cause birth defects but was later repurposed for treatment of certain cancers.

5) A more recent failure is the FDA’s failure to regulate opioids like OxyContin, which resulted in huge fines for manufacturers and distributors of the drug.

Lobbyists are hired by drug companies to influence politicians on the companies’ behalf. In aggregate, the highest-spending lobbyists in the 3rd quarter of 2020 were in the medical industry.

Dr. Avorn argues Big Pharma’s lobbying power has unduly influenced the FDA to approve drugs that are not effective in treating patients for their diagnosed conditions. Avorn implies the FDA is more focused on increasing revenue than on effectively reviewing manufacturer-supplied studies. Avorn argues the FDA has become too dependent on industry fees paid by drug manufacturers asking for expedited drug approvals. Avorn implies the FDA fails to demand more documentation from drug manufacturers on their drug research. The author suggests many approved opioids, cancer treatment drugs, and psychedelics have questionable effectiveness or safety concerns. Drug companies provide misleading or incomplete information that turns applications into a rubber-stamp approval process rather than a fully relevant, studied evaluation of the efficacy of new drugs.

Avorn is disappointed in the Trump administration’s selection of Robert F. Kennedy Jr. as U.S. Secretary of Health and Human Services because of his lack of qualification.

The unscientific bias of Kennedy and Trump regarding vaccine effectiveness reinforces the likelihood that drug manufacturers’ fees will become just a revenue source for the FDA. Trump will likely reward Kennedy for decreasing the Department’s overhead by firing research scientists and for increasing the revenues collected from drug manufacturers seeking drug approvals.

Trump sees and uses money as the only measure of value in the world.

It is interesting to note that Avorn is a Harvard professor, a member of one of the most prestigious universities in the world. Harvard is being denied government grants by the Trump administration, allegedly because of Harvard’s DEI policy. One is inclined to believe diversity, equity, and inclusion are ignored by Trump because he is part of the white ruling class in America. Trump chooses to stop American aid to the world to reduce the cost of government. The American government’s decisions to starve the world and discriminate against non-whites are a return to the past that will have future consequences for America.

Next, Avorn writes about the high cost of drugs, particularly in the United States. Discoveries are patented in the United States to incentivize innovation, but drug companies are gaming that constitutional right by slightly modifying a drug’s manufacture when their patent rights near expiration. They renew their patent and control the price of the slightly modified drug, which has the same curative qualities. As publicly held corporations, they are obligated to keep prices as high as the market allows. The consequence leaves many families at the mercy of their treatable diseases because they cannot afford the drugs that could help them.

Martin Shkreli, the American investor who rose to fame and infamy for using hedge funds to buy drug rights and artificially raise prices solely to increase revenue.

The free market system in America allows an investor to buy the rights to a drug and arbitrarily raise its price. Avorn suggests this is a correctable problem with fair regulation and a requirement of affordable pricing in return for government-sponsored funding of drug research. Of course, there are some scientists, like Jonas Salk in the 1950s, who refused to privately patent the polio vaccine because it had such great benefit to the health of the world.

Avorn notes that since the 1990s, drug costs in the U.S. have been out of control.

Only the rich are able to pay for newer drugs that cost hundreds of thousands of dollars per year. Americans spend over $13,000 per person per year on health care, while Europeans spend around $5,000 and low-income countries under $500. These expenditures are meant to extend life, which one would think would make Americans live longest. Interestingly, America is not even in the top 10. Hong Kong’s average life expectancy is 85.77 years, Japan’s 85, South Korea’s 84.53. The U.S. average life expectancy is 79.4. To a cynic like me, one might ask what 5 or 6 more years of life are really worth. On the other hand, billionaires and millionaires like Peter Thiel and Bryan Johnson have invested millions in anti-aging research.

Avorn reinforces the substance of Michael Pollan’s book “How to Change Your Mind,” which re-envisions the value of hallucinogens in this century.

Avorn notes hallucinogens’ efficacy has been reborn in the 21st century to a level of medical and social acceptance. Avorn is a trained physician, as opposed to Pollan, a graduate with an M.A. in English rather than degrees in science or medicine.

In reviewing Avorn’s informative history, it is apparent that patients should be asking their doctors more questions about the drugs they are taking.

Drugs have side effects and can conflict with other drugs being taken. In this age of modern medicine, there are many drugs that can be effective, but they can also be deadly. Drug manufacturers treating drug creation only as a revenue producer is a bad choice for society.

Avorn’s history of the drug industry shows the failure of American medicine is more than the mistake of placing an incompetent in charge of American health policy.

Taking money away from research facilities diminishes American innovation in medicine and other important sciences. However, research is only as good as the accuracy of its proof of efficacy for the treatment of disease and the Hippocratic Oath of “First, do no harm”. A government designed to use public funds to pick winners and losers in the drug industry threatens human health. Only with the truth of science discoveries and honest reporting of drug efficacy can a physician offer hope for human recovery from curable diseases.

RISK/REWARD

AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

Books of Interest
 Website: chetyarbrough.blog

A Brief History of Artificial Intelligence (What It Is, Where We Are, and Where We Are Going)

By: Michael Wooldridge

Narrated By: Glen McCready

Michael Wooldridge (Author, British professor of Computer Science, Senior Research Fellow at Hertford College University of Oxford.)

Wooldridge served as President of the International Joint Conference on Artificial Intelligence from 2015 to 2017 and President of the European Association for AI from 2014 to 2016. He has received a number of A.I.-related service awards in his career.

Alan Turing (1912-1954, Mathematician, computer scientist, cryptanalyst, philosopher, and theoretical biologist.)

Wooldridge’s history of A.I. begins with Alan Turing, who holds the honorific title of “father of theoretical computer science and artificial intelligence.” Turing is best known for his role in breaking the German Enigma code in WWII through the development of the Bombe, an electromechanical code-breaking machine. He went on to propose the Turing test, which evaluates a machine’s ability to answer questions in a way that exhibits human-like behavior. Sadly, he is equally well known for being a publicly persecuted homosexual who committed suicide in 1954. He was 41 years old at the time of his death.

Wooldridge explains A.I. has had a roller-coaster history of highs and lows with new highs in this century.

Breaking the Enigma code is widely acknowledged as a game changer in WWII; it shortened the war and provided strategic advantage to the Allied powers. However, Wooldridge notes computer utility declined in the 70s and 80s because applications relied on laborious programming rules that introduced biases, ethical concerns, and prediction errors. Expectations of A.I.’s predictive power proved exaggerated.

The idea of a neuronal connection system was conceived in 1943 by Warren McCulloch and Walter Pitts.

In 1958, Frank Rosenblatt developed the “Perceptron,” a program based on McCulloch and Pitts’s idea that made computers capable of learning. However, this was a cumbersome process that failed to give consistent results. After the 80s, machine learning became more usefully predictive with Geoffrey Hinton’s development of backpropagation, i.e., an algorithm that feeds prediction errors back through a network to correct its connections and improve A.I. predictions. Hinton went on to develop a neural network in 1986 that worked like the synapse structure of the brain but with far fewer connections. A limited neural network for computers led to a capability for reading text and collating information.

Geoffrey Hinton (the “Godfather of AI” won the 2018 Turing Award.)

Then, in 2006, Hinton developed the Deep Belief Network, which led to deep learning with a type of generative neural network. Neural networks offered more connections that improved computer capability in image recognition, speech processing, and natural language understanding. In the 2010s, Google acquired a deep learning company and combined its techniques with Google’s crawled and indexed internet data. Fact-based decision-making and the accumulation of data paved the way for better A.I. utility and predictive capability.

Face recognition capability.

What seems lost in this history is the fact that all of these innovations were created by human cognition and creation.

Many highly educated and inventive people, like Elon Musk, Stephen Hawking, Bill Gates, Geoffrey Hinton, and Yuval Harari, believe the risks of AI are a threat to humanity. Musk calls AI a big existential threat and compares it to summoning a demon. Hawking felt AI could evolve beyond human control. Gates expressed concern about job displacement having long-term negative consequences and ethical implications that would harm society. Hinton believes AI could outthink humans and pose unforeseen risks. Harari believes AI could manipulate human behavior, reshape global power structures, and undermine governments.

All fears about AI have some basis for concern.

However, how good a job has society done throughout history without AI? AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

FUTURE A.I.

Human nature will not change, but A.I. will not destroy humanity; it will ensure humanity’s survival and improvement.

Books of Interest
 Website: chetyarbrough.blog

Human Compatible (Artificial Intelligence and the Problem of Control)

By: Stuart Russell

Narrated By: Raphael Corkhill

Stuart Jonathan Russell (British computer scientist; studied physics at Wadham College, Oxford, receiving a BA with first-class honours in 1982; moved to the U.S. and received a PhD in computer science from Stanford.)

Stuart Russell has written an insightful book about A.I. as it currently exists, with speculation about its future. Russell in one sense agrees with Marcus’s and Davis’s assessment of today’s A.I. He explains A.I. is presently not intelligent but argues it could be in the future. The difference between the assessments in Marcus and Davis’s “Rebooting AI” and Russell’s “Human Compatible” is that Russell believes there is a reasonable avenue for A.I. to attain real and beneficial intelligence. Marcus and Davis are considerably more skeptical than Russell about A.I. ever having the equivalent of human intelligence.

Russell infers A.I. is at a point where gathered information changes human culture.

Russell argues A.I. information gathering is still too inefficient to give the world safe driverless cars but believes it will happen. There will come a point when driverless cars cause fewer deaths on the highway than cars under the control of their drivers.

A.I. will reach a point of information accumulation that will reduce traffic deaths.

After listening to Russell’s observation, one conceives of something like a pair of glasses on a person’s face being used to gather information. That information could be automatically transferred, through improvements in Wi-Fi, to a computing device that collates what a person sees into a database for individual human thought and action. The glasses would become a window of recallable knowledge for the wearer. A.I. becomes a tool of the human mind, using real-world data to inform what a human brain comprehends from his or her experience of the world. This is not exactly what Russell envisions, but the idea is born from what he argues is the potential of A.I. information accumulation. The human mind remains the seat of thought and action with the help of A.I., not under A.I.’s direction or control.

Russell’s ideas about A.I. address Marcus’s and Davis’s concern that intelligence remain in the hands of humans, not a machine that becomes sentient.

Russell agrees with Marcus and Davis that the growth of A.I. carries risk. However, Russell goes beyond Marcus and Davis by suggesting the risk is manageable. Risk management rests on understanding that human action is based on knowledge organized to achieve objectives. If one’s knowledge is more comprehensive, thought and action are better informed, and objectives can be more precisely and clearly formed. Of course, the danger of bad actors remains with the advance of A.I., but that has always been the risk posed by those with knowledge and power. Minds like those of Mao, Hitler, Beria, Stalin, and other dictators and murderers of humankind will still be among us.

The competition and atrocities of humanity will not disappear with A.I. Sadly, A.I. will sharpen the dangers to humanity, but with equal resistance from others who are equally well informed. Humanity has managed to survive with less recallable knowledge, so why would humanity be lost with more? As noted many times in former book reviews, A.I. is, and always will be, a tool of human beings, not a controller.

The world of the future will have driverless cars, robotically produced merchandise, and cultures based on A.I. service to others.

Knowledge will increase the power and influence of world leaders to do both good and bad in the world. Human nature will not change, but A.I. will not destroy humanity. Artificial Intelligence will ensure human survival and improvement. History shows humanity has survived famine, pestilence, and war, with most cultures better off than when human societies came into existence.

THINKING

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

Books of Interest
 Website: chetyarbrough.blog

Rebooting AI (Building Artificial Intelligence We Can Trust)

By: Gary Marcus and Ernest Davis

Narrated By: Kaleo Griffith

These two academics explain much of the public’s misunderstanding of the current benefit and threat of Artificial Intelligence.

Marcus and Davis note that A.I. cannot read and does not think but only repeats what it is programmed to report.

They are not suggesting A.I. is useless but that its present capabilities are much more limited than the public believes. In terms of product search and economic benefit to retailers, A.I. is a gold mine. But A.I.’s abilities to safely move human beings in self-driving cars, free humanity from manual labor, or predict cures for the diseases of humanity are far in the future. A.I. is a just-born baby.

Self-driving cars, robot servants, and cures for medical maladies remain works in process for Artificial Intelligence.

Marcus and Davis note A.I.’s usefulness remains fully dependent on human reasoning. It is a tool for recall of documented information and for repetitive work. A.I. is not sentient or capable of reasoning based on the information in its memory. Lacking reasoning capability, it answers questions based on whatever information has been fed to it, reciting responses from programmed information in its memory. If sources of programmed information conflict, the answers one receives from A.I. may be right, wrong, conflicted, or unresponsive. One can as easily get a wrong answer from A.I. as a right one because it is only repeating what it has gathered from the past.

What Marcus and Davis show is how important it is that questions asked of Microsoft’s Copilot, ChatGPT, Watson, or some other A.I. platform be phrased carefully.

The value of A.I. is that it can help one recall pertinent information, but only if questions are precisely worded. It is a valuable supplement to human memory, but it is not a reasoned or infallible resource.

Marcus and Davis explain “Deep Learning” is not a substitute for human reasoning, but it is a supplement that offers more precise recall of recorded information.

Even multilayered neural networks, like those of deep learning, which attempt to mimic human reasoning by finding patterns in raw data, can be wrong or confused. One is reminded of the Socratic belief: “I know that I know nothing.” Truth is always hidden within a search for meaning, i.e., a gathering of information.

The true potential of A.I. is in its continued consumption of all sources of information to respond to queries based on a comprehensive base of information. The idea of an A.I. that can read, hear, and collate all the information in the world is at once frightening and thrilling.

The risk is the loss of human freedom. The reward is the power of understanding. However, the authors explain there are many complications before A.I. can usefully capitalize on all the information in the world. Information has to be understood in the context of its contradictions, its ethical consequences, its biases, and the inherent unpredictability of human behavior. Even with knowledge of all the information in the world, decisions based on A.I. do not ensure the future of humanity. Should humanity trust A.I. to recommend what is in its best interest based only on past knowledge?

Marcus and Davis argue A.I. is not, does not, and will not think.

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

MEDIA SELF-INTEREST

One may question Wynn-Williams’s characterization of Facebook’s “Careless People,” who seem more like calculating, self-interested managers than careless employees.

Books of Interest
 Website: chetyarbrough.blog

Careless People (A Cautionary Tale of Power, Greed, and Lost Idealism)

By: Sarah Wynn-Williams

Narrated By: Sarah Wynn-Williams

Sarah Wynn-Williams (Author, ex-Meta executive, presently barred from criticizing Meta, formerly known as Facebook.)

As noted in the subtitle of “Careless People,” Meta (formerly known as Facebook) is criticized as an international influencer of society that has lost its sense of ethics, i.e., the ability to see the difference between right and wrong. Facebook was originally intended to be a forum connecting people interested in sharing ideas, communicating with others, and building positive social connection. Instead, the author’s experience as a Facebook executive found that expansion, profit, and political influence became an unethical pursuit of the major shareholders (particularly Mark Zuckerberg) and managers of the corporation. She argues the leadership of Facebook recklessly pursued income, expansion, and political influence around the world with little ethical oversight.

New Zealand (The birthplace of Sarah Wynn-Williams)

Ms. Wynn-Williams was born in New Zealand but went to work for Facebook and became a U.S. citizen. Her work at Facebook led to a promotion to Director of Global Public Policy, which provided the opportunity to travel the world soliciting business for Facebook in other countries. Her experience informs listeners of how Meta’s corporate goal, to “give people the power to build community and bring the world closer together,” became something less as a result of careless management oversight.

Wynn-Williams begins with a story of a harrowing trip to Myanmar during its period of military-dominated rule.

Wynn-Williams had an audience to pitch the Facebook platform to Myanmar’s military-dominated government. Just getting to the building where the meeting was to be held was a trial, but her position as a representative of Facebook got her into a headquarters building of the regime. It is an interesting story because it shows the power of the Facebook name in a country controlled by its military. After the 2021 coup d’état ended civilian rule, millions of Myanmar citizens were displaced amid widespread human rights abuses, civilian arrests, and violence. One wonders what “giving people the power to build community” means in what became a military totalitarian state. (When visiting the Baltics last year, our guide expressed a love for Myanmar and its citizens but was told by Myanmar friends that it has been unsafe to visit since the coup.)

Williams worked directly with Sheryl Sandberg, the COO of Facebook.

Later, Wynn-Williams describes a meeting with a Japanese official, to which she and Sandberg went to promote interest in Facebook, which had not yet been part of the Japanese media environment. Wynn-Williams’s involvement was primarily to support Sandberg’s pitch. She indicates Sandberg was quite complimentary of her assistance after the meeting, which gives context to their relationship. A subsequent description of Sandberg’s strong, sometimes harsh, personality and influence on Facebook employees is given by Wynn-Williams. The Japan meeting was successful: Facebook entered the market in 2010. Its popularity is said to have declined, with Instagram and LINE now the dominant platforms, but Facebook maintains a presence in the country.

The world's interconnectedness is both a boon and a bane for 21st-century society.

The pandering of Zuckerberg, Bezos, Musk, Cook, and Pichai to world governments is made suspect by Williams's experience as an employee of Facebook. Media companies have become too big to fail and too ungovernable to manage. Even though the internet more intimately connects the world, the platforms of today's information giants create a forum for control and conflict rather than a place that encourages social comity.

Robert Kaplan (Author of “Waste Land”.)

As noted by Robert Kaplan in "Waste Land", the decline of Russia's, China's, and America's governments has been accelerated by the world's interconnectedness. It appears from Williams's experience at Facebook that there is some truth in Kaplan's observation. Kaplan's solution is to dismantle these giants and encourage competition to dilute their principal stockholders' influence.

As the Turkish saying goes, "a fish rots from the head down." Williams's frequent contact with Mark Zuckerberg gives weight to her view of Facebook culture. Mr. Zuckerberg seems to carelessly lead Meta into the arena of politics by promoting Facebook's media clout to political parties because it raises revenues with political advertising and influences government policy on media regulation. Frighteningly, Williams notes Zuckerberg considered running for President with the power of Meta to support his candidacy. One may question Williams's characterization of Facebook's "Careless People"; they seem more like calculating, self-interested managers than careless employees.

AMERICAN HOPE

From Fukuyama’s intellectual musing to our eyes and ears, one hopes he is correct about America’s future in the technological age.

Books of Interest
 Website: chetyarbrough.blog

The Great Disruption (Human Nature and the Reconstitution of Social Order)

By: Francis Fukuyama

Francis Fukuyama (Author, political scientist, political economist, international relations scholar.)

Francis Fukuyama argues America is at the threshold of a social reconstitution. Fukuyama believes we are at Gladwell's "Tipping Point" that is changing social norms and rebuilding America's social order. He argues technological innovation, like the industrial revolution, is deconstructing social relationships and economics while reconstructing capitalist democracy.

Big technology companies like Amazon, Google, and Facebook have outsized influence on American society. They change the tone of social interaction through their ability to disseminate both accurate and misleading information. They erode privacy and create algorithms tailored to disparate interest groups that polarize society. The media giants' objective is to increase clicks on their platforms to attract more advertisers who pay for public exposure of their services, merchandise, and brands.

To reduce the outsized influence of big tech companies, Fukuyama suggests more technology may be an answer.

More antitrust measures should be instituted by the government to break monopolistic practices and encourage competition among large technology companies. Algorithms created by government oversight organizations can ensure transparency and reduce harmful content, reducing big tech companies' influence on society. (One doubts expansion of government agencies is a likely scenario in today's government.)

On the one hand, technology has improved convenience, communication, and a wider distribution of information.

On the other, technology has flooded society with misinformation, invaded privacy, and polarized society. Technology has created new jobs while automation increases the loss of traditional industry jobs. Trying to return to past labor-intensive manufacturing is a fool's errand in the age of technology.

Luddites during the Industrial Revolution.

Like the industrial revolution, the tech revolution's social impact is mixed, with potential for greater social isolation and job displacement, along with wide distribution of misinformation. The positives of new technology are improvements in healthcare products and services, renewable energy, and climate understanding with potential for improved control.

Face-to-face interactions become less and less necessary. Children's access to technology impacts parental supervision and relationships. Fukuyama suggests setting boundaries for technology use needs to be a priority in American families. Technology can open the door to better education, but it is also a source of misinformation from the internet. Employers have the opportunity to help with work-life balance by encouraging flexible hours and remote work. (Oddly, that suggestion is being undermined by the current government administration and many American companies.)

Economic growth, access to information, and global connectivity have been positively impacted by technology. However, the concentration of power, misinformation, and surveillance by social media have diminished privacy and eroded individual freedom. Technology cuts both ways for democratic capitalism.

The good lies in increased efficiency, innovation, and the creation of new markets through globalization. However, today's American government shows how tariffs are a destroyer of globalization. Fukuyama implies A.I. and automation are displacing workers and aggravating economic inequality because their true potential is misunderstood and misused. Personal data is used to manipulate consumers in ways that challenge the balance between corporations and consumers.

Fukuyama argues private parties in America will grow to create software that filters and customizes online services.

With that effort, the influence of big tech companies will be diminished. With decentralization of big tech power and influence, society will theoretically become less polarized and more consensus-oriented. The capitalist opportunity for tech-savvy startups that diminish the influence of big tech companies will re-create the diversification the matured industrial revolution gave to new manufacturers. Like Standard Oil and other conglomerates of the industrial revolution, businesses like Amazon, Google, and Facebook will face competition that diminishes their power and influence.

American government will grow to regulate the internet just as it has grown to regulate banks, industries, and social services.

Service to citizens will become a bigger part of the economy as a replacement for manufacturing. Family life will re-invent itself as a force in society because time saved from manufacturing products can be spent improving human relationships.

From Fukuyama’s intellectual musing to our eyes and ears, one hopes he is correct about America’s future in the technological age.

AI REGULATION

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.

Books of Interest
 Website: chetyarbrough.blog

The Coming Wave

By: Mustafa Suleyman with Michael Bhaskar

Narrated By: Mustafa Suleyman

This is a startling book about AI because it is written by an AI entrepreneur, the co-founder of DeepMind and its former head of applied AI, who is now the CEO of Microsoft AI. What the authors argue is not understood by many who discount the threat of AI. They explain AI can collate information to create societal solutions, as well as threats, that are beyond the thought and reasoning ability of human beings.

“The Coming Wave” is startling because it is written by two authors who have an intimate understanding of the science of AI.

They argue it is critically important for AI research and development to be internationally regulated with the same seriousness that accompanied the research and use of the atom bomb.

Those who have read this blog know this writer's perspective: AI, whether it carries greater risk than the atom bomb or not, is a tool, not a controller, of humanity. The AI threat example given by Suleyman and Bhaskar is that AI could invent a genetic modification that could as easily destroy as improve humanity. Recognizing AI's danger is commendable, but as with the atom bomb, there will always be a threat of miscreant nations or radicals using a nuclear device or AI to initiate Armageddon. Obviously, if AI is the threat they suggest, there needs to be an antidote. The last chapters of "The Coming Wave" offer their solution. The authors suggest a 10-step program to regulate or ameliorate the threat of AI's misuse.

Like alcoholism treatment and nuclear deterrence, Suleyman's program will be only as effective as those who choose to follow the rules.

There are no simple solutions for regulation of AI and as history shows neither Alcoholics Anonymous (AA) nor the Treaty on the Prohibition of Nuclear Weapons (TPNW) has been completely successful.

Suleyman suggests the first step in regulating AI begins with creating safeguards for the vast LLM capabilities of Artificial Intelligence.

This will require hiring technicians to monitor and correct incorrect or misleading information accumulated and distributed by AI users. The concern of many will be the restriction on "freedom of speech." Two additional concerns are the cost of such a bureaucracy and who monitors the monitors. Who draws the line between fact and fiction? When does information deletion become a distortion of fact? This bureaucracy would be responsible for auditing AI models to understand their capabilities and limitations.

A second step is to slow the pace of AI development by controlling the sale and distribution of AI hardware components, providing more time to review the impacts of new developments.

With lucrative incentives for new AI capabilities in a capitalist system, there is likely to be a lot of resistance from aggressive entrepreneurs and free-trade and free-speech believers. Leaders in authoritarian countries will be equally incensed by interference in their right to rule.

Transparency is a critical part of the vetting process for AI development.

Suleyman suggests critics need to be involved in new developments to balance greed and power against utilitarian value. There has to be an ethical examination of AI that goes beyond profitability for individuals or control by governments. The bureaucracies for development, review, and regulation should be designed to adapt, reform, and implement regulations to manage AI technologies responsibly. These regulations should be established through global treaties and alliances among all nations of the world.

Suleyman acknowledges this is a big ask and notes there will be many failures in getting cooperation or adherence to AI regulation.

That is and was true of nuclear armament, and since WWII no nuclear weapon has been used to attack another country. The authors note there will be failures in trying to institute these guidelines, but with public awareness and grassroots support, there is hope for the greater good that can come from AI.

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.

BELIEF

Extending Harari’s idea of biophysics research and algorithmic programming suggests a potential for immense changes in society. A singularity that melds A.I. with human brain function and algorithmic programming may be tomorrow’s world revolution. Of course, that capability cuts both ways, i.e., for the good and bad of society.

Books of Interest
 Website: chetyarbrough.blog

Homo Deus (A Brief History of Tomorrow)

By: Yuval Noah Harari

Narrated By: Derek Perkins

Yuval Noah Harari (Author, Israeli medievalist, military historian, science writer.)

By any measure, Yuval Noah Harari is a well-educated and insightful person who will offend some and enlighten others with his opinion about religion, spirituality, the nature of human beings, and the future. He implies the Bible is a book of fiction that is historically proven to have been written by different authors with contradictions that only interpreters can reconcile as God’s work.

“Homo Deus” is a spiritual book suggesting humanity is on its own and has a chance to survive the future but only through the ability of human understanding and effort.

To Harari, the greatest threats to society are national leaders who believe in God, heaven, and eternal life, and who discount human existence and the use of science to improve human life on earth. The irony of Harari’s belief is that humanist leaders are the only hope for human life’s survival.

Harari argues science, free enterprise, and the growth of knowledge offer the best hope for the future of human life.

Neither capitalism nor communism is a guarantee of survival because of the increasing potential for error as human beings become more God-like. Advances in engineering, artificial intelligence, and biotechnology may replace the happenstance of human birth. The value of free enterprise is evident in the agricultural, industrial, and technological revolutions of history. However, as science improves the understanding of the mind and body of human beings, the technology of biogenetics offers hope for the future while running the risk of biological error with unforeseen consequences.

Harari’s book is the brave new world written about by Shakespeare in the 17th century and reimagined by Aldous Huxley in his 1932 dystopian novel “Brave New World”.

On the one hand, Shakespeare offers a positive spin as his character Miranda sees people from outside her experience and says, “How beauteous mankind is! O brave new world, That has such people in’t!” Huxley, by contrast, depicts a future society that is conformist and lacks individuality and human emotion. Which way society will turn is unknown.

The conformist demands of collective ownership of property and means of production by communism impede creativity. Capitalism is more creative and dynamic. However, capitalist incentive raises the specter of human nature that only sees financial gain without any concern for environmental or human cost. On balance, capitalism appears more likely to accelerate technology because communism more often follows than changes scientific direction.

The growth of knowledge comes from science and exploration of the unknown, but its use can be destructive as well as constructive.

Some think A.I. will lead the world to greater knowledge and prosperity while others believe it will destroy human life. A skeptic might suggest both views are wrong because A.I. is only a tool for recalling knowledge of the past to help humans make better decisions for the future. The real risk, as it has always been, is human leadership.

Harari believes, like Nietzsche, that God is dead because belief in God is losing its power and significance in the modern world.

Though many still believe in God, it seems more people are viewing God as a myth. The Pew Research Center reports a median of 45% of people across 34 countries still believe in God. However, the variation is wide with Brazil saying 70% believe while in Japan the percentage is only 20%. Harari implies belief in God is in decline.

Harari explains biophysics illustrates that human thought is algorithmic. He argues our thoughts, decisions, and behaviors can be understood as the result of patterns created in human brains that are pre-determined. There is no “free will” in Harari’s opinion. This is not to suggest aberrant behavior does not exist, but that human thought and action are determined by our experientially defined brain in the same way a computer is programmed. Experience from birth to adulthood is just part of a mind’s programming.

Harari implies understanding of brain function will change the world as massively as the Agricultural, Industrial, and technological revolutions.

Harari goes on to suggest humans have never been singular beings, but a multitude of beings split into two brains that mix and match their biogenetic and biochemical programming to think and act in pre-determined ways. Experiments have shown that the way the left half of a human brain sees and compels action is different from how the right brain sees and compels action. Each half thinks and acts independently while negotiating a concerted action when both halves are functioning normally. That negotiation between the two brain halves results in an algorithm for action based on the biochemical nature of the brain. The way the two halves of the brain interact multiplies the person we are or will become.

Extending Harari’s idea of biophysics research and algorithmic programming suggests a potential for immense changes in society. A singularity that melds A.I. with human brain function and algorithmic programming may be tomorrow’s world revolution. Of course, that capability cuts both ways, i.e., for the good and bad of society. Interestingly, Harari paints a grim picture of the future based on an A.I. revolution.

WORRY OR NOT

Artificial intelligence is an amazing tool for understanding the past, but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry, but it does not think like a human being.

Books of Interest
 Website: chetyarbrough.blog

Genesis (Artificial Intelligence, Hope, and the Human Spirit) 

By: Henry A. Kissinger, Eric Schmidt, Craig Mundie

Narrated By: Niall Ferguson, Byron Wagner

The authors: Henry Kissinger (former Secretary of State, who died in 2023), Eric Schmidt (former CEO of Google), and Craig Mundie (a Senior Advisor to the CEO of Microsoft).

“Genesis” is these three authors’ view of the threats and benefits of artificial intelligence. Though Kissinger was near the end of his life when his contribution to the book was made, his co-authors acknowledge his prescient understanding of the A.I. revolution and what it means for world peace and prosperity.

On the one hand, A.I. threatens civilization; on the other, it offers a lifeline that may rescue civilization from global warming, nuclear annihilation, and an uncertain future. To this book reviewer, A.I. is a tool in the hands of human beings that can turn human decisions toward the good of humanity or its opposite.

A.I. gathers all the information in the known world, answers questions, and offers predictions based on human information recorded in the world’s past. It is not thinking but simply recalling the past with clarity beyond human capability. A.I. compiles everything originally noted by human beings and collates that information to offer a basis for future decisions. Comprehensive information is not an infallible guide to the future. The future is and always will be determined by humans, limited only by human judgement, decision, and action.

The danger of A.I. remains in the thinking and decisions of humans, which have often been right but sometimes horribly wrong. One does not have to look far to see our mistakes with war, discrimination, and inequality. In theory, A.I. will improve human decision making, but good and bad decisions will always be made by humans, not by machines driven by Artificial Intelligence. A.I.’s threat lies in its use by humans, not in its infallible recall and probabilistic analysis of the past. Our worry about A.I. is justified, but only because it is a tool of fallible human beings.

Artificial intelligence is an amazing tool for understanding the past, but its utility for the future is totally dependent on its use by human beings. A.I. may be a tool for planting the seeds of agriculture or operating the tools of industry, but it does not think like a human being. The limits of A.I. are the limits of human thought and action.

The authors conclude the genie cannot be put back in the bottle. A.I. is a danger, but it is a humanly manageable danger that is a part of human life.

The risk is in who the decision maker is when A.I. correlates historical information with proposed action. The authors imply the risk is in human fallibility, not artificial intelligence.