AI & HEALTH

Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.

Books of Interest
 Website: chetyarbrough.blog

Deep Medicine (How Artificial Intelligence Can Make Healthcare Human Again)

Author: Eric Topol

Narrated By:  Graham Winton

Eric Topol (Author, American cardiologist, scientist, founder of Scripps Research Translational Institute.)

Eric Topol is what most patients want to see in a Doctor of Medicine. “Deep Medicine” should be required reading for students wishing to become physicians. One suspects Topol’s view of medicine is as empathetic as it is because of his personal chronic illness. His experience as both patient and physician gives him an insightful understanding of medical diagnosis, patient care, and treatment.

Topol explains how increasingly valuable Artificial Intelligence has become in diagnosing and treating illness and in maintaining human health.

AI opens the door for improved diagnosis and treatment of patients. A monumental caveat to AI’s potential is its exposure of personal medical history not only to physicians but to governments and businesses. Governments and businesses invariably have agendas that may conflict with one’s personal health and welfare.

Topol notes China is ahead of America in cataloging citizens’ health because of their data collection and AI’s capabilities.

Theoretically, every visit to a doctor can be precisely documented with an AI system. The good of that system would be improved continuity of medical diagnosis and treatment. The risk is that it can be exploited by governments and businesses wishing to control or influence a person’s life. One is left with a concern about protecting oneself from a government or business that has access to citizen information. In the case of government, it is power exercised over freedom. Both governments and businesses can use AI information to influence human choice. With detailed knowledge of what one wants, needs, or is undecided about, a person can be manipulated through the personal information AI accumulates.

Putting loss of privacy and “Brave New World” negatives aside, Topol explains the potential of AI to immensely improve human health and wellness.

Cradle-to-grave information on human health would aid research, the treatment of illnesses, and cures for present and future patients. Topol gives the example of collecting biometric health information that can reveal the secrets of an ideal diet for better health during one’s life. He explains that every person has a unique biometric system that metabolizes food in different ways; some foods may be harmful to some people and not others because of the way their bodies metabolize what they eat. It is possible to design diets to the specifications of one’s unique digestive system, improving health and avoiding foods that are not healthily metabolized by one’s body. An AI could be devised to analyze individual biometrics and recommend more healthful diets and more effective medicines for users of the system.
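The idea of matching foods to an individual’s metabolic response can be sketched as a toy scoring model. This is a hypothetical illustration (the foods, response numbers, and threshold are all invented), not Topol’s actual system:

```python
# Hypothetical sketch: rank foods for one person by a measured
# metabolic response (e.g., post-meal glucose rise). All values
# below are invented for illustration only.

def recommend_foods(responses, threshold=2.0):
    """Return foods whose measured response stays under the
    threshold, best (lowest response) first."""
    safe = [(food, r) for food, r in responses.items() if r < threshold]
    return [food for food, _ in sorted(safe, key=lambda pair: pair[1])]

# Two people can metabolize the same foods very differently.
person_a = {"white rice": 3.1, "lentils": 1.2, "banana": 1.8}
person_b = {"white rice": 1.4, "lentils": 2.6, "banana": 2.2}

print(recommend_foods(person_a))  # ['lentils', 'banana']
print(recommend_foods(person_b))  # ['white rice']
```

The point of the sketch is only that the same food list yields different recommendations per person, which is the personalization Topol describes.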

In addition to improvements in medical imaging and diagnosis with AI, Topol explains how medicine and treatments can be personalized based on biometric analysis, showing how medications can be optimized for specific patients in a customized way. Every patient is unique in the way they metabolize food and drugs. AI offers the potential for customization that maximizes recovery from illness, infection, or disease.

Another growing AI metric is the measurement of an individual’s physical well-being. Monitoring one’s vital signs is becoming common with Apple Watches, which accumulate information that can be monitored and managed for healthful living. One can begin to improve one’s health with more information about pulse and blood pressure. Instantaneous reports may warn people of risks, backed by an accumulated record of healthful exercise levels and an exerciser’s recovery times.
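The kind of alerting a wearable enables can be shown with a minimal sketch, assuming an invented healthy pulse range rather than any real device’s API:

```python
# Hypothetical sketch of a wearable-style alert: flag pulse
# readings that drift outside a personal baseline range.
# The range bounds here are invented for illustration.

def flag_readings(pulses, low=50, high=100):
    """Return (index, pulse) pairs outside the healthy range."""
    return [(i, p) for i, p in enumerate(pulses) if p < low or p > high]

day_of_pulses = [62, 71, 110, 68, 48, 75]
print(flag_readings(day_of_pulses))  # [(2, 110), (4, 48)]
```

A real system would personalize the bounds from accumulated history, which is exactly the record-keeping Topol sees as the value of continuous monitoring.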

Marie Curie (Scientist, chemist, and physicist who played a crucial role in developing x-ray technology, received 2 Nobel Prizes, died at the age of 66.)

Topol offers a number of circumstances where AI has improved medical diagnosis and treatment. He notes how AI analysis of radiological imaging improves diagnosis of bodily abnormalities through its relentless review of past imaging, a corpus beyond the knowledge or memory of experienced radiologists. Topol cites a number of studies showing AI reads radiological images better than experienced radiologists.

One wonders if AI is a Hobson’s choice or a societal revolution.

One wonders if AI is a Hobson’s choice or a societal revolution greater than the discovery of agriculture (10000 BCE), the rise of civilization (3000 BCE), the Scientific Revolution (16th to 17th century), the Industrial Revolution (18th to 19th century), the Digital Revolution (20th to 21st century), or Climate Change in the 21st century. Like Climate Change, AI seems an inevitable change that will collate, spindle, and mutilate life whether we want it to or not. The best humans can do is adopt and adapt to the change AI will make in human life. It is not a choice but an inevitability.

AGI

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals.

Books of Interest
 Website: chetyarbrough.blog

How to Think About AI (A Guide for the Perplexed)

By: Richard Susskind

Narrated By:  Richard Susskind

Richard Susskind (Author, British IT adviser to law firms and governments, earned an LL.B. in Law from the University of Glasgow in 1983, and has a PhD in philosophy from Columbia University.)

Richard Susskind is another historian of Artificial Intelligence. He extends AI’s history to its next generation, Artificial General Intelligence (AGI), a future discipline suggesting AI will continue to evolve until it can perform any intellectual task a human can.

AI was officially founded in 1956 at the Dartmouth Conference attended by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. These men laid the foundation of what became Artificial Intelligence. Conceptually, AI came from Alan Turing’s work before and during WWII, when he conceived the theoretical Turing machine and helped crack the German Enigma code.

McCarthy and Minsky were computer and cognitive scientists; Rochester was an engineer who became an architect of IBM’s first computer; Shannon and Turing were both mathematicians with an interest in cryptography and its application to code breaking.

Though not mentioned by Susskind, two women, Ada Lovelace and Grace Hopper, played roles in early computer creation (Lovelace as an algorithm creator for Charles Babbage in the 19th century, and Hopper as a computer scientist for the Navy who pioneered the compiler, which translates human-readable code into machine language).

Susskind’s history takes listener/readers to the next generation of AI with Artificial General Intelligence (AGI).

Susskind recounts the history of AI’s ups and downs. As noted in earlier book reviews, AI’s potential became known during WWII but went into hibernation after the war. Early computers lacked the processing capability to support complex AI models. The American federal government cut back on computer research for a time because expectations proved unrealistic given processing limitations, and AI research failed to deliver practical applications.

The invention of transistors in the late 1940s and 1950s and of microprocessors in the 1970s reinvigorated AI.

Transistor and microprocessor inventions addressed the processing limitations of earlier computers. John Bardeen, Walter Brattain, and William Shockley, working for Bell Laboratories, invented the transistor, which replaced bulky vacuum tubes and enabled smaller, more efficient electronic devices. In the 1970s Marcian “Ted” Hoff, Federico Faggin, and Stanley Mazor, working for Intel, integrated computing functions onto single chips and revolutionized computing. The world rediscovered the potential of AI with these improvements in processing power. McCarthy and Minsky refined AI concepts and methodologies.

With the help of others like Geoffrey Hinton and Yann LeCun, the foundation for modern AI was reinvigorated with deep learning, image recognition, and processing that improved probabilistic reasoning. AI accelerates human decision-making. Susskind suggests the creation of Artificial General Intelligence (AGI) blurs the line between human and machine control of the future.

With AGI, there is the potential for loss of human control of the future.

Societal goals may be unduly influenced by machine learning that creates unsafe objectives for humanity. The pace of change in society would accelerate with AGI which may not allow time for human regulation or adaptation. AGI may accumulate biases drawn from observations of life and history that conflict with fundamental human values. If AGI grows to become a conscious entity, whatever “conscious” is, it presumably could become primarily interested in its own existence which may conflict with human survival.

Like the historical growth of agriculture, religion, humanist enlightenment, the industrial revolution, and technology, AGI has become an unstoppable cultural force.

Susskind argues for regulation of AGI. Is Artificial General Intelligence any different from other world-changing cultural forces? Yes and no. It is different because AGI has wider implications: it reshapes, or may replace, human intelligence. One possible solution, noted by Ray Kurzweil, is the melding of AI and human intelligence to make survival a common goal. Kurzweil suggests humans should go with the flow of AGI, just as humanity did with agriculture, religion, humanism, and industrialization.

Susskind suggests restricting AGI’s ability to act autonomously with shut-off mechanisms or accessibility restrictions on human cultural customs. He also suggests programming AGI to have ethical constraints that align with human values and a rule of “do no harm”, like the Hippocratic oath of doctors for their patients.

In the last chapters of Susskind’s book, several theories of human existence are identified. Maybe the world and the human experience of it are only creations of the mind, not nature’s reality. Perhaps what we see, feel, touch, and do exist in a “Matrix” of ones and zeros, and AGI is just what humans think they see, not what is. Susskind speculates on the virtual reality developed by technology companies becoming humanity’s only reality.

AI and AGI are threats to humanity, but the threat is in the hands of human beings. As the difference between virtual reality and what is real grows less clear, the technology will be used by human beings who could, accidentally or out of prejudice or madness, destroy humanity. The same might be said of nuclear war, which is also in human hands. A.I. and A.G.I. are not the threat. Conscious human beings are the threat.

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals. However, if science gives consciousness (whatever that is) to A.I., all bets are off. The end of humanity may be in that beginning.

SURVEILLANCE SOCIETY

The author makes a point in “The Dream Hotel”, but her book is a tedious repetition of the risk of human digitization that is a growing concern in this 21st century world.

Books of Interest
 Website: chetyarbrough.blog

The Dream Hotel (A Novel)

By: Laila Lalami

Narrated By:  Frankie Corzo, Barton Caplan

Laila Lalami (Moroccan-American novelist, essayist, and professor, earned a PhD in linguistics, finalist for the Pulitzer Prize for “The Moor’s Account”.)

Laila Lalami imagines a “Brave New World” in which algorithms predict the probability of lethal criminal behavior. She creates a nation-state with a behavior-monitoring and detention system for every human who might commit a lethal crime. The growing collection of data about human thought and action lends her premise a level of truth and possibility.

Lalami creates a state that monitors, collates, and creates probability algorithms for human behavior.

To a degree, that state already exists. The difference is that the algorithms are used to get people to buy things in capitalist countries and to jail or murder people in authoritarian countries. One might argue America and most Western countries are in the first category while Russia and North Korea are in the second.

Lalami’s detention system, like many bureaucratic organizations, is inefficient and bound by rules that defeat its ideal purpose.

A young mother named Hussein is returning from a business trip. She is detained because of data collected about where she has been, what she did on her trip, her foreign-sounding name, and the kind of relationship she has with her husband and twin children. An algorithm built from a profile of her life flags her: her risk score sits slightly above the probability threshold for someone who might kill her husband. Of course, this is ridiculous on its face. Whether she would ever murder her husband or not, her detention rests on inherent errors of behavioral prediction and bureaucratic confusion.
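The novel’s premise, a score compared against a detention threshold, can be illustrated with a toy sketch. The names, scores, and cutoff are invented; the point is how a hair’s-breadth margin separates “free” from “detained”:

```python
# Toy illustration of threshold-based "pre-crime" flagging.
# The threshold and scores are invented for illustration; a
# score slightly over an arbitrary line is not a crime.

THRESHOLD = 0.50  # invented detention cutoff

def flagged(risk_score, threshold=THRESHOLD):
    """True when a risk score crosses the detention threshold."""
    return risk_score > threshold

citizens = {"Sara": 0.52, "Neighbor": 0.49}
detained = [name for name, score in citizens.items() if flagged(score)]
print(detained)  # ['Sara'] — detained on a 0.03 margin
```

The sketch makes Lalami’s absurdity concrete: the two scores differ by three hundredths, yet one person keeps her freedom and the other loses it.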

Every organization or bureaucracy staffed by human beings has a level of confusion and inefficiency that is compounded by information inaccuracy.

That does not make the organization bad or good, but it does mean that, like today’s American government’s bad decisions on foreign aid or FDA bureaucracy, it can throw the baby out with the bathwater. Lalami’s point is that detention based on one’s name, family relationships, and a presumed prediction of murder drawn from a digitized life is absurd. Algorithms cannot fully predict or explain human behavior. At best an algorithm has a level of predictability, but life is too complex to be measured by a fictive number.

The author makes a point in “The Dream Hotel”, but her book is a tedious repetition of the risk of human digitization that is a growing concern in this 21st century world.

RISK/REWARD

AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

Books of Interest
 Website: chetyarbrough.blog

A Brief History of Artificial Intelligence (What It Is, Where We Are, and Where We Are Going)

By: Michael Wooldridge

Narrated By: Glen McCready

Michael Wooldridge (Author, British professor of Computer Science, Senior Research Fellow at Hertford College University of Oxford.)

Wooldridge served as President of the International Joint Conference on Artificial Intelligence from 2015 to 2017 and President of the European Association for AI from 2014 to 2016. He received a number of A.I.-related service awards in his career.

Alan Turing (1912-1954, Mathematician, computer scientist, cryptanalyst, philosopher, and theoretical biologist.)

Wooldridge’s history of A.I. begins with Alan Turing, who holds the honorific title of “father of theoretical computer science and artificial intelligence”. Turing is best known for breaking the German Enigma code in WWII with electromechanical code-breaking machinery; after the war he designed the Automatic Computing Engine. He also devised the Turing test, which evaluates a machine’s ability to answer questions in a way that exhibits human-like behavior. Sadly, he is equally well known for being a publicly persecuted homosexual who committed suicide in 1954. He was 41 years old at the time of his death.

Wooldridge explains A.I. has had a roller-coaster history of highs and lows with new highs in this century.

Breaking the Enigma code is widely acknowledged as a game changer in WWII; it shortened the war and provided strategic advantage to the Allied powers. However, Wooldridge notes AI’s utility declined in the 70s and 80s because applications relied on laborious, hand-crafted programming rules that introduced biases, ethical concerns, and prediction errors. Expectations of A.I.’s predictability seemed exaggerated.

The idea of a neuronal connection system was proposed in 1943 by Warren McCulloch and Walter Pitts.

In 1958, Frank Rosenblatt developed the Perceptron, a program based on McCulloch and Pitts’s idea that made computers capable of learning. However, it was a cumbersome process that failed to give consistent results. After the 80s, machine learning became more usefully predictive with Geoffrey Hinton’s development of backpropagation, i.e., an algorithm that feeds prediction errors backward through a network to correct its internal weights and improve A.I. predictions. Hinton went on to popularize neural networks in 1986 that worked like the synapse structure of the brain but with far fewer connections. A limited neural network for computers led to a capability for reading text and collating information.
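The perceptron’s core idea, nudging weights in proportion to the prediction error, can be shown in a toy example that learns the logical AND function. This is a modern sketch, not Rosenblatt’s original program:

```python
# Toy perceptron learning the logical AND function. Weights are
# corrected by the prediction error — the core idea that
# backpropagation later generalized to multilayer networks.

def train_perceptron(samples, epochs=10, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            predicted = 1 if w0 * x0 + w1 * x1 + bias > 0 else 0
            error = target - predicted          # 0 when correct
            w0 += lr * error * x0               # nudge weights toward
            w1 += lr * error * x1               # the right answer
            bias += lr * error
    return lambda a, b: 1 if w0 * a + w1 * b + bias > 0 else 0

and_gate = train_perceptron(
    [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])
print([and_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 0, 0, 1]
```

A single-layer perceptron like this can only learn linearly separable functions, which is the very limitation that stalled the field until multilayer networks and backpropagation arrived.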

Geoffrey Hinton (the “Godfather of AI” won the 2018 Turing Award.)

Then, in 2006, Hinton developed the Deep Belief Network, a type of generative neural network that led to deep learning. Networks with more connections improved image recognition, speech processing, and natural language understanding. Google, which crawls and indexes the internet, later acquired a deep learning company. Fact-based decision-making and the accumulation of data paved the way for better A.I. utility and predictive capability.

Face recognition capability.

What seems lost in this history is the fact that all of these innovations were created by human cognition and creation.

Many highly educated and inventive people like Elon Musk, Stephen Hawking, Bill Gates, Geoffrey Hinton, and Yuval Harari believe the risks of AI are a threat to humanity. Musk calls AI a big existential threat and compares it to summoning a demon. Hawking felt AI could evolve beyond human control. Gates expressed concern about job displacement that would have long-term negative consequences with ethical implications that would harm society. Hinton believed AI would outthink humans and pose unforeseen risks. Harari believed AI would manipulate human behavior, reshape global power structures, and undermine governments.

All fears about AI have some basis for concern.

However, how good a job has society done throughout history without AI? AI is only a tool of human beings and will be misused by some leaders in the same way atom bombs, starvation, disease, climate, and other maladies have harmed the sentient world. AI is more of an opportunity than a threat to society.

THINKING

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

Books of Interest
 Website: chetyarbrough.blog

Rebooting AI (Building Artificial Intelligence We Can Trust)

By: Gary Marcus and Ernest Davis

Narrated By: Kaleo Griffith

These two academics explain much of the public’s misunderstanding of the current benefit and threat of Artificial Intelligence.

Marcus and Davis note that A.I. cannot read and does not think but only repeats what it is programmed to report.

They are not suggesting A.I. is useless but that its present capabilities are much more limited than the public believes. In terms of product search and economic benefit to retailers, A.I. is a gold mine. But A.I.’s ability to safely move human beings in self-driving cars, free humanity from manual labor, or predict cures for disease remains far in the future. A.I. is only a just-born baby.

Self-driving cars, robot servants, and cures for medical maladies remain works in progress for Artificial Intelligence.

Marcus and Davis note A.I.’s usefulness remains fully dependent on human reasoning. It is a tool for recalling documented information and performing repetitive work. A.I. is not sentient and is not capable of reasoning over the information in its memory. Its answers are based on whatever information has been fed to it; it recites responses drawn from stored data rather than reasoning about the question. If its sources conflict, the answers one receives may be right, wrong, conflicted, or unresponsive. One can as easily get a wrong answer from A.I. as a right one because it only repeats what it has gathered from the past.

What Marcus and Davis show is how important it is that questions asked of Microsoft’s Copilot, ChatGPT, Watson, or some other A.I. platform be phrased carefully.

The value of A.I. is that it can help one recall pertinent information only if questions are precisely worded. This is a valuable supplement to human memory, but it is not a reasoned or infallible resource.
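Why phrasing matters can be illustrated with a toy word-overlap retriever. The “knowledge base” entries are invented; the sketch shows that a vague query is ambiguous while a precisely worded one pins down the intended fact:

```python
# Toy illustration: a retrieval system matches words; it does not
# reason about what the asker meant. Entries are invented.

knowledge_base = {
    "mercury boiling point": "Mercury (the element) boils at 356.7 C.",
    "mercury distance from sun":
        "Mercury (the planet) orbits about 58 million km from the Sun.",
}

def answer(query):
    """Return the stored entry sharing the most words with the query."""
    words = set(query.lower().split())
    best_key = max(knowledge_base, key=lambda k: len(words & set(k.split())))
    return knowledge_base[best_key]

# A vague query is a coin flip between the element and the planet;
# a precisely worded one retrieves the intended fact.
print(answer("mercury"))
print(answer("how far is mercury from the sun"))
```

The vague query “mercury” overlaps both entries equally, so which answer comes back is arbitrary, which is exactly Marcus and Davis’s warning about imprecise questions.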

Marcus and Davis explain “Deep Learning” is not a substitute for human reasoning, but it is a supplement that offers more precise recall of recorded information.

Even multilayered neural networks, like those used in deep learning, which attempt to mimic human reasoning by finding patterns in raw data, can be wrong or confused. One is reminded of the Socratic belief that “I know that I know nothing.” Truth is always hidden within a search for meaning, i.e., a gathering of information.

The true potential of A.I. lies in its continued consumption of every source of information so it can respond to queries from a comprehensive knowledge base. The idea of an A.I. that can read, hear, and collate all the information in the world is at once frightening and thrilling.

The risk is the loss of human freedom. The reward is the power of understanding. However, the authors explain there are many complications before A.I. can usefully capitalize on all the information in the world. Information has to be understood in the context of its contradictions, its ethical consequences, its biases, and the inherent unpredictability of human behavior. Even with knowledge of all the information in the world, decisions based on A.I. do not ensure the future of humanity. Should humanity trust A.I. to recommend what is in its best interest based only on past knowledge?

Marcus and Davis argue A.I. is not, does not, and will not think.

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

MEDIA SELF-INTEREST

One may question Wynn-Williams’s title: Facebook’s “careless people” read more like calculating, self-interested managers than careless employees.

Books of Interest
 Website: chetyarbrough.blog

Careless People (A Cautionary Tale of Power, Greed, and Lost Idealism)

By: Sarah Wynn-Williams

Narrated By: Sarah Wynn-Williams

Sarah Wynn-Williams (Author, ex-Meta executive, presently barred from criticizing Meta, formerly known as Facebook.)

As noted in the subtitle of “Careless People”, Meta (formerly known as Facebook) is criticized as an international influencer of society that has lost its sense of ethics, i.e., the ability to see the difference between right and wrong. Facebook originally intended to be a forum for connecting people interested in sharing ideas, communicating with others, and building positive social connection. Instead, the author’s experience as a Facebook executive was that expansion, profit, and political influence became an unethical pursuit of the major shareholders (particularly Mark Zuckerberg) and managers of the corporation. She argues Facebook’s leadership recklessly pursued income, expansion, and political influence around the world with little ethical oversight.

New Zealand (The birthplace of Sarah Wynn-Williams)

Ms. Wynn-Williams was born in New Zealand but went to work for Facebook and became a U.S. citizen. Her work led to a promotion to Director of Global Public Policy, which gave her the opportunity to travel the world soliciting business for Facebook in other countries. Her experience shows listeners how Meta’s corporate goal, “give people the power to build community and bring the world closer together”, became something less as a result of careless management oversight.

Wynn-Williams begins with a story of a harrowing trip to Myanmar, presumably after the military coup in 2021.

The military coup that ousted the democratically elected government appears to have just begun when Wynn-Williams had an audience to pitch the Facebook platform to the military government. Just getting to the building where the meeting was to be held was a trial, one that ended with her arrival at a headquarters building of the new regime. The story is interesting because it shows the power of the Facebook name in a country that had just had a coup d’état ending civilian rule. Millions of Myanmar citizens were displaced amid widespread human rights abuses, civilian arrests, and violence. One wonders what “giving people the power to build community” means in what became a military totalitarian state. (When visiting the Baltics last year, our guide expressed a love for Myanmar’s citizens and the country but was told by Myanmar friends it is unsafe to visit since the coup.)

Wynn-Williams worked directly with Sheryl Sandberg, the COO of Facebook.

Later, Wynn-Williams describes a meeting with a Japanese official, to which she and Sandberg went to promote interest in Facebook, which had not yet been part of the Japanese media environment. Wynn-Williams’s role was primarily to support Sandberg’s pitch. She notes Sandberg was quite complimentary of her assistance after the meeting, which gives context to their relationship. A description of Sandberg’s strong, sometimes harsh, personality and influence on Facebook employees follows. The Japan meeting was successful; Facebook entered the market in 2010. Its popularity is said to have declined, with Instagram and LINE now the dominant platforms, but Facebook maintains a presence in the country.

Society’s interconnectedness is a boon and bane for the 21st century.

The pandering of Zuckerberg, Bezos, Musk, Cook, and Pichai to world governments is made suspect by Wynn-Williams’s experience as an employee of Facebook. Media companies have become too big to fail and too ungovernable to manage. Even though the internet more intimately connects the world, the platforms of today’s giants of information create a forum for control and conflict rather than a place that encourages social comity.

Robert Kaplan (Author of “Waste Land”.)

As noted by Robert Kaplan in “Waste Land”, the decline of Russia’s, China’s, and America’s governments has been accelerated by world interconnectedness. From Wynn-Williams’s experience at Facebook, there appears to be some truth in Kaplan’s observation. Kaplan’s solution is to dismantle these giants and encourage competition to defray their principal stockholders’ influence.

As the Turkish saying goes, “a fish rots from the head down”. Wynn-Williams’s frequent contact with Mark Zuckerberg gives weight to her view of Facebook culture. Zuckerberg seems to lead Meta carelessly into the arena of politics, promoting Facebook’s media clout to political parties because it raises revenue from political advertising and influences government policy on media regulation. Frighteningly, Wynn-Williams notes Zuckerberg considered running for President with the power of Meta behind his candidacy. One may question the book’s title: Facebook’s “careless people” read more like calculating, self-interested managers than careless employees.

TOO LATE

Ideally, public good and ethics will be taught in advance of the melding of technology and government, i.e., not after mistakes are made. However, history suggests humans will blunder down the road of experience with A.I., making mistakes, and trying to correct them after they occur.

Books of Interest
 Website: chetyarbrough.blog

The Technological Republic (Hard Power, Soft Belief, and the Future of the West)

By: Alexander C. Karp and Nicholas W. Zamiska

Narrated By: Nicholas W. Zamiska

The authors are the founder and operations manager of the American software company Palantir Technologies. Palantir has been hired by the U.S. Department of Defense, the Intelligence Community, agencies of NATO countries, and Western corporations to provide analytic platforms for defense analysis, healthcare, finance, and manufacturing.

They believe artificial intelligence research and development has lost its way.

They argue Silicon Valley has lost focus on what is important for survival of society and Western values. They suggest A.I. should be focusing on serving humanity in ways that responsibly regulate nuclear weapons and protect society from existential risks like climate change, pandemics, asteroid collisions, etc., that threaten human extinction. The authors provide a powerful criticism of technology and its national purpose.

Karp and Zamiska argue that technology is focusing on consumerism rather than nuclear annihilation or existential risk.

By focusing on convenience and entertainment for financial success, fundamental problems like the threat of nuclear war, homelessness, inequality, and climate change are ignored or relegated to the trash heap of history. (“Trash heap of history” is the belief that what happens, happens, and society can do nothing about it.) The West has become complacent, with a short-term focus on profit and consumer demand. The authors argue the greater good is no longer thought of as an important societal goal. The primary goal is making money that enriches creators and company owners by making purchases more convenient for consumers.

Aldous Huxley (English writer and philosopher, 1894-1963, author of “Brave New World”.)

Their argument is that there should be more collaboration between tech and government. Historically, government is only as good as the information it has for making societal decisions. A computer program can be fed false information, like the erroneous “weapons of mass destruction” intelligence that led to the ill-advised invasion of Iraq, or the domino theory that led to the Vietnam War, and so on. There is also the threat of an elected President who uses the power of technology to do the wrong thing out of incompetence. And there is the risk of government gathering personal information and using it to cross the line into a “Brave New World”, where innovation, free thought, and independent action are discouraged or legislated against so that people can be jailed for breaking the law.

Possibly, melding technology with government is an answer, but it is a chicken and egg concern. Education about public good and ethical practices should begin as soon as the egg cracks, not after hatchlings are already old enough to work. Phrases that come to mind are “What’s done is done” or “The die is cast”.

The authors argue the West needs to up its game if it wishes to create a peaceful and prosperous future for a society founded on the ideal of human freedom.

Without future generations creating policies based on ethical purpose for the public good, one infers western culture will spiral into individual isolation and self-interest that diminishes western culture and ideals.

Ideally, public good and ethics will be taught in advance of the melding of technology and government, i.e., not after mistakes are made. However, history suggests humans will blunder down the road of experience with AI, making mistakes and trying to correct them after they occur.

AI REGULATION

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.

Books of Interest
 Website: chetyarbrough.blog

The Coming Wave

By: Mustafa Suleyman with Michael Bhaskar

Narrated By: Mustafa Suleyman

This is a startling book about AI because it is written by an AI entrepreneur, the founder and former head of applied AI at DeepMind, who is now CEO of Microsoft AI. The authors argue something many who discount the threat of AI fail to understand: AI can collate information to create societal solutions, as well as threats, that are beyond the thought and reasoning ability of human beings.

“The Coming Wave” is startling because it is written by two authors who have an intimate understanding of the science of AI.

They argue it is critically important for AI research and development to be internationally regulated with the same seriousness that accompanied the research and use of the atom bomb.

Those who have read this blog know this writer's perspective: AI, whether or not it carries greater risk than the atom bomb, is a tool, not a controller, of humanity. The AI threat example given by Suleyman and Bhaskar is that AI has the potential to invent a genetic modification that could as easily destroy humanity as improve it. Recognizing AI's danger is commendable, but as with the atom bomb, there will always be the threat of miscreant nations or radicals using a nuclear device or AI to initiate Armageddon. Obviously, if AI is the threat they suggest, there needs to be an antidote. The last chapters of "The Coming Wave" offer their solution: the authors suggest a ten-step program to regulate or ameliorate the threat of AI's misuse.

Like programs for alcoholism and nuclear deterrence, Suleyman's program will only be as effective as the willingness of people to follow its rules.

There are no simple solutions for the regulation of AI, and as history shows, neither Alcoholics Anonymous (AA) nor the Treaty on the Prohibition of Nuclear Weapons (TPNW) has been completely successful.

Suleyman suggests the first step in regulating AI begins with creating safeguards around the vast capabilities of large language models (LLMs).

This will require hiring technicians to monitor and correct inaccurate or misleading information accumulated and distributed by AI users. The concern of many will be the restriction of freedom of speech. Two further concerns are the cost of such a bureaucracy and who monitors the monitors: Who draws the line between fact and fiction? When does deleting information become a distortion of fact? This bureaucracy would be responsible for auditing AI models to understand their capabilities and limitations.

A second step is to slow the pace of AI development by controlling the sale and distribution of AI hardware components, providing more time to review the impact of new developments.

With lucrative incentives for new AI capabilities in a capitalist system, there is likely to be strong resistance from aggressive entrepreneurs and believers in free trade and free speech. Leaders of authoritarian countries will be equally incensed by interference in their right to rule.

Transparency is a critical part of the vetting process for AI development.

Suleyman suggests critics need to be involved in new developments to balance greed and power against utilitarian value. There has to be an ethical examination of AI that goes beyond profitability for individuals or control by governments. The bureaucracies for development, review, and regulation should be designed to adapt, reform, and implement regulations to manage AI technologies responsibly. These regulations should be established through global treaties and alliances among all nations of the world.

Suleyman acknowledges this is a big ask and notes there will be many failures in getting cooperation or adherence to AI regulation.

That was and is true of nuclear armament, and since World War II no nation has used nuclear weapons to attack another country. The authors note there will be failures in trying to institute these guidelines, but with the help of public awareness and grassroots support, there is hope for the greater good that can come from AI.

As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to ignore its threat or abandon the effort.