Books of Interest
Website: chetyarbrough.blog
The Coming Wave
By: Mustafa Suleyman with Michael Bhaskar
Narrated By: Mustafa Suleyman


This is a startling book about AI because it is written by an AI entrepreneur who is a co-founder and former head of applied AI at DeepMind and is now the CEO of Microsoft AI. The authors argue something not understood by many who discount the threat of AI: AI can collate information to create societal solutions, as well as threats, that are beyond the thought and reasoning ability of human beings.

“The Coming Wave” is startling because it is written by two authors who have an intimate understanding of the science of AI.
They argue it is critically important for AI research and development to be internationally regulated with the same seriousness that accompanied the research and use of the atom bomb.
Those who have read this blog know this writer’s perspective: AI, whether or not it carries greater risk than the atom bomb, is a tool, not a controller, of humanity. The AI threat example given by Suleyman and Bhaskar is that AI has the potential to invent a genetic modification that could as easily destroy as improve humanity. Recognizing AI’s danger is commendable, but, as with the atom bomb, there will always be a threat of miscreant nations or radicals using a nuclear device or AI to initiate Armageddon. Obviously, if AI is the threat they suggest, there needs to be an antidote. The last chapters of “The Coming Wave” offer their solution: the authors suggest a 10-step program to regulate or ameliorate the threat of AI’s misuse.

Like programs for treating alcoholism and treaties for nuclear deterrence, Suleyman’s program will only be as effective as those who choose to follow its rules.
There are no simple solutions for regulating AI, and, as history shows, neither Alcoholics Anonymous (AA) nor the Treaty on the Prohibition of Nuclear Weapons (TPNW) has been completely successful.

Suleyman suggests the first step in regulating AI is to create safeguards for the vast capabilities of large language models (LLMs).
This will require hiring technicians to monitor and correct incorrect or misleading information accumulated and distributed by AI users. The concern of many will be the restriction of freedom of speech. Two further concerns are the cost of such a bureaucracy and the question of who monitors the monitors. Who draws the line between fact and fiction? When does deleting information become a distortion of fact? This bureaucracy would be responsible for auditing AI models to understand their capabilities and limitations.
A second step is to slow AI development by controlling the sale and distribution of AI hardware components, providing more time to review the impact of new developments.

With lucrative incentives for new AI capabilities in a capitalist system, there is likely to be a great deal of resistance from aggressive entrepreneurs and from free-trade and free-speech believers. Leaders in authoritarian countries will be equally incensed by interference with their right to rule.

Transparency is a critical part of the vetting process for AI development.
Suleyman suggests critics need to be involved in new developments to balance greed and power against utilitarian value. There has to be an ethical examination of AI that goes beyond profitability for individuals or control by governments. The bureaucracies for development, review, and regulation should be designed to adapt, reform, and implement regulations to manage AI technologies responsibly. These regulations should be established through global treaties and alliances among all nations of the world.
Suleyman acknowledges this is a big ask and notes there will be many failures in getting cooperation or adherence to AI regulation.

That is and was true of nuclear armament, and yet, since World War II, no nuclear weapon has been used to attack another country. The authors note there will be failures in trying to institute these guidelines, but with the help of public awareness and grassroots support, there is hope for the greater good that can come from AI.
As Suleyman and Bhaskar imply, the difficulty of regulating AI is no reason to abandon the effort.
