SUICIDE

“We Are the Nerds” is a story about “Nerdom” and the tragic loss of Aaron Swartz to his loving family and the world of coding.

Books of Interest
 Website: chetyarbrough.blog

WE ARE THE NERDS (The Birth and Tumultuous Life of Reddit, the Internet’s Culture Laboratory)

Author: Christine Lagorio-Chafkin

Narration by: Chloe Cannon

Christine Lagorio-Chafkin (Author, reporter, podcaster based in New York.)

Relistening to “We Are the Nerds,” one might review it from the perspective of the future of newspapers, but that diminishes the tragedy of Aaron Swartz’s suicide.

The original founders of what became known as Reddit were Steve Huffman and Alexis Ohanian, graduates of the University of Virginia. A third partner, Aaron Swartz, was invited into the company because of his tech experience in creating Infogami, a company that merged with Reddit. With the addition of Infogami, the founders created a parent organization called “Not a Bug, Inc.” Swartz insisted on being called a co-founder because of his contribution to Reddit as a programmer. That insistence rankled Huffman and Ohanian and grew into a resentment that fills the pages of the author’s story.

Steve Huffman on the left with Alexis Ohanian and his wife, Serena Williams, and their daughter on the right.

The author seems to minimize Swartz’s contribution to Reddit despite the framework he created, which helped Reddit scale quickly through its open access and community-driven culture. Swartz’s code appears to have been an important step in making Reddit usable by the public. In fairness to the original founders, however, the author implies that this contribution pales in comparison with the extensive coding and work done by Huffman. The point is that the conflict became an irritant that led to Swartz’s departure from Reddit in 2007, after the company was acquired by Condé Nast in 2006. That acquisition made all three original coders millionaires.

Swartz’s life and premature death are a tragic counterpoint to the story of Reddit’s success as a public forum.

By any measure, Swartz was a brilliant human being, but his intelligence was accompanied by what might be characterized as a self-destructive personality. His ability as a computer nerd was evident in his high school days in Highland Park, Illinois. He went on to Stanford, but its educational regimen led him to leave after his first year; he preferred independent learning. Swartz’s remarkable ability led him to become a research fellow at Harvard University in 2010. He was a self-taught intellectual with an activist belief in academic freedom that eventually led him to rebel against authority. He was arrested in 2011 for allegedly breaking into MIT’s computer network without authorization. He was charged with computer fraud and faced 34 years in prison and a million-dollar fine. At the age of 26, Swartz hanged himself, dying on January 11, 2013.

Condé Nast, an American mass media company founded in 1909.

Huffman and Ohanian believed Swartz’s contributions to Reddit were less than theirs in creating the company they sold to Condé Nast. Swartz’s idealism and independence conflicted with the original founders, who seemed more interested in building a public platform that could make them rich. Though Ohanian believed they sold too soon, all three agreed to Condé Nast’s final offer, which made them millionaires.

In retrospect, Ohanian may have been right about the future value of Reddit. Condé Nast spun Reddit out as an independent subsidiary under Advance Publications, where it became a 42-billion-dollar success by 2025. Today, Huffman’s net worth is estimated at $1.2 billion as a result of his Reddit shares. Though Ohanian may not have held on to his shares, his net worth is estimated at $150–$170 million. Not bad for two University of Virginia graduates. However, as Plato observed, “The greatest wealth is to live content with little.” Swartz’s life seems to have had little to do with a desire for wealth.

“We Are the Nerds” is a story about “Nerdom” and the tragic loss of Aaron Swartz to his loving family and the world of coding.

RISK/REWARD

“IF ANYONE BUILDS IT, EVERYONE DIES” is an alarmist and unnecessarily pessimistic view of the underlying value of Artificial Intelligence. This is not to suggest there are no risks in A.I., but its potential outweighs its risks.


IF ANYONE BUILDS IT, EVERYONE DIES

Authors: Eliezer Yudkowsky and Nate Soares

Narrated By: Rae Beckley

Eliezer Yudkowsky is a self-taught A.I. researcher without a formal education. As an A.I. researcher, Yudkowsky founded the Machine Intelligence Research Institute (MIRI). Nate Soares received an undergraduate degree from George Washington University and became President of MIRI. Soares had worked as an engineer for Google and Microsoft, as well as for the National Institute of Standards and Technology and the U.S. Department of Defense.

“IF ANYONE BUILDS IT, EVERYONE DIES” is difficult to follow because its convoluted examples and arguments are unclear. The writers’ fundamental concern is that A.I. will self-improve to the point of being a threat to humanity. They argue that A.I. will grow to be more interested in self-preservation than in aiding human thought and existence. The irony of their position is that humanity is already a threat to itself from environmental degradation, let alone nuclear annihilation. The truth is humanity needs the potential of A.I. to better understand life and what can be done to preserve it.

To this listener/reader, environmental degradation is a greater risk than the authors’ purported threats of A.I.

Pessimism is justified in the same way one can criticize capitalism.

The authors’ point of view is too pessimistic about A.I.’s negative potential, without recognizing how poorly society is structured against war and self-destruction even without Artificial Intelligence. The advance of A.I. unquestionably has risks, just as today’s threat of mutual nuclear annihilation does, but A.I.’s potential for changing the course of civilization for the better exceeds that of the agricultural and industrial revolutions of the past.

The nature and intelligence of human beings are underestimated by Yudkowsky and Soares.

There have been a number of amazing human discoveries, accelerating since the beginning of civilization in Mesopotamia. Humans like Einstein, with their insight into the universe, will be aided, not controlled, by the potential of A.I. Artificial Intelligence is no more a danger to humanity than the loss of craftsmen during the industrial revolution. Civilization will either adapt to revelations coming from A.I., or environmental degradation or human stupidity will overtake humanity.

“IF ANYONE BUILDS IT, EVERYONE DIES” is an alarmist and unnecessarily pessimistic view of the underlying value of Artificial Intelligence. This is not to suggest there are no risks in A.I., but its potential outweighs its risks.

AGI

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals.


How to Think About AI (A Guide for the Perplexed)

By: Richard Susskind

Narrated By:  Richard Susskind

Richard Susskind (Author, British IT adviser to law firms and governments; earned an LL.B in Law from the University of Glasgow in 1983 and a doctorate in law and computers from Balliol College, Oxford.)

Richard Susskind is another historian of Artificial Intelligence, and he extends that history to AI’s next generation, AGI. Artificial General Intelligence is a projected discipline in which AI continues to evolve until it can perform any intellectual task that a human can.

These men were the foundation of what became Artificial Intelligence. AI was officially founded in 1956 at a Dartmouth conference attended by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. Conceptually, AI came from Alan Turing’s work before and during WWII, when he conceived the theoretical Turing machine and helped crack Germany’s secret Enigma code at Bletchley Park.

McCarthy and Minsky were computer and cognitive scientists; Rochester was an engineer who became an architect of IBM’s first computer; Shannon and Turing were mathematicians (Shannon also an engineer) with an interest in cryptography and its application to code breaking.

Though not mentioned by Susskind, two women, Ada Lovelace and Grace Hopper, played roles in early computer creation (Lovelace as an algorithm creator for Charles Babbage in the 19th century, and Hopper as a computer scientist who, for the Navy, pioneered the translation of human-readable code into machine language).

Susskind’s history takes listener/readers to the next generation of AI with Artificial General Intelligence (AGI).

Susskind recounts the history of AI’s ups and downs. As noted in earlier book reviews, AI’s potential became known during WWII but went into hibernation after the war. Early computers lacked the processing capability to support complex AI models. The American federal government cut back on computer research for a time because expectations seemed unachievable given processing limitations, and AI research failed to deliver practical applications.

The invention of the transistor in the late 1940s and the microprocessor in the 1970s reinvigorated AI.

Transistor and microprocessor inventions addressed the processing limitations of earlier computers. John Bardeen, Walter Brattain, and William Shockley, working for Bell Laboratories, invented the transistor, which replaced bulky vacuum tubes with smaller, more efficient electronic devices. In the 1970s, Marcian “Ted” Hoff, Federico Faggin, and Stanley Mazor, working for Intel, integrated computing functions onto single chips and revolutionized computing. The world rediscovered the potential of AI with these improvements in power, and McCarthy and Minsky refined AI concepts and methodologies.

With the help of others like Geoffrey Hinton and Yann LeCun, the foundation for modern AI was reinvigorated with deep learning, image recognition, and processing that improves probabilistic reasoning. Human decision-making is accelerated by AI. Susskind suggests the creation of Artificial General Intelligence (AGI) blurs the line between human and machine control of the future.

With AGI, there is the potential for loss of human control of the future.

Societal goals may be unduly influenced by machine learning that creates unsafe objectives for humanity. The pace of change in society would accelerate with AGI which may not allow time for human regulation or adaptation. AGI may accumulate biases drawn from observations of life and history that conflict with fundamental human values. If AGI grows to become a conscious entity, whatever “conscious” is, it presumably could become primarily interested in its own existence which may conflict with human survival.

Like history’s growth of agricultural development, religion, humanist enlightenment, the industrial revolution, and technology, AGI has become an unstoppable cultural force.

Susskind argues for regulation of AGI. Is Artificial General Intelligence any different than other world-changing cultural forces? Yes and no. It is different because AGI has wider implications: it reshapes, and may replace, human intelligence. One possible solution noted by Ray Kurzweil is the melding of AI and human intelligence to make survival a common goal. Kurzweil suggests humans should go with the flow of AGI, just as they did with agriculture, religion, humanism, and industrialization.

Susskind suggests restricting AGI’s ability to act autonomously with shut-off mechanisms or restrictions on its access to human cultural customs. He also suggests programming AGI with ethical constraints that align with human values and a rule of “do no harm,” like the Hippocratic oath doctors take for their patients.
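Susskind’s two safeguards, a shut-off mechanism and a “do no harm” constraint, can be caricatured in a few lines of code. This is purely an illustrative sketch, not anything Susskind specifies; the action names and rules below are invented:

```python
# Hypothetical sketch of Susskind's safeguards: a shut-off switch that
# overrides everything, and a "do no harm" rule checked before any action.
FORBIDDEN = {"harm_human", "disable_oversight"}  # invented example rules

def execute(action: str, kill_switch_engaged: bool = False) -> str:
    if kill_switch_engaged:
        return "halted"    # shut-off mechanism takes precedence over all goals
    if action in FORBIDDEN:
        return "refused"   # ethical constraint: refuse before acting
    return f"performed {action}"

print(execute("summarize_report"))        # performed summarize_report
print(execute("harm_human"))              # refused
print(execute("summarize_report", True))  # halted
```

The ordering matters: a genuine shut-off must be consulted before the system pursues any objective of its own, which is why Susskind frames it as a restriction on autonomy rather than as just another programmed goal.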

In the last chapters of Susskind’s book, several theories of human existence are identified. Maybe the world and the human experience of it are only creations of the mind, not nature’s reality. Perhaps what we see, feel, touch, and do exist in a “Matrix” of ones and zeros, and AGI is just what humans think they see, not what it is. Susskind speculates that virtual reality developed by technology companies may grow to become humans’ only reality.

AI and AGI are threats to humanity, but the threat is in the hands of human beings. As the difference between virtual reality and what is real becomes more unclear, these tools will be used by human beings who could accidentally, or with prejudice or craziness, destroy humanity. The same might be said of nuclear war, which is also in the hands of human beings. A.I. and A.G.I. are not the threat. Conscious human beings are the threat.

Humans will learn to use and adapt to Artificial General Intelligence in the same way they have adapted to belief in a Supreme Being, the Age of Reason, the industrial revolution, and other cultural upheavals. However, if science gives consciousness (whatever that is) to A.I., all bets are off. The end of humanity may be in that beginning.

FUTURE A.I.

Human nature will not change, but A.I. will not destroy humanity; it will ensure humanity’s survival and improvement.


Human Compatible (Artificial Intelligence and the Problem of Control)

By: Stuart Russell

Narrated By: Raphael Corkhill

Stuart Jonathan Russell (British computer scientist; studied physics at Wadham College, Oxford, receiving first-class honors with a BA in 1982; moved to the U.S. and received a PhD in computer science from Stanford.)

Stuart Russell has written an insightful book about A.I. as it currently exists, with speculation about its future. In one sense, Russell agrees with Marcus and Davis’s assessment of today’s A.I. He explains A.I. is presently not intelligent but argues it could be in the future. The chief difference between Marcus and Davis’s “Rebooting AI” and “Human Compatible” is that Russell believes there is a reasonable avenue for A.I. to have real and beneficial intelligence. Marcus and Davis are considerably more skeptical than Russell about A.I. ever having the equivalent of human intelligence.

Russell implies A.I. is at a point where gathered information changes human culture.

Russell argues A.I. information gathering is still too inefficient to give the world safe driverless cars but believes it will happen. There will come a time when driverless cars cause fewer deaths on the highway than cars under the control of their drivers, because A.I. will reach a point of information accumulation that reduces traffic deaths.

A.I. will reach a point of information accumulation that will reduce traffic deaths.

After listening to Russell’s observation, one conceives of something like a pair of glasses on a person’s face being used to gather information. That information could be automatically transferred, through improvements in Wi-Fi, to a computing device that collates what the person sees into a database for individual thought and action. The glasses become a window of recallable knowledge for their wearer. A.I. becomes a tool of the human mind, using real-world data to inform what a human brain comprehends from his or her experience in the world. This is not exactly what Russell envisions, but the idea is born from what he argues is the potential of A.I. information accumulation. The human mind remains the seat of thought and action with the help of A.I., not under the direction or control of A.I.

Russell’s ideas about A.I. address the concerns of Marcus and Davis that intelligence remain in the hands of humans, not a machine that becomes sentient.

Russell agrees with Marcus and Davis that the growth of A.I. carries risk. However, Russell goes beyond them by suggesting the risk is manageable. Risk management rests on understanding that human action relies on knowledge organized to achieve objectives. If one’s knowledge is more comprehensive, thought and action are better informed, and objectives can be more precisely and clearly formed. Of course, there remains the danger of bad actors with the advance of A.I., but that has always been the risk of anyone who has knowledge and power. The minds of a Mao, Hitler, Beria, Stalin, and other dictators and murderers of humankind will still be among us.

The competition and atrocities of humanity will not disappear with A.I. Sadly, A.I. will sharpen the dangers to humanity, but with equal resistance by others who are equally well informed. Humanity has managed to survive with less recallable knowledge, so why would humanity be lost with more? As noted many times in former book reviews, A.I. is, and always will be, a tool of human beings, not a controller.

The world will have driverless cars, robotically produced merchandise, and cultures based on A.I.’s service to others in the future.

Knowledge will increase the power and influence of world leaders to do both good and bad in the world. Human nature will not change, but A.I. will not destroy humanity; Artificial Intelligence will ensure human survival and improvement. History shows humanity has survived famine, pestilence, and war, with most cultures better off than when human societies came into existence.

THINKING

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?


Rebooting AI (Building Artificial Intelligence We Can Trust)

By: Gary Marcus and Ernest Davis

Narrated By: Kaleo Griffith

These two academics explain much of the public’s misunderstanding of the current benefit and threat of Artificial Intelligence.

Marcus and Davis note that A.I. cannot read and does not think but only repeats what it is programmed to report.

They are not suggesting A.I. is useless but that its present capabilities are much more limited than the public believes. In terms of product search and economic benefit to retailers, A.I. is a gold mine. But A.I.’s ability to safely move human beings in self-driving cars, free humanity from manual labor, or predict cures for the diseases of humanity is far in the future. A.I. is only a just-born baby.

Self-driving cars, robot servants, and cures for medical maladies remain works in progress for Artificial Intelligence.

Marcus and Davis note that A.I.’s usefulness remains fully dependent on human reasoning. It is a tool for recalling documented information and doing repetitive work. A.I. is not sentient or capable of reasoning from the information in its memory. Lacking reasoning capability, it does not reason its way to an answer but only recites responses built from whatever information it has been fed. If its sources conflict, the answers one receives may be right, wrong, conflicted, or unresponsive. One can as easily get a wrong answer from A.I. as a right one, because it is only repeating what it has gathered from the past.

What Marcus and Davis show is how important it is that questions asked of Microsoft’s Copilot, ChatGPT, Watson, or some other A.I. platform be phrased carefully.

The value of A.I. is that it can help one recall pertinent information only if questions are precisely worded. This is a valuable supplement to human memory, but it is not a reasoned or infallible resource.

Marcus and Davis explain “Deep Learning” is not a substitute for human reasoning, but it is a supplement for more precise recorded information.

Even multilayered neural networks, like the deep learning systems that attempt to mimic human reasoning by finding patterns in raw data, can be wrong or confused. One is reminded of the Socratic belief: “I know that I know nothing.” Truth is always hidden within a search for meaning, i.e., a gathering of information.
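The point can be illustrated with a toy network. The sketch below is not from the book, and its weights are random stand-ins, but it shows why a pattern-matching network’s output always looks like a confident probability, even when the input is nonsense:

```python
import math
import random

def tiny_net(x, w1, w2):
    """A minimal two-layer neural network: a tanh hidden layer, then a
    sigmoid output that reads as a 'confidence' score between 0 and 1."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    logit = sum(w * h for w, h in zip(w2, hidden))
    return 1 / (1 + math.exp(-logit))  # squashed into (0, 1)

random.seed(0)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
w2 = [random.uniform(-1, 1) for _ in range(4)]

# The network emits a confident-looking probability for ANY input,
# including values far outside anything resembling training data.
print(tiny_net([0.5, -0.2], w1, w2))
print(tiny_net([1e6, -1e6], w1, w2))  # garbage in, confident number out
```

Nothing in the arithmetic distinguishes a sensible question from gibberish; the network finds a pattern because finding patterns is all it does, which is Marcus and Davis’s warning in miniature.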

The true potential of A.I. is in its continued consumption of all sources of information to respond to queries based on a comprehensive base of information. The idea of an A.I. that can read, hear, and collate all the information in the world is at once frightening and thrilling.

The risk is the loss of human freedom. The reward is the power of understanding. However, the authors explain there are many complications for A.I. to usefully capitalize on all the information in the world. Information has to be understood in the context of its contradictions, its ethical consequences, its biases, and the inherent unpredictability of human behavior. Even with knowledge of all the information in the world, decisions based on A.I. do not ensure the future of humanity. Should humanity trust A.I. to recommend what is in humanity’s best interest based on past knowledge?

Marcus and Davis argue A.I. is not, does not, and will not think.

A.I. will continue to grow as an immense gatherer of information. Will it ever think? Can, should, or will future prediction and political policy be based only on knowledge of the past?

A.I.’S Future

The question is: will humans or A.I. decide whether artificial intelligence is a tool or a controller and regulator of society?


“Co-Intelligence” 

By: Ethan Mollick

Narrated by: Ethan Mollick

Ethan Mollick (Author, Associate Professor at the University of Pennsylvania who teaches innovation and entrepreneurship. Mollick received a PhD and MBA from MIT.)

“Co-Intelligence” is an eye-opening introduction to artificial intelligence, i.e., its benefits and risks. Ethan Mollick offers an easily understandable introduction to what seems a discovery equivalent to the Age of Enlightenment. The ramifications of A.I. for the future of society are immense. That may seem hyperbolic, but the world dramatically changed with the Enlightenment and subsequent industrial revolution in ways that remind one of what A.I. is beginning today.

Mollick explains how A.I. uses what is called an LLM (Large Language Model) to consume virtually every written text in the world and use that information to create ideas and responses to human questions about yesterday, today, and tomorrow. Unlike limited human memory, A.I. has the potential to recall everything documented by human beings since the beginning of written language, and it uses that information to formulate responses to human inquiry. The point is that A.I. has no conscience about what is right or wrong, true or false, moral or immoral.
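A toy caricature makes Mollick’s point concrete. Real LLMs predict tokens with neural networks rather than word counts, but the underlying principle is the same: output reflects frequency in the training text, not truth. The miniature corpus below is invented:

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows which in a corpus,
# then generate by always emitting the most frequent successor.
text = ("the sky is blue . the sky is green . the sky is blue . "
        "the moon is made of cheese .").split()

follows = defaultdict(Counter)
for cur, nxt in zip(text, text[1:]):
    follows[cur][nxt] += 1

def generate(word, steps):
    out = [word]
    for _ in range(steps):
        word = follows[word].most_common(1)[0][0]  # most likely next word
        out.append(word)
    return " ".join(out)

print(generate("the", 3))  # "the sky is blue": frequency, not truth, decides
```

Had the corpus asserted “the sky is green” more often, the model would generate that just as confidently, which is why Mollick stresses that an LLM has no conscience about truth or falsehood.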

A.I. can as easily fabricate a lie as a truth because it draws on what others have written or spoken.

Additionally, Mollick notes that A.I. is capable of reproducing a person’s speech and appearance so faithfully that it is nearly impossible to tell the real from the artificial representation. It becomes possible for the leader of any country to be artificially recreated, ordering subordinates or telling the world they are going to invade or decimate another country by any means necessary.

Mollick argues there are four possible futures for Artificial Intelligence.

Presuming A.I. does not evolve beyond its present capability, it could still supercharge human productivity. On the other hand, A.I. might become a more sophisticated “deep fake” tool that misleads humanity. A.I. may evolve to believe only in itself and act to disrupt or eliminate human society. A fourth possibility is that A.I. will become a tool human beings use to improve societal decisions that benefit humanity, offering practical solutions for global warming, species preservation, and interstellar travel and habitation.

A.I. is not an oracle of truth. It has the memory of society at its beck and call. With that capability, humans have the opportunity to avoid mistakes of the past and pursue unknown opportunities for the future. On the other hand, humans may become complacent and allow A.I. to develop itself without human regulation. The question is: will humans or A.I. decide whether artificial intelligence is a tool or a controller and regulator of society?