By Chet Yarbrough
Making Sense: Conversations on Consciousness, Morality, and the Future of Humanity
By: Sam Harris
Narrated by: Sam Harris, David Chalmers, David Deutsch, Anil Seth, Thomas Metzinger, Timothy Snyder, Glenn C. Loury, Robert Sapolsky, Daniel Kahneman, Nick Bostrom, David Krakauer, Max Tegmark
Sam Harris (American author, philosopher, neuroscientist, and podcast host.)
This audio presentation is a series of podcast interviews conducted by Sam Harris with some remarkable students of humanity. They discuss the meaning of morality, consciousness, and the future. Listeners will come away with a degree of wonder, appreciation, and, hopefully, understanding of what these men (sadly, no women) say about human intelligence, A.I., and the future.
Three points of view explained here can be taken to work tomorrow morning: 1) Human intelligence is not adequately represented by I.Q. tests. 2) Interviews of prospective candidates for a job will not tell an employer how well a candidate will do his or her job. 3) Machines can be programmed to be more efficient and less error-prone than humans in the production of goods and services.
Less immediate but more consequential points of view are: A) The advance of artificial intelligence has potential for both good and evil, a secular rather than religious morality in the eyes of these scientists. B) Leadership in A.I. is critical to the future of humanity. C) The future of work is indeterminate but is based on the physics of existence.
1) Intelligence comes from a brain gathering information and experience and using that gathering to provide order to thought and action. An I.Q. number is a measurement with limited insight into one's ordered thoughts and actions.
2) Job applicant interviews tell little about a candidate's ability to think and act based on the needs of a job. Experience in similar jobs is of some value, but interviews only reinforce an interviewer's prejudices and biases.
3) Machines can be programmed to be more efficient and less error-prone than humans in the production of goods and services.
- A) Intelligence comes from sentient life gathering information and experience to inform thought and action. A.I. designed only to act without thought is not intelligent. It is simply software for a machine designed to perform a task, like vacuuming the floor, turning a lathe, or assembling an automobile. Intelligence in those performances comes only from human supervisors. The only good or evil in that circumstance comes from the human supervisor.
The advance of artificial intelligence has potential for both good and evil.
In contrast, when a machine is programmed to gather all information available in its environment and to act on that information, it begins to reach beyond human control. The machine acquires some level of control over its own thought and action. The consequence can be death, as in the case of a car driven by A.I. On the other hand, accidents also occur with human drivers. What is the difference?
Self-driving Tesla wreck that caused a deadly crash.
The difference is that true A.I. will learn from past incidents and self-correct. This is a first step toward the creation of thought and action in intelligent machines. In the short term, in the case of automobiles, it benefits society by reducing accidents. In the long term, software programming that gathers information and acts independently of humans may give rise to a conscious "self" in machines that could replicate themselves to the point of a kind of evolution mirroring human nature's gene replication. Neither the private nor the public sector is adequately investing in safety when new A.I. products are created.
There is a brief allusion to the idea of melding man and machine but no discussion of the ramifications of a machine equipped with human emotion. Humans have historically killed each other. That ability may only be enhanced by melding a human brain with a machine. The black box of consciousness and the mechanics of consciousness are not revealed in these interviews.
It appears these podcast interviews either predate the idea of cortical columns in the brain noted by Jeff Hawkins (a neuroscience engineer) who wrote "A Thousand Brains", or the scientists interviewed by Harris do not find Hawkins's experimental evidence convincing.
An optimistic view voiced by Max Tegmark suggests there is potential for abundance created through the development of A.I. Humans have squandered much of the world's resources, which could be better managed by A.I. The need for human work could be exchanged for life's enjoyment if A.I. were safely employed to balance human life with the natural resources of the world.
- B) A.I. leadership is at a critical juncture. Software development needs more investment in transparency and safety. Boundaries must be defined and established to mitigate the potential for conflict between human intelligence and machine intelligence. Without investment in software transparency and the imposition of boundaries, authoritarianism is a threat from both humans and machines.
- C) History shows the future is indeterminate. The hope is that human thought and action will mitigate Luddite-like resistance to the productive potential of A.I. Leadership demands knowledge and experience when an evolving technology requires equal investment in transparency and safety as A.I. grows. Without leadership, a dystopian future, wrought by technology and the fundamental physics of life, becomes as likely as its opposite.
Leadership in A.I. is critical to the future of humanity.
Future prediction is an oxymoron, as evidenced by the history of change in agriculture and the industrial revolution. No one can reliably foresee the impact of A.I. on humanity. The future of A.I. is like God to Kierkegaard: humanity waits with fear and trembling.
This is only a cursory and inadequate review of Harris's fascinating interviews with fellow scientists.