The Rise of Natural Stupidity

Introduction

It’s a tale as old as time.  Every movement gives birth to a counter-movement.  Newton codified it as his third law: for every action there is an equal and opposite reaction.  While Newton was describing the behavior of material objects and the laws of physics, his observation holds true for social as well as physical movements.  Every strongly-held belief seems to engender the creation of an opposite belief.  Another way to put it is that for every Superman there is a Lex Luthor.  Religious worshippers butt heads with atheists; Republicans with Democrats; Yankee fans with Red Sox fans; the list is endless.  Now as we embark on the age of Artificial Intelligence (“AI”), we are witnessing the rise of its counter-movement, its nemesis: Natural Stupidity (“NS”).

It is always exciting to believe you are witnessing the birth of something.  Certainly that seems to be the prevailing sentiment about AI: that it heralds a new era of computer processing, and that its output is unlike anything computers have produced before.  That message appears deeply unsettling to many people.  Most of those unsettled by AI warn of the negative consequences of its deployment.  They advocate for limits on AI applications.  They want to restrict the output of AI, to prevent it from creating digital images and text documents.  There are some who call for certain AI applications to be made illegal; however, the United States federal government has resisted passing legislation to regulate AI.  Instead, presidential guidelines have been promulgated to influence AI developers.  These guidelines take the form of an executive order; as such they do not have the power of a legislatively passed statute.  One could say these guidelines are “aspirational” objectives that AI developers are encouraged to adhere to but are not required to abide by.

It is fair to say that the preponderance of voices urging caution about AI are the voices of people with experience.  Enough experience to know that what they now experience with AI is different from what they experienced before the advent of AI.  It is the difference that troubles them.  The less pre-AI experience you have, the less you are troubled by emerging AI technologies.  Teenagers are not troubled by AI; adults are.  Adults caution against rapid deployment of AI applications; teenagers are gung ho for full-throttle deployment.

This cautionary attitude regarding the deployment of AI is the hallmark of NS.  Whereas AI resides solely in the architecture of computers, NS resides solely in the attitudes of human beings.

 

Is the AI Controversy the Byproduct of an Eyeball Competition?

Every reporter wants to believe their story describes an important event and should be read.  When was the last time you saw a headline such as, “Another Ho-Hum Day in Dullsville, Read All About the Tedium”?  That does not attract eyeballs.  But a title like this, “How Nations are Losing a Global Race to Tackle AI’s Harms” (New York Times, December 6, 2023), is meant to make you think the AI apocalypse is just around the corner, and there will be no refuge on planet Earth.  Or what about this title, “The Unsettling Lesson of the OpenAI Mess” (New York Times, November 22, 2023)?  It suggests that you must be a doofus not to know that there was/is an AI mess.  And if you don’t want to continue as a doofus, you had better read this article and learn about the AI mess you may have missed.  These titles self-aggrandize their authors and their publications; they desperately cry out for the attention of eyeballs.  They exaggerate the importance of the topic they report on.

Which brings us to this question: is the dawn of AI, now being so thoroughly analyzed and trumpeted, really that important?  Is it really something so vastly different from what came before that it merits all this scrutiny?  Or is what we now read the output of some journalistic formula (perhaps the output of AI!?) designed to attract eyeballs and advertising revenue to a publication?

A fair question.  If we trace the arc of computing from the abacus to today’s technology, there are many watershed moments that appear to herald breakthroughs, that appear to disrupt the continuity of what preceded them.  But have they?  Just looking at the past fifty years, many technological advances appeared disruptive at the time of their creation only to later fade into a pattern of gradual evolution.  For example, the introduction of the personal computer in the 1970s and 1980s took computers out of the laboratory and put them in the home.  Windows transformed DOS into something more or less understood by humans.  Apple transformed PCs into objects more or less understood by humans.  Cell phones put the power of computers into a hand-held device and, many say, have changed our world.

Is AI any more revolutionary than any of these other inventions?  It would be hard to argue that what AI is doing now is more disruptive than when personal computers transformed the individual consumer into a computer operator.  Or when cell phones miniaturized computers into objects that could be held in one hand while taking a picture.  Can a process, a piece of software such as AI, disrupt behavior the way a physical object can?

 

The Turing Test.

The answer might be yes if we consider a test formulated almost 75 years ago by Alan Turing, called, appropriately, the “Turing Test.”  Turing proposed that if a computer could produce output that was indistinguishable from the output of a person, then one could say that the computer was capable of “thought.”  As Turing framed the test, if you were to carry on a conversation with two other entities, each concealed from you, one a person and the other a computer, and you could not differentiate your conversation with the computer from your conversation with the person, then the computer would have passed the Turing Test.  In other words, according to Turing, when a computer gets so good at computing that the output of its computation is indistinguishable from the output of a human brain, then the computer can be described as “thinking.”

If we believe the introduction of AI is different from the introduction of personal computing or the introduction of the cell phone, the difference might be this: when we interact with a personal computer or cell phone, we know we are using a device, a gadget; we are not interacting with another person.  The personal computer and the cell phone each flunks the Turing Test.  But when we interact with AI, we are not sure if we are reading the output of a computer or the output of a person.  It appears to pass the Turing Test; the computer is “thinking” like a person.  And that is unsettling.  Suddenly the origin of output is up for grabs.  Is the author of the output a human or a computer?  Can you pass this New York Times test that challenges the reader to determine whether the authors of several essays are fourth graders or ChatGPT?  No longer do we ask: to be or not to be?  Now we must ask: PC or not PC?

Those who believe in NS believe that if the output is from a PC and we don’t know it, then civilization will come to an end before climate change causes an ambient temperature rise of 1.5 degrees Celsius.  We must ask: what provokes this fear of AI?  What provides the grist that feeds the NS mill?  Is the output from AI any less reliable or more problematic than the output of humans?

 

Never Mind if We Are Talking to a Computer, Who Is Telling the Truth?

Let’s take the Turing Test and add a second part to it.  Let’s say that not only must the observer determine which of his communication partners is a human, he must also determine which partner is telling the truth.  I suspect that a believer in NS will presume the teller of truth is the human and that the computer is not to be trusted.  However, I say not so fast.  I believe it is just as likely, in fact even more likely, that the computer would provide a more truthful answer than the person.  Is that an irreverent statement?

I don’t think so.  I think part of what makes the Turing Test problematic, especially if we add part two to it, is that the stream of communication takes place with no context.  It is a dialog happening in a vacuum devoid of non-verbal cues.  Without non-verbal cues, information is lost at sea.  For example, consider this hypothetical:

Let’s say you are curious about the Israel–Hamas war in Gaza.  You don’t know much about the historical background leading up to the conflict.  So you seek information from various sources.  Perhaps your first instinct is to ask a friend.  The friend listens to your question about the war and offers her opinion about the events leading up to it.  You listen to what she says.  You process her words by extracting a meaning from each of them.  But at the same time, you create a context for how to interpret those meanings.  For example, if you know your friend is Jewish, you might interpret her words one way.  But if you know she is Palestinian, you might interpret her words a different way.  What if you knew that one of her cousins had been killed in the conflict?  What if you knew that one of her cousins had been taken hostage in the conflict?  There are myriad non-verbal cues and background pieces of information that we use to filter the meaning of verbal communications.  It is possible that the message we take away from a discussion with a person is vastly different from the denotative meanings of the words they used.  What we remember as the content of the communication is not just the words we heard or read; it is those words filtered by the context of the non-verbal cues and background information that shape their meaning.

Back to the Turing Test.  If we strip away all of the non-verbal cues that provide the context to the message we hear, can we believe the raw message from a person?  I have my doubts.  More than having doubts, I suspect words spoken or written by a person, stripped of all non-verbal cues, are more susceptible to distortion than the output from AI.  I suspect that an individual involved in a conversation with two unknown sources, with no frame of reference to interpret the biases of the sources, would probably derive a more accurate message from the AI than from the person.

How can we test such a hypothesis?  How about looking to the ultimate spokespersons of truth in our country, our Supreme Court?  We look to our Supreme Court to resolve disputes, to dispassionately interpret our laws, and to provide guidance by developing answers grounded in the truth of our laws.  In 1973, in Roe v. Wade, our Supreme Court justices told us that the truth was that our Constitution gave women the right to have an abortion.  However, in 2022, in Dobbs v. Jackson Women’s Health Organization, different Supreme Court justices told us that the truth was that our Constitution did not give women the right to have an abortion.  What is the truth of these conflicting opinions?  Is there truth?  Even among our most educated jurists, truth is relative.  Truth is the byproduct of bias, of belief, of prejudice.  It is not found in a dictionary.

 

Bias, if Recognized, Can be Informative.

Those who want to elevate NS over AI disregard the roles of bias and prejudice in formulating answers.  Or else, without cues to recognize those biases and prejudices, they are distrustful of answers produced by AI.  They can take solace in this fact: the output of AI is not without bias and prejudice.  The output of AI is just as flawed with human bias as is the message of a person.  If bias floats your boat, the tides of AI are plenty high with bias.  For an in-depth examination of the effects of bias in AI, read Joy Buolamwini’s exposé, Unmasking AI.  However, the bias and prejudice of AI cannot be detected by observing the author’s skin color, age, gender, or spoken accent.  Instead, bias and prejudice are injected into the code of AI by its programmers.  AI is programmed by persons who, like all persons, hold certain beliefs and biases.  These beliefs and biases shape the output of an AI platform.  Because bias is harder to detect in the output of AI than in words from a person, it is more insidious and more effective at manipulating the reader’s opinion.

For now, AI output is created by companies like Google, Microsoft, OpenAI, and Meta, companies that appear to have no overt political or socio-economic bias, although many consider the output of AI more representative of the views of liberal or “left-leaning” constituencies.  However, it will only be a matter of time before AI technology is private-labeled and offered by companies with known biases.  The software companies will package their AI platforms and license them to outlets such as National Public Radio and Fox News, which have well-known public personae with pre-defined political sympathies.  These companies will shape the output of their AI technology platforms by feeding those platforms information supportive of their respective biases.  As consumers, when we retrieve AI output from such private-labeled platforms with known biases, we will know how to construe the message in light of its author’s context.

An informed public should question and evaluate the veracity of the output of AI just as it should question the veracity of statements made by any person.  Only if we are lazy and naïve and accept the output of AI as absolute truth will we get into trouble.  Perhaps we would all do well to heed the words of Ronald Reagan who, paraphrasing a Russian proverb, told us to “trust, but verify.”

 

Will AI Turn Our Brains into Mashed Potatoes?

Some predict that the better AI gets, the stronger NS will grow.  That AI will displace thinking in human beings.  That our reliance on the output of AI will cause us to engage in problem solving less and less until we get dumber and dumber.  They envision a world in which our brains atrophy while the neural connections between mind, eye, and mouse click grow ever stronger.  I think this is a pessimistic view of the impact of new technology on human skill sets and behavior.  It is a view recycled through history and repeatedly disproven.  Slide rules were going to destroy a person’s ability to do arithmetic.  Ditto adding machines and calculators.  Computers were going to displace the jobs of millions of workers and hasten our intellectual decline.  None of these prophecies came true.  Instead they were the predictions of persons, typically older persons, who were trying to graft new technology onto pre-existing activities.  What these prognosticators failed to predict were the new activities brought into the world by the emerging technologies.  True, new technology renders some existing behaviors obsolete.  However, it also creates the possibility of new behaviors not foreseeable prior to its advent.  It is the new possibilities that naysayers fail to consider.  Abundant are the critics; precious few are the visionaries like Steve Jobs, who said, “Let’s go invent tomorrow rather than worrying about what happened yesterday.”

Consider the impact of mapping technology.  For millennia, people have plotted the routes of their journeys using paper maps.  Then came MapQuest, which printed out guided routes for travelers to follow on their journeys; no longer did a person have to buy a map or plot a journey.  Then came the real-time, interactive, graphic technology of Google Maps and Waze that eliminated paper altogether.  In fact, they eliminated the necessity of keeping a drawer stuffed with paper maps.  Good-bye, or maybe hello to downsizing, Rand McNally.  What was the impact of this technology?  At first many felt it would cause the map-reading neuronal cluster in our brains to atrophy, which would then domino into overall stupidity and an inability to read anything, be it map or non-map.  Embedded in this view was the belief, never fully articulated, that somehow a person was better off as a map reader than as a map non-reader: that map reading was a valuable and essential skill.  Which may have been true when maps were needed.  But they were no longer needed with the new technology.  Perhaps those map-reading skills that earned merit badges with the binary/boy/girl scouts were no longer essential.  But, by ditching that rigorous mental exercise in favor of mouse clicks on a graphical map, did we become dumber?  I don’t think so.  Instead, we became more adventurous and traveled to places we would never have traveled to before, because before it was too much of a hassle to read the maps needed to get there.  So technology did not dumb us down; instead it educated us by enabling us to travel to new places and experience new things.  Dare I say, it made the human experience more enjoyable?  Make no mistake, we still get lost; however, now it is a lot farther from home.

 

Conclusion – We Can All Benefit from a Dose of NSAIDs.

Clearly the sentiments expressed in the above paragraphs are not without debate.  Every sentence could be debated by advocates on both sides (assuming just two sides) of the views expressed.  I believe a public dialog, a debate, about the benefits and drawbacks of AI should take place to create a more sophisticated, well-educated public.  There should be a series of debates between those who believe in the benefits of Natural Stupidity and those who believe in the benefits of Artificial Intelligence.  The debates should be called the Natural Stupidity – Artificial Intelligence Debates, or “NSAIDs.”  It is my hope that regular doses of NSAIDs, prescribed by industry authorities and consumed by an informed public, would cure the public of its irrational fears of AI and reduce the high-temperature flare-ups between warring factions.

Copyright 2023, Peter Kelman, Esq.  All rights reserved.