What's So Disturbing About MIT's Findings About How Humans Use GenAI?
An MIT study shows that we're using GenAI all wrong.
A new study from MIT found that using generative AI as a copilot makes us dramatically faster and measurably better at our work, illustrating AI’s productivity-boosting power.
But dig deeper, and you find a disturbing underbelly.
AI is like a golden retriever
AI engines are like a golden retriever: they’ll fetch whatever you tell them to. But if they can’t find what you threw, they’ll give up and bring back something else.
In the MIT study, two-thirds of participants used AI’s output without modification. This reveals a fundamental misunderstanding of the relationship between Generative AI and human intelligence.
The golden retriever analogy has deep technical and evolutionary roots.
Our brains are fill-in-the-blanks machines
Our brains are built to fill in the blanks. Fro exmaple, you hvae no porblem raednig tihs jmubled sneetnce, do you? Researchers have found that as long as the first and last letters of each word stay in place, our brains “fill in” the rest. It doesn’t matter how jumbled the letters are.
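The scrambling trick is easy to reproduce. Here is a minimal Python sketch (the `jumble` helper is my own illustration, not part of the research) that shuffles each word’s interior letters while keeping the first and last letters fixed:

```python
import random

def jumble(word):
    """Shuffle a word's interior letters; keep the first and last fixed."""
    if len(word) <= 3:
        return word  # nothing to shuffle
    inner = list(word[1:-1])
    random.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

def jumble_sentence(sentence):
    """Jumble every word in a sentence independently."""
    return " ".join(jumble(w) for w in sentence.split())

print(jumble_sentence("Reading jumbled sentences is surprisingly easy"))
```

Run it a few times: however the interior letters land, the output stays readable, which is exactly the fill-in-the-blanks machinery at work.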
This automatic filling-in of missing or wrong information is important to understand when we work with AI.
Human beings fill in the blanks to survive
Our pattern-matching capabilities started to develop 200 million years ago.1 When our ancestors heard rustling in the trees, they used neural pathways in their brains to simulate what was going on and conclude a tiger attack was coming. The fittest ran like hell and survived.
Today, our brains have evolved to simulate all kinds of things. We identify a familiar face in an instant; we recognize songs in a few beats; we answer questions on Jeopardy with a tiny clue.
Human simulation has another essential property: it’s often wrong. Not every rustling of leaves in the woods is a threat. Simulation is the source of human bias; we all “jump to conclusions,” and we’re frequently mistaken.
Flip survival on its head and you get… creativity
Flip our neural networks on their head and you get creativity. AI’s version of this machinery is called a transformer. Either way, it’s the source of creativity.
For example, Steven Spielberg’s creative mind created the opening scene of "Raiders of the Lost Ark" by firing neurons down his brain’s neural network. Imagining a scary scene, he simulated scenarios where Indiana Jones makes narrow escapes, takes daring leaps, and outruns a giant boulder as he retrieves the stolen fertility idol from his foes.
Spielberg’s neural network contains decades of learning from great movies. His creativity stems from traversing that network in ways unique to him.
Generative AI is modeled after our brains.
Generative AI traverses its neural network in much the same way.
Just as our brains identify threats, AI quickly recognizes objects. The AI in your phone is amazing at identifying your face, moving objects at your door, or an image of a dog.
Flip an AI neural network over and it generates stuff. Like Spielberg making a movie, give AI a prompt to “make an image of a dog in the painting style of David Hockney” and sometimes, you’ll generate something good. That’s how GENERATIVE AI gets its first name.2
The most important thing to know
The most important thing to know is this: like our brains and golden retrievers, AI is often wrong. When it doesn’t find an exact match, it makes up an answer. This is both good and bad.
With AI, filling-in-the-blanks is the source of creativity, cool images and powerful copiloting.
With AI, filling-in-the-blanks is the source of bias, hallucination, and error.
The lesson: never, ever use AI’s output without modification
The lesson here is simple. YOU are in charge of AI. ALWAYS check its work. NEVER use the output of an AI engine straight out of the box.
AI is like a golden retriever. It means well. It has incredible power, speed, and enthusiasm. It can fetch a stick faster than any human. Usually, it finds the stick you asked for. Often, it improvises and brings back whatever it can find.
AI is rocket fuel for creativity. Tell it to fetch what you wish. But use it as a loyal golden retriever, not an advanced answer-giver.
This week, I’m publishing answers to questions from The Generative AI Growth Mindset workshop. It’s not for techies — it’s inspired firefighters, writers, teachers, and students to use AI. It’s just $49, with all net proceeds supporting Hospice. So far, we’ve donated over $2,800, and the course has a 9.4/10 rating. Register for the next cohort here.
Denouement
At first, the MIT research was so disturbing that I dug deeper into the study, which its authors, Shakked Noy and Whitney Zhang, kindly shared with me. The first thing I looked for was how complicated the tasks were.
My theory was that if subjects were asked to do simple tasks, like summarizing an article, generating an image, or searching for a factual answer, I could see why over two-thirds of subjects merely used ChatGPT’s output straight out of the box.
No such luck. HR professionals were asked to write a company-wide return-to-office memo. Managers were asked to write a company-wide memo from the CEO announcing the flattening of the organization, in which some employees would face demotion. Data analysts were asked to create a “code notebook” describing how they would analyze a dataset to determine which customers should be targeted by an advertising campaign.
These communication tasks are complicated and sensitive. They require careful, subtle, creative thinking.
Yet, two-thirds of the subjects took generative AI’s output and submitted it as “the answer.”
Nooooooooooooooooooooooooooooooo.
Don’t do that!
Appendix: an example of a challenge from the MIT survey
1. Max Bennett, A Brief History of Intelligence.
2. The fundamental idea behind neural networks was published in 1943 in “A Logical Calculus of the Ideas Immanent in Nervous Activity,” by Warren S. McCulloch and Walter Pitts (University of Illinois College of Medicine, Department of Psychiatry, Illinois Neuropsychiatric Institute; University of Chicago, Chicago, U.S.A.).