Survival of the AI Adaptive
AI doesn't replace jobs, it replaces tasks and creates opportunities to flourish.
Every day seems to bring more fake news of AI’s rise and humanity’s demise. This one, Generative AI Is a Net Loss for Workers, is classic gloom and doom. But if you check its sources, you’ll find interviews with CEOs who “think” AI “may” replace jobs “someday.” For example, IBM CEO Arvind Krishna said, “I could easily see 30 percent of jobs getting replaced by AI and automation over five years,” and Goldman Sachs reports that AI and automation “may reduce or replace” 300 million jobs.
These aren’t facts, they’re rationalizations for layoffs and predictably poor predictions.1 The truth is that AI doesn’t replace jobs, it replaces tasks and elevates those willing to try it.
Third-Party Worry
Kevin Kelly, the founding executive editor of Wired magazine, calls AI-job-loss fear-mongering Third-Party Worry. A friend has a friend who can’t find a job — they blame AI. Press releases say AI might impact jobs in five years. Goldman Sachs says so.
Kelly stepped up his fight against third-party worry on the Tim Ferriss podcast. “Not a single person will lose their job to AI,” he claimed. Ferriss was shocked. “Seriously? Not one? Why be so black and white?”
Jokingly, Kelly offered $200 for the name of one human that AI replaced. Just one.
So far, his money is safe.2
The ATM & Survival of the Adaptive
Third-party fear of automation has swirled in our psyches for over 100 years. For example, in 1973, the New York Times predicted that ATMs would replace 75% of human tellers. Like reports about AI job loss, they cited various feelings and opinions.
What actually happened? It took until 1980 for ATMs to become commonplace. Over the next thirty years, from 1980 to 2010, teller jobs increased from 500,000 to 550,000 in the United States.3
Let that sink in: not only did tellers survive, but 50,000 more tellers were hired after ATMs became ubiquitous. Tellers thrived.
Why?
As MIT economist David H. Autor explains in Why Are There Still So Many Jobs? The History and Future of Workplace Automation, with cash counting delegated to ATMs, banks used branches to promote new services like credit cards, loans, and investment products. Sofas and cappuccino machines replaced rows of tellers dispensing money behind glass windows.
In other words, ATMs automated low-value tasks, and tellers evolved. The fastest-evolving tellers became branch managers and executives. Average tellers became better tellers. Those whose only skill was bill counting found something else to do.
That’s not job loss, it’s natural selection.
To Thrive With AI, Try AI
As with any new technology, to thrive with AI, you must use AI. Tellers learned to represent new products and services. They adapted to handle more complex tasks. As they evolved, automation lifted them into new and better positions.
But generative AI is different from ATMs because it augments human creativity directly. It generates unique images based on your prompts. It helps you check facts. It suggests improvements to your writing. But only if you try it.
Recently, I polled people about how they use AI. Most use AI a few times a day for elementary tasks, like a search engine.
That’s not enough. AI is like a camera — a deceptively simple yet powerful, mysterious, empowering tool. When cameras were introduced over 100 years ago, portrait artists revolted: “That’s not art!” Over time, great photography masters flourished, from Henri Cartier-Bresson to Annie Leibovitz.
Like cameras, AI enhances and unleashes what makes you unique. But like a camera, to understand it, you must use it. It’s survival of the technological adaptive.
Start Small and Climb
As I explain in The Joy of Generative AI Cocreation, you can start small — use Midjourney to generate images for your next presentation or Jasper to improve an email. Soon you may find yourself using AI every day. It’s simple, free, and fun to try.
Starting small is the best way to conquer third-party worry. And besides, you have no choice. AI is here for good and evolving at warp speed. For many, that’s scary, but it shouldn’t be.
Try it, you might love it. You might hate it. But try it.
Behavioral economists call our inability to predict the future planning fallacy. Daniel Kahneman and Amos Tversky coined this idea in a 1977 paper, Intuitive Prediction: Biases and Corrective Procedures. It explains that all of us — scientists, military intelligence, engineers, writers, doctors—rely on intuition and bias when we make predictions.
According to Kahneman and Tversky, humans have “major deficiencies in our judgments of uncertain events.” We’re usually overly optimistic or overly pessimistic. Overall, we’re terrible at making predictions when the future is uncertain.
https://tim.blog/2023/04/28/kevin-kelly-excellent-advice-for-living-transcript/
https://www.nytimes.com/1973/12/02/archives/machines-the-new-bank-tellers-response-to-automated-transactions-is.html