Yes, AI will take our jobs as we know them. But who said it will replace us?
If you were around during the famous Garry Kasparov versus Deep Blue chess matches that catapulted AI into the spotlight in the 1990s, you’ll remember the anticipation the world felt as we watched the most exciting battle of man against machine since the Industrial Revolution.
Considered the greatest chess player in history at the time, Kasparov won his 1996 match against IBM’s supercomputer 4–2. The world cheered. We had put our strongest human mind up against the strongest computer, and the human mind had won. But as we were gloating, the programmers at IBM kept working, and in 1997, Kasparov and Deep Blue matched wits again.
Going into the sixth and final game of their rematch, the man and machine were tied 1–1 with three draws. Just 19 moves in, however, Kasparov shocked the world by resigning for the first time in his career. The chess community was confounded by his decision, pointing out that the position was still playable. Kasparov’s response was an inexplicable, “I lost my fighting spirit.” Society at large was disappointed, but we quickly reassured ourselves that chess is a very analytical game, and of course computers were bound to best us eventually. The world moved on.
In the years since Kasparov’s resignation, the old-fashioned AI that powered Deep Blue has given way to “limited memory AI,” which uses deep learning algorithms and vast amounts of data to constantly improve its output. While today’s AI powers everything from social media to self-driving cars to the detection of disease, its rise has gone largely unquestioned — especially by creative professionals who have always believed that the cold logic of computers could never threaten our livelihoods. Even among those who admit that the so-called general AI we see in movies such as 2001: A Space Odyssey and Her could someday compete, most doubt such a breakthrough will happen in our lifetimes, if ever.
However, the proliferation of large language models (LLMs), such as ChatGPT and Bing, along with AI image generators, is finally challenging our stubborn belief that our careers — and creativity — are untouchable.
I (Ronnie) asked ChatGPT to write a nuanced story about some friends, and its output felt so eerily real that it seemed like a magic trick. But despite being impressed, I still believed that creative talent would always need to oversee AI-generated content. Amy, my coworker and collaborator, has closely followed the rise of computer-generated copy since Persado AI hit the scene in 2012. She argued that within five years, marketing teams would handle content oversight themselves, and that copywriters would be wise to consider their next moves.
In the end, we agreed that AI will change the world in ways we can’t possibly imagine, and although it’s inevitable that the technology will soon absorb many of the tasks we currently consider integral to our disciplines, it’s equally inevitable that the creative mind is uniquely suited to survive — and thrive — in the new ecosystem.
If creative professionals’ most stereotypical flaw is our oversized egos, perhaps it’s not surprising that even the most AI-aware among us take for granted that our disciplines will be the last to fall. When Amy admitted to a designer that she’d always thought UX design would go first, she was taken aback by his shock. It turns out that a copywriter’s vanity can’t help but assert that it’s only a few short steps from a 1990s-era chess computer to a complex series of algorithms that can iterate thousands of designs based on the best-performing sites on the entire internet, refine those designs against a brand’s strongest competitors, run mock user testing with dummy data, and output a handful of no-fail designs ready for immediate implementation … all within seconds.
No — present-day AI isn’t yet capable of such a feat, but if this seems far-fetched, we need only examine what ChatGPT is saying about itself. Marketer Paras Surya recently posted on LinkedIn about an exchange he had with the LLM. When Surya entered, “How AI is making us not to be as creative,” ChatGPT’s output might as well have been a response to “Tell me you’re taking my job without telling me you’re taking my job.” According to ChatGPT, “AI has the potential to augment human creativity by providing new tools and resources to generate ideas, identify patterns, and test hypotheses.” In short, it precisely described the functions that underlie Amy’s futuristic UX generator.
Even in its primitive state, AI is disrupting the creative industry. Whether we want to admit it or not, a significant portion of today’s LLM-generated content is good enough to work with, and any ROI-minded marketer will readily admit that it’s hard to justify paying for “great” content when “good enough” is free. The obvious conclusion is that it’s time to cede the grunt work — image searches, customer relationship management (CRM) creative, and certain UX elements, to name a few — to the bots and move into “big idea” territory. After all, limited memory AI, by its very nature, will never “think” like a human.
While this is certainly true, we should keep in mind that if limited memory AI is already competent enough to take over just a small portion of creative duties, it’s only a matter of time before deep learning algorithms will crack the code on creativity itself. We may view our craft through a superstitious lens, but that doesn’t mean an AI application won’t find a method to our madness. Where there’s a pattern, AI will find it, and when it finds a pattern, AI will eventually replicate it.
In the beginner’s mind there are many ideas; in the expert’s mind there are few.
In the end, it matters very little whether general AI ever comes to pass, at least where our livelihoods are concerned. Limited memory AI is more than capable of faking it well enough to compete with the human mind, even when it comes to ideas.
Years after his resignation against Deep Blue, Kasparov clarified that his position was only playable if the computer failed to play a perfect game, and he simply no longer believed it would make a suboptimal move. True as that may be, it’s a far cry from the deeply human emotion expressed in “I lost my fighting spirit.” As creative professionals facing similar odds, it’s tempting to indulge in the same maudlin mindset, or worse, to become “AI Luddites.”
In fact, 21st-century copywriters have more in common with 19th-century knitters than most of us realize. “Luddite” may have become a modern moniker for anyone who eschews the use of technology, but the historical Luddites embraced all machines except one: the stocking frame. Oddly enough, stocking frames had quietly existed for two centuries before the Luddites started smashing them to bits, but nobody cared about the machines until textile manufacturers stopped paying artisans to operate them. Similarly, AI-generated copy has existed since 2012, with JPMorgan Chase officially “hiring” Persado AI in 2019. Most copywriters just didn’t notice until marketers could use the technology for free.
Comparisons aside, the only useful wisdom we can glean from the Luddites is that the unwillingness to accept and embrace new technology doesn’t tend to age well. My first sense that I was falling into this trap came when I asked a Rutgers user experience design (UXD) student whether he had concerns about how Google Glass might impact privacy. I was surprised when the student responded, “Gen Xers love to talk about privacy, but you don’t understand. We never had it, so we don’t value it the same way you do.” Although Google Glass now tops most lists of “bad tech,” the conversation should remind us that generational values are not necessarily static.
College student Edward Tian speaks for many adults today when he says, “Humans deserve to know when something is written by a human or written by a machine.” But neither his ideals nor the app he’s developing to combat the use of ChatGPT in academia are likely to stand the test of time. AI will continue its inexorable expansion — and future generations, inhabiting a world in which AI has always existed, are unlikely to care all that much. So, why are we so optimistic?
Ignoring criticisms of creatives’ second-most-stereotypical flaw — pathological open-mindedness — it’s crucial to remember that longtime creatives have already adapted to tremendous change as an analog world became increasingly digital. When I started teaching the UXD course at Rutgers and Amy was just a few years into her copywriting career, smartphones were in their infancy, the iPad didn’t exist, and responsive design was wishful thinking. Testing a taxonomy was still done on index cards, one at a time.
How happy were we to hand over that painful task to online tools that can globally test thousands of users in seconds? In the last decade, countless manual duties that used to define creative careers have been eliminated by increasingly powerful software — or handed off to entirely new disciplines such as content design. From our perspective, the coming transition to AI-based tools is the natural next step in technology’s ongoing evolution, and new creative fields will continue to appear as the AI era takes shape.
We’re not minimizing the massive upheaval the creative industry is about to experience, and we’re certainly not saying that we shouldn’t ever question AI. But ending the debate at “Will it take our jobs?” ignores the potential benefits of a powerful creative ally. Yes, AI will take our jobs as we know them, and the sooner we accept that, the happier we’ll be. But who said it will replace us?
Returning to Paras Surya’s LinkedIn post, ChatGPT’s response also recommended that we “view [AI] as an opportunity to expand the boundaries of human creativity and explore new possibilities.” As strange as it feels, we agree. It’s time to embrace AI, immerse ourselves in understanding it, and explore ways in which we can humanize it.
Framing the perspective as one of openness versus dismissal, curiosity versus fear, and wonder versus discomfort will help us move into an uncertain future — and looking to the past may yet again illuminate our way. Long before Garry Kasparov lost his fighting spirit and the Luddites fought and lost, Zen philosophers were teaching the concept of Shoshin, which posits, “In the beginner’s mind there are many ideas; in the expert’s mind there are few.”
For the creative professional, embracing Shoshin means letting go of being an “expert” writer or designer and instead contemplating what’s behind the words and images. The truth is that ideas are our true creative playground, regardless of how we choose to express them. Of course, it would be hubris to assume that AI will never compete with the human mind in the realm of the abstract, or that human ideas will always be better than those of a machine. But when AI inevitably catches up, it will find itself on an even playing field.
After all, there is no “perfect game” in idea-land. Sometimes we’ll win and sometimes we’ll lose — but the matchup will always be exciting and the game will always be worth playing.