
The sorcerer’s apprentice: A cautionary tale of integrating AI at work

By Adam Thies

Three best practices for managing the risks of this powerful technology

Have you ever watched the 1940 Disney film Fantasia and witnessed the captivating tale of The Sorcerer’s Apprentice? In this segment, based on the 18th-century poem by Johann Wolfgang von Goethe, Mickey Mouse serves as a young apprentice to a powerful sorcerer. Mickey is mesmerized by the magical powers that his employer wields and longs for the day he too can perform magic. Instead, his workday is filled with the drudgery of fetching pails of water.

One day after the sorcerer goes to bed, Mickey steals the sorcerer’s magic hat and enchants a broom to do his manual labor for him. With his work on autopilot, Mickey falls asleep dreaming of all he will be able to do with his newfound powers. He awakens to find the workshop flooded with water and panics as he realizes that he doesn’t know the magic necessary to stop the broom. In Goethe’s poem, the broom keeps flooding the workshop until the apprentice cries out to the sorcerer, “The spirits that I summoned, I now cannot banish.”


Indistinguishable from magic

Does this tale sound familiar? This timeless narrative is eerily reminiscent of how generative artificial intelligence (AI) has entered the modern workplace. Any knowledge worker who has experimented with the magic-like capabilities of generative AI tools like ChatGPT knows firsthand the awe Mickey feels as he covets the sorcerer’s magical powers.

Generative AI and ChatGPT seem like magic. Enter a question and an answer appears instantly, often with surprising accuracy. Copy and paste a complex legal text, academic paper, or business memo into ChatGPT, and in seconds you have an easy-to-read summary. It recalls the well-known third law of Arthur C. Clarke, the science fiction writer, futurist, and creator of the HAL 9000: “Any sufficiently advanced technology is indistinguishable from magic.”

When confronted with advanced technology, we are often astounded. Our expectations run rampant with thoughts of the potential efficiency, productivity, and time saved. But it is not magic. It is a tool. In the case of ChatGPT, that tool is a chatbot built on a large language model (LLM), a machine learning system that uses statistics to predict which word is most likely to follow the prompt it was given. These models are trained on datasets consisting of billions of sentences found on the internet.
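To make the “tool, not magic” point concrete, here is a minimal, hypothetical sketch of next-word prediction in Python. It stands in for the deep neural network of a real LLM with simple word-pair counts over a toy corpus, but the principle is the same: the model emits whichever word is statistically likely to come next, with no built-in notion of whether the result is true.

```python
import random
from collections import Counter, defaultdict

# A toy training corpus; real models train on billions of sentences.
corpus = (
    "the apprentice fetched the water "
    "the apprentice enchanted the broom "
    "the broom fetched the water"
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    candidates = following[word]
    return random.choices(list(candidates), weights=list(candidates.values()))[0]

# Generate a short continuation of a prompt, one word at a time.
text = ["the"]
for _ in range(5):
    text.append(predict_next(text[-1]))
print(" ".join(text))
```

A real LLM replaces the word-pair counts with billions of learned parameters and conditions on the entire prompt rather than a single word, but the output is still a probability-weighted guess.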

And just like any tool, generative AI has its limits. The foundational building blocks of ChatGPT are deep neural networks, which operate in the realm of probability, not certainty. In layman’s terms, that means the information it gives you is sometimes false. This tendency to err is known as “hallucinating.” Heather Desaire, a professor at the University of Kansas, describes hallucinations as “kind of like the game Two Truths and a Lie.” It is difficult to distinguish what is true from what is false without expert knowledge of the subject at hand.


The dangers of viewing AI as magic

There is great danger in adopting AI tools into individual workflows with the mindset that AI is magic. If you are not careful, then like a dreaming Mickey, you too will suffer the consequences of unchecked hallucinations.

This is already happening. In May the New York Times reported that a lawyer who used ChatGPT to prepare a court filing could face sanctions for including incorrect information. The submitted brief referenced six prior cases that did not exist. In response, the lawyer told the court he was “unaware that [ChatGPT’s] content could be false.”

Sometimes the consequences of hallucinations can be downright harmful. An AI chatbot hosted on the National Eating Disorders Association (NEDA) website, designed to help people who suffer from eating disorders, was taken down after it started giving out dieting advice. To make matters worse, before the chatbot’s rollout NEDA laid off the staff and volunteers who ran its legacy hotline service. As a result, thousands were left without a vital health resource for a disease that kills one person every 52 minutes in the United States.


Why we should not ban AI technology at work

So, what should organizational leaders do? Should they act like the Luddites and shun this advanced technology? Should they operate as if generative AI does not exist and put policies in place banning workers from using AI tools?

These are not realistic options. The cat is out of the bag and ChatGPT is not going anywhere anytime soon. Even when organizations enforce policies to restrict the use of ChatGPT, employees still use it to write emails, memos, and so forth—they are just using it on their personal devices and not telling their bosses.

Not only is banning this technology unrealistic, it would be unwise, because the technology is incredibly useful. For instance, a report from the National Bureau of Economic Research found that generative AI can boost the productivity of novice and low-skilled workers by up to 35%, although the effect on more experienced employees was minimal. Like any tool, it is useful when it is properly understood and applied.

So then, how can organizational leaders safely encourage their talent to integrate AI into their daily workflows without being inundated with false information and bad decisions? The answer is that they need to act like the sorcerer and train their apprentices.


How to use AI like a responsible sorcerer

Mickey’s desire to have the same skills as the sorcerer was not unjustified. His role as an apprentice was to observe and learn from the expert beside him. The same should be true of us as we integrate AI into our daily work. We should be observing and learning from experts so that we can gain experience in a supervised and safe environment. Organizations can encourage responsible AI usage by doing the following three things:

  • Build digital literacy
  • Always have an expert in the loop
  • Embrace boring productivity gains

1. Build digital literacy: The 30% rule

Paul Leonardi, professor of technology management at UC Santa Barbara, and Tsedal Neeley, professor of business administration at Harvard, argue in their book The Digital Mindset that to thrive in the modern world of digital work we need to become digitally literate. To achieve literacy, they recommend the 30% rule, which states: “you only need 30% fluency in a handful of technical topics.” Think about it: if you want to travel to France, you do not need to know every French vocabulary word. Knowing 30% of the language is enough to ask for directions, order a meal, and hold a basic conversation.

The same applies to using AI tools. The average professional in HR or marketing does not need to know how to code, build an AI model, or master the intricacies of deep learning. But they do need to build their 30%: knowing that these things exist, how they work in broad strokes, and where their limitations lie. Without that knowledge, they will be just like the lawyer who was unaware that AI content could be false.

2. Always have an expert in the loop

Organizations can learn a lesson from the law firm Allen & Overy. They were an early adopter of Harvey, an OpenAI-funded startup that applies generative AI, powered by GPT-4, to legal work. After initial pilot testing, they rolled out the tool to 3,500 lawyers across 43 offices.

David Wakeling, a partner and the head of Markets Innovation Group at Allen & Overy, credits the success of their adoption to the use of change management, strong governance, and always having an expert in the loop. He explains, “Almost never is a work product from Harvey the final thing that is used. I would say never. I’ve never encountered that as an outcome. It’s always used to save a bit of time, and then someone finishes it and takes it the last mile with the right expertise.”

Whenever generative AI is implemented within an organization for a task that requires a high level of accuracy, there should be an expert who double-checks the work produced by the AI model.
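As a rough illustration of that principle (not a depiction of Allen & Overy’s actual system), an expert-in-the-loop workflow can be as simple as making human sign-off a mandatory step before any AI draft becomes a deliverable. Everything in this sketch is hypothetical: the Draft type, the placeholder generate_draft function, and the review step.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    content: str
    reviewed_by: Optional[str] = None  # stays None until an expert signs off

def generate_draft(prompt: str) -> Draft:
    # Placeholder for a call to a real generative AI service.
    return Draft(content=f"AI-generated draft for: {prompt}")

def expert_review(draft: Draft, expert: str, corrected_content: str) -> Draft:
    # The expert edits the draft and takes responsibility for the result.
    return Draft(content=corrected_content, reviewed_by=expert)

def finalize(draft: Draft) -> str:
    # Refuse to ship anything no expert has signed off on.
    if draft.reviewed_by is None:
        raise ValueError("Unreviewed AI output cannot become a final work product.")
    return draft.content

draft = generate_draft("Summarize the key risks in this contract.")
reviewed = expert_review(draft, expert="J. Doe",
                         corrected_content=draft.content + " [verified by counsel]")
print(finalize(reviewed))
```

The design point is that the AI output is structurally incapable of reaching the client without a named human reviewer attached to it, which mirrors Wakeling’s “last mile” description above.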

3. Embrace boring productivity gains

There is an additional key lesson we can learn from Allen & Overy: embrace boring productivity gains. Wakeling did not integrate Harvey to automate the firm’s entire product delivery process. Instead, he describes the value they expect as “boring productivity gains.” They use the tool to create first drafts for incremental elements of work, with the goal of saving each lawyer an hour or two of work a week. That is hardly the time-saver Mickey envisioned! It is a modest expectation. But in aggregate, the numbers add up: an hour and a half a week, multiplied across 3,500 lawyers over a working year, is on the order of a quarter of a million hours.


Peter Drucker’s sage advice: The importance of more accurate information

In his book Management Challenges for the 21st Century, Peter Drucker predicted that the new information revolution would focus on the “shift from the ‘T’ in IT to the ‘I.’” He argued that institutions would focus on the meaning and purpose of information, not on the technological systems for collecting, storing, transmitting, and presenting data that defined the second half of the 20th century.

By this, he meant that we would no longer use technology for technology’s sake, but rather to cultivate meaningful information that helps individuals make better-informed decisions. Not all information is meaningful. For information to be useful, it must correspond to reality to some degree. Stephen Wolfram describes the value of technology as “taking what’s out there in the world, and harnessing it for human purposes.” The real benefit of technology for knowledge work will come not from its magical ability to generate a lot of information quickly, but from its ability to help us produce accurate information that is right for the specific problem we are facing at that moment.

There is no doubt that AI tools will help us perform our work quickly, but we must also recognize when the desire for speed and efficiency creates unnecessary risk. It is crucial to approach generative AI with the mindset that it is a tool, and to have a basic understanding of its capabilities and limits. This will allow us to integrate AI into our workflows cautiously and responsibly. Fail to do so, and our organizations will be flooded like Mickey’s workshop, drowning not in water but in AI-induced hallucinations, inaccurate information, and unnecessary risk.

This blog post was originally published here.


