Making sense of AI’s contradictions

Confronting the complexity of AI brings you closer to realizing the ROI.

Do I contradict myself? / Very well then I contradict myself, / (I am large, I contain multitudes.)

Walt Whitman

People are complex. Our standards vary, our beliefs can compete, and our firmly held opinions can evolve. 

In a 2015 New York Times opinion piece titled “The Virtue of Contradicting Ourselves,” organizational psychologist Adam Grant explains the science showing that most people resist the discomfort of holding two seemingly contradictory beliefs, values, or attitudes at the same time—a feeling that social psychologist Leon Festinger coined “cognitive dissonance” back in 1957.

When many people look at artificial intelligence (AI) and especially generative AI (GenAI), they balk at a level of complexity that rivals our own, craving a clear verdict that AI either brings good tidings or cause for concern. In a way, they’re trying to alleviate cognitive dissonance. Because just as we humans contradict ourselves, so too does AI—or at least our expectations for it do. On the one hand, AI promises unprecedented efficiency, innovation, and convenience. On the other hand, it raises profound questions that can’t be ignored.

  1. What happens when AI simultaneously supports sellers in creating more personalized, human experiences while also powering “machine customers” acting in place of human buyers?
  2. Is using AI to work faster than ever before a boon or a very real risk that employees and their employers should worry about?
  3. Does AI’s power-hungry hardware diminish the positive impact of AI’s applications in sustainability initiatives?
  4. In what ways does AI pose data security risks, and are there ways AI itself can be used to combat risk?
  5. Will AI help organizations modernize their legacy technology—and contribute to an increase in technical debt?

Implementing AI responsibly requires more than just technical expertise. It requires an awareness of the nuance in the potential outcomes. We confront that nuance in the following sections, each representing one or more contradictions in the conversation about AI with emphasis on contradictions that are relevant to today’s organizations.

While we can’t neatly resolve all the gray areas in the discussion of AI so that they’re black and white, good and bad, we can—and do—help organizations navigate AI’s contradictions to both mitigate risk and capitalize on the immense economic opportunity that AI presents. And let there be no doubt that the economic opportunity is immense.

  • On a global scale, GenAI could raise GDP by 7%—almost $7 trillion—according to research by Goldman Sachs.
  • At the industry level, insurance businesses worldwide could realize annual economic benefits worth $50 billion by using GenAI, which could boost their revenues by as much as 20% and cut their costs by up to 15%, according to research from Bain & Company.
  • In the workforce, 10% of tasks across close to 80% of the jobs in the US economy could be done twice as fast with GenAI—with no loss in quality—according to research published by Cornell University.

We understand that billion- and trillion-dollar promises of economic growth can be hard to process alongside present-day uncertainty about AI’s risks. But while the capabilities of AI may be new to us, the challenge of processing conflicting information is not.

As Grant argued in his op-ed, the discomfort of cognitive dissonance can be a precursor to evolving and changing our minds for the better. “One person’s flip-flopping is another’s enlightenment,” he writes. So, by opening our minds to the different and sometimes opposing facets of AI and GenAI, perhaps people can become more comfortable with a technology that seems to be evolving with or without us. Let’s try it, shall we?


Customer experience

For sellers, the appeal of using GenAI is clear—it’s engaging and (largely) efficient, and it so closely mimics person-to-person interactions that it can appeal to the same buyer motivations a flesh-and-blood salesperson can.

But not every consumer needs a high-touch sales experience, which is why machine customers are expected to become more widely used. This AI-driven buying has less to do with brand loyalty and more to do with efficiency, value, convenience, and necessity. Will machine customers force sellers to rethink the ways they interact with buyers?

Developer workforce

It has many names. Grunt work. Drudgery. Undifferentiated heavy lifting. It’s the type of work that slows employees down, tires them out, and wears away at their resolve to keep marching towards big goals that require many little steps. At Slalom, we’ve referred to it as “toil.”

One of the promises of AI and especially GenAI is that it will help teams and organizations reduce or eliminate toil in many different areas, including software development. With a reported 95% of developers already using AI tools to write code, we’re beginning to see whether that promise holds true. And the results are more nuanced than you might think.


Sustainability

When it comes to sustainability, technology can paradoxically be a contributing factor to the problems it’s trying to solve. This is certainly the case with GenAI. While it’s currently an incredibly energy- and resource-intensive technology, it can also be a valuable tool in organizations’ efforts to operate more sustainably.

Using GenAI to analyze energy and emissions data can provide valuable insights across an organization’s operations, even ones with multiple locations and extensive value chains. But it takes a holistic strategy to address this increasingly complex issue.


Data security

Mixed in with the excitement around generative AI are some questions about its risks. Many leaders are concerned about data privacy and security, but when we dig a bit deeper into these concerns, we learn that the real challenges are often not what people think.

GenAI tools come with extensive sets of guardrails and controls to protect organizations’ data throughout the model lifecycle. So, if data leakage isn’t the issue, what is, and how can organizations be sure they're operating as securely as possible? Surprisingly, GenAI can be a powerful ally in the battle to keep data safe.


Technical debt

How do you implement AI and GenAI when new tools and point solutions just keep getting released, your organization is already dealing with technical debt, and machine learning systems present new opportunities to accrue more of it?

Questions about AI infrastructure, what’s needed to implement it at scale, and its various challenges—and idiosyncrasies—are getting much more pressing as organizations transition from smaller-scale AI experiments to real-life solutions. But questions are good, and leaders looking to optimize the “I” in “ROI” as they begin to implement and scale AI solutions at their companies should be ready to explore them.

Let’s solve together.