Leaders cut capability building and lower decision quality
Leaders expect ChatGPT to raise performance without requiring expertise, but output quality still depends on the user's knowledge. When leaders cut capability building, people rely on ChatGPT, cannot judge its outputs, accept plausible responses as correct, and make worse decisions.
Leaders expect performance to rise automatically
Leaders assume ChatGPT can replace missing expertise and raise overall capability.
As a result, they expect people to produce strong outputs without prior knowledge.
This leads them to believe performance will improve uniformly across the organization.
People rely on outputs they cannot judge
Leaders place less emphasis on building knowledge because they trust ChatGPT to fill in the gaps.
Without sufficient understanding, people cannot assess the accuracy or completeness of responses.
Consequently, they use outputs that appear correct but contain errors or gaps.
Reduced capability leads to uncritical acceptance of plausible outputs
Leaders cut back on capability building, which reduces people’s knowledge base.
As knowledge declines, people lose the ability to evaluate and challenge generated responses.
This causes them to accept plausible outputs as correct and act on them without verification.
Decision quality declines while leaders misread the cause
Leaders interpret the steady use of ChatGPT outputs as a sign of effective performance.
Because the resulting work appears structured and polished, they overlook the hidden errors embedded in those decisions.
This leads them to attribute failures to external factors or to ChatGPT itself rather than to their reduced investment in capability.
Note: We use the term “ChatGPT” as a shorthand for ChatGPT and similar tools such as Anthropic Claude, Google Gemini, Microsoft Copilot, and custom GenAI chatbots.
