Leaders introduce ChatGPT in isolated steps and create rework across the process
Leaders expect ChatGPT to improve outcomes when they add it to isolated steps, yet overall performance stays flat. When use remains fragmented, outputs misalign across steps, rework accumulates, and the local gains never show up in the end result.
Belief that step-level gains will improve the whole
Leaders decide to introduce ChatGPT in isolated steps, expecting overall performance to improve.
Because they focus on individual steps, they assume local efficiency gains will naturally add up to a faster process.
As a result, they expect better outcomes without changing how people connect their work across steps.
Fragmented use produces disconnected outputs
Leaders allow people to use ChatGPT independently within their own steps without coordination.
Across the process, people apply it inconsistently and produce outputs that follow different structures and assumptions; one step may hand off free-form prose, for example, while the next expects structured, decision-ready input.
Consequently, work does not flow smoothly, and overall performance remains unchanged despite visible local improvements.
Misaligned outputs force rework across steps
Leaders introduce ChatGPT at isolated points without aligning how outputs must connect across steps.
Because each step generates outputs without shared expectations, the next step cannot directly use what it receives.
This forces people to reinterpret and rework what they receive, which cancels out the initial efficiency gains.
Leaders misinterpret stagnation and misattribute failure
Leaders observe that overall performance does not improve when ChatGPT is introduced in isolated steps.
Instead of linking the stagnation to misaligned outputs across steps, they attribute the problem to how people use the tool.
As a result, they push for greater use or stricter controls, which reinforces the same misalignment and sustains poor outcomes.
Note: We use the term “ChatGPT” as a shorthand for ChatGPT and similar tools such as Anthropic Claude, Google Gemini, Microsoft Copilot, and custom GenAI chatbots.
