Sporadic use breaks feedback loops and keeps teams at basic performance
People expect steady improvement, but performance with ChatGPT stalls instead. ChatGPT builds capability only when repeated use creates a feedback loop that users can act on immediately. When usage stays sporadic, that loop never forms, behavior never changes, and results stay flat.
An employee drafts a client proposal with ChatGPT
Leaders believe that an employee who occasionally drafts sections of a client proposal with ChatGPT will steadily improve over time. They assume that each interaction adds to the employee’s skill set and builds lasting capability. They expect that even isolated uses will accumulate into better judgment and stronger prompts. They also assume that exposure alone lets the employee generalize from one proposal to the next. They conclude that repeated use is helpful but not required, because each use already makes progress.
The employee produces similar proposal sections each time
The employee uses ChatGPT sporadically when drafting sections of a client proposal and produces outputs of similar quality each time. The employee repeats the same prompt patterns because no prior interaction informs the next one. The employee does not refine structure or argumentation because there is no comparison between attempts. The employee completes isolated sections but does not improve the full proposal workflow. The observable result is stable but low performance that does not evolve from one proposal to the next.
The employee cannot build a feedback loop while drafting
When the employee drafts a proposal section with ChatGPT only occasionally, the interaction ends without immediate reuse of its output. Because the output is not reapplied immediately, the employee cannot compare it with a revised attempt. Without that comparison, the employee cannot see which prompt change leads to a better result. Without seeing that link, the employee does not adjust behavior in the next attempt. This break in the feedback loop prevents the employee from forming a stable pattern of effective prompting.
Decision makers interpret flat proposal quality as a tool limitation
Decision makers observe that proposal quality does not improve despite access to ChatGPT and infer that the tool has limited value. They compare employees and find large performance gaps, which they attribute to individual talent rather than usage patterns. They conclude that additional training will fix the issue, assuming knowledge is missing. They continue to allow sporadic use, which preserves the broken feedback loop. This misinterpretation locks the team into uneven performance and prevents systematic improvement.
Occasional use prevents improvement in proposal quality
Leaders expect improvement from occasional use, but ChatGPT drives better proposals only when continuous use creates an immediate feedback loop that lets employees compare outputs, adjust prompts, and reinforce effective patterns.
Note: We use the term “ChatGPT” as a shorthand for ChatGPT and similar tools such as Anthropic Claude, Google Gemini, Microsoft Copilot, and custom GenAI chatbots.
