While there was never any doubt, it is now abundantly clear that "we are what we repeatedly do, and excellence is not an act but a habit." Nor is it surprising that modern technologies such as Artificial Intelligence (AI) and Machine Learning, particularly GenAI, are atrophying the human ability to do things independently.
According to a research paper published by experts at Carnegie Mellon University and Microsoft, while GenAI can improve worker efficiency, it can also inhibit critical engagement with work, potentially leading to long-term overreliance on the tool and diminished skills for independent problem-solving.
The study surveyed 319 knowledge workers to understand when and how they use GenAI and how it affects the effort they invest in critical thinking. Respondents were asked how confident they were in the ability of GenAI tools to perform a specific task, how confident they were in evaluating the tools' output, and how confident they were in their own ability to complete the same task without AI. In all, the study reviewed 936 examples of GenAI usage shared by the respondents.
Reflecting on their findings, the researchers note, “Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship.”
This should not come as a surprise. Lisanne Bainbridge, a cognitive psychologist, warned in her paper 'Ironies of Automation', presented at a 1982 Conference on Analysis, Design, and Evaluation of Man-Machine Systems, that "mechanising routine tasks and leaving exception-handling to the human users can deprive them of routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise."
This shift is particularly significant in the digital world, where AI-driven automation is revolutionising almost everything. While AI can undoubtedly boost efficiency and reduce human workload, there is a possibility that professionals may gradually lose their deep problem-solving capabilities, especially in crises, as they become over-dependent on AI-generated insights.
Take, for example, the role of network engineers and cybersecurity analysts. While AI-driven tools automate threat detection and network troubleshooting, human oversight remains crucial for understanding, validating, and responding to complex challenges that AI alone cannot resolve. If these professionals stop engaging in critical thinking due to over-reliance on AI, the industry could face a talent gap as expertise erodes over time.
The solution: a balanced approach. Enterprises must ensure that GenAI is used only to complement human intelligence, not to replace critical thinking. Organisations should also adopt AI-assisted decision-making frameworks in which professionals are encouraged to engage in verification, validation, and strategic oversight rather than passively accepting AI outputs. In short, organisations need a strategic approach that makes AI an enabler of, rather than a substitute for, human judgment.
Remember what the legendary tennis player Andre Agassi said: “If you do not practice, you do not deserve to win.”
shubhendup@cybermedia.co.in