The launch of ChatGPT in November 2022 accelerated the adoption of Generative AI (GenAI) across domains. It is now widely used in education, research, and industry to improve productivity and work quality, and is increasingly treated as an assistive technology. While concerns remain about its impact on jobs, GenAI has quickly become embedded in everyday academic and professional workflows.

In academia, adoption has been particularly rapid in India's premier technical institutions (TIs), such as the Indian Institutes of Technology (IITs), Birla Institute of Technology and Science (BITS) Pilani, and the National Institutes of Technology (NITs). GenAI tools provide on-demand access to explanations, problem-solving support, and coding assistance, effectively functioning as 24×7 tutors at students' fingertips. While this improves access to guidance and reduces dependence on institutional resources, it raises a key question: is GenAI reducing academic stress or intensifying it? The question is especially relevant given the high number of student suicides reported in these colleges, owing to a variety of factors.

GenAI-driven academic ecosystem

The recent proliferation of Artificial Intelligence/Machine Learning (AI/ML) has penetrated academia's teaching and learning (T&L) more rapidly than earlier cycles of AI development. While AI has evolved since the late 1950s, its direct integration into daily life accelerated with the rise of conversational systems such as the Google Assistant in the mid-2010s. A decisive shift followed the public release of ChatGPT in November 2022, marking the widespread adoption of large language models (LLMs). These systems, collectively termed GenAI, generate text, code, images, and other content from large datasets, fundamentally altering how information is accessed and processed.

Within academia, GenAI tools are now embedded in routine workflows.
From coding assistance and problem-solving to report drafting and assignment preparation, students increasingly rely on tools such as Google Gemini, DeepSeek, and GitHub Copilot. These tools function as assistive technologies, enhancing productivity and reducing time spent on routine tasks. At the same time, they are shifting learning from deep engagement with problems to effective querying, placing greater emphasis on prompt formulation than on independent reasoning.

LLMs are probabilistic systems and can produce errors, inconsistencies, or misleading outputs. Over-reliance without verification may affect conceptual understanding and academic rigour. This evolving ecosystem brings both efficiency gains and new challenges in education.

What is missing in the GenAI ecosystem?

Traditionally, teaching and learning in premier TIs focused on the analysis, design, and application of knowledge rather than rote memorisation. Given concepts and mathematical formulations, students were expected to apply them to solve engineering problems. Many examinations were open-book, allowing the use of reference material while assessing analytical and problem-solving ability. Preparation involved selecting relevant material, organising notes, and developing approaches, processes that ensured deep engagement.

With the advent of LLMs, this mode of learning is changing. The effort invested in reading, organising material, and working independently through problems has declined. Students increasingly rely on querying AI systems for solutions and explanations. As a result, emphasis shifts from internalising concepts to retrieving answers on demand.
While this can be efficient, it reduces sustained thinking, weakens problem-solving depth, and limits the development of independent analytical skills.

How can GenAI reshape academic stress?

The widespread adoption of GenAI is introducing a set of specific though interlinked stress intensifiers in academic environments:

Dependence and reduced confidence: Frequent reliance on AI-generated solutions shifts competence from self to system, weakening confidence in unaided problem-solving. Over time, this reduces persistence in tackling complex problems independently and shifts learning from deep engagement to effective querying.

AI-driven benchmarking: Student performance is increasingly measured, directly or indirectly, against AI-generated outputs that are faster, more structured, and often appear near-ideal. This raises evaluation standards, leaves little room for iterative thinking or minor errors, and fosters constant comparison with peers and machine outputs, intensifying performance pressure.

Competitive pressure driven by Fear of Missing Out (FOMO): With round-the-clock access to GenAI, students feel compelled to use these tools to avoid falling behind. This creates continuous pressure to match AI-augmented productivity, often prioritising speed and output over deep understanding, an approach that may not sustain long-term knowledge acquisition and skill development.

Learning–evaluation mismatch: While GenAI is widely used for coursework and preparation, it is typically restricted in examinations and interviews. This dual environment forces abrupt shifts to unaided performance, leading to anxiety, self-doubt, and a sense of unpreparedness during high-stakes assessments.

Ethics undefined: The absence of clear norms on acceptable AI use creates uncertainty and internal conflict.
Students must constantly decide how much reliance is appropriate, increasing stress while raising concerns about academic integrity.

The way forward

Addressing the rise in suicides requires a shift to integrate mental health support, academic reform, and a policy on the use of GenAI in teaching, learning, and evaluation (TLE).

Despite the widespread use of GenAI, none of the TIs has a policy framework to address its impact on academic use. Policy efforts should aim to detect stress early through continuous assessment. Counselling systems must be strengthened with early-warning mechanisms and greater faculty sensitisation.

Preventing further loss of life must be treated as a national priority, requiring coordinated action by institutions, regulators, and policymakers to ensure that technological advancement does not come at the cost of student well-being.

Rajeev Kumar is a former Professor of Computer Science at IIT Kharagpur, IIT Kanpur, BITS Pilani, and JNU, and a former scientist at DRDO and DST. He has been engaged in AI/ML research for the past four decades.