Scroll through social media and you’ll find numerous videos with titles like “How to use AI to write your essay in 5 minutes” or “How to skip the readings with ChatGPT.” The discourse surrounding AI in education is deafening, and it is almost entirely consumed by one question: how? How do we write the perfect prompt? How should educators integrate ChatGPT into academic work, or detect its use?

This obsession with methods and mechanics is a dangerous distraction. By racing to master the “how,” we have skipped two far more critical, foundational questions: why should we use these tools in the first place, and when is it appropriate to do so?

Answering the “how” is a technical challenge. Answering the “why” and “when” is a philosophical one. Until educators and educational leaders ground their approaches in a coherent philosophical and theoretical foundation for learning, the integration of AI will be aimless, driven by novelty and efficiency rather than human development.

Two frameworks provide the lens we need to move beyond the hype and engage with AI responsibly: “virtue epistemology,” which holds that knowledge is not merely a collection of correct facts or a well-assembled product but the outcome of practising intellectual virtues, and an ethics of care, which prioritizes relationships.

Virtue over volume

The current “how-to” culture implicitly defines the goal of learning as the production of a polished output, such as a comprehensive report or a functional piece of code. From that perspective, AI is a miracle of efficiency. But is the output the point of learning?

Virtue epistemology, as championed by philosophers like Linda Zagzebski, suggests the real goal of an assignment is not the essay itself but the cultivation of the curiosity, intellectual perseverance, humility and critical thinking that the process is meant to instil.

This reframes the “why” of using AI. On this view, the only justification for integrating AI into a learning process is to support and sustain intellectual labour. A student who uses AI to brainstorm counterarguments for a debate is practising intellectual flexibility as part of that labour. A student who uses AI to map connections between theoretical frameworks for a research paper is deepening conceptual understanding through guided synthesis.

When AI undermines ‘why’

However, when the “how” of AI is used to bypass the very struggle that builds virtue (the exercise of intellectual labour, including analysis, deliberation and judgment), it directly undermines the “why” of the assignment. A graduate student who generates a descriptive list of pertinent research on a topic without engaging with the sources skips the valuable work of synthesis and critical engagement.

This stands in direct contrast to philosopher and educator John Dewey’s view of learning as an active, experiential process. For Dewey, learning happens through doing, questioning and grappling with complexity, not through the passive acquisition of information.

Assignments that reward perfection and correctness over process and growth further incentivize the use of AI as a shortcut, reducing learning to prompting and receiving rather than the intellectual labour of constructing meaning.

Care over compliance

If the “why” is about supporting human intellectual labour and fostering intellectual virtue, the “when” is about the specific, contextual and human needs of the learner. This is where an “ethics of care” becomes indispensable.
As philosopher Nel Noddings proposed, a care-based approach prioritizes relationships and the needs of the individual over rigid, universal rules. It moves away from one-size-fits-all policy and toward discretionary judgment.

The question “When is it appropriate to use AI?” cannot be answered with a simple rubric. For a student with a learning disability or severe anxiety, using AI to help structure their initial thoughts might be a compassionate and enabling act, allowing them to engage with the intellectual labour of the task without being paralyzed by the mechanics of writing. In this context, the “when” is when the tool removes a barrier to deeper learning.

Conversely, for a student who needs to develop foundational writing skills, relying on the same tool for the same task would be irresponsible. Deciding the “when” requires educators to know their learners, understand the learning goal and act with compassion and wisdom. It is a relational act, not a technical one. Educators must ensure that AI supports rather than displaces the development of core capabilities.

AI as mediator

This is also where we must confront historian and philosopher Michel Foucault’s challenge to the idea of the lone, autonomous author. Foucault argued that the concept of the author functions to make discourse controllable: to attach a name that can be held accountable.

Our obsession with policing students’ authorship, a “how” problem focused on originality and plagiarism, is rooted in this system of control. It rests on the convenient fiction of the unmediated creator, ignoring that all creation is an act of synthesis, mediated by language, culture and the texts that came before. AI is simply a new, more powerful mediator, one that makes this truth impossible to ignore.

This perspective turns an educator’s task away from policing a fragile notion of originality. The more crucial questions become when and why to use a mediator like AI: does the tool enable deeper intellectual labour, or does it supplant the struggle that builds virtue? The focus shifts from controlling the student to intentionally shaping the learning experience.

Reorienting AI through values and virtue

The rush to adopt AI tools without a philosophical framework is already leading us toward a more surveilled, less trusting and pedagogically shallow future. Some educational systems are pouring money into AI detection software when what is needed is investment in redesigned assessment.

Policies are emerging that require students to declare their use of AI, but disclosure is not the same as a meaningful conversation about intellectual virtue.

Answering the questions of why and when to use AI requires us to be architects of learning. It demands that we engage with thinking about learning and what it means to produce knowledge, through the works of Dewey, Noddings, Zagzebski and others, as urgently as we engage with the latest tech blogs. For educators, the responsible integration of AI into our learning environments depends on a commitment to cultivating a culture that values intellectual labour and understands it as inseparable from the knowledge and culture it helps generate.

It is time to stop defaulting to “how” and instead lead the conversation about the values that define when and why AI fits within meaningful and effective learning.

Soroush Sabbaghan receives funding from SSHRC.