Published on July 23, 2025 5:55 AM GMT
AI tools are transforming how children learn — but will they still know why to be good?
“The highest education is that which does not merely give us information but makes our life in harmony with all existence.” - Rabindranath Tagore
As AI tools rapidly transform classrooms, this vision of education feels increasingly distant. Are we drifting further from this original goal in the age of AI?
With the ongoing developments in the AI industry, there has been considerable discussion and research in AI education policy about integrating AI into the schooling of children.
However, little work has been done to understand the long-term moral consequences of children growing up with AI. The question is not only about intellect, intelligence, and academic integrity, or about protecting children from harms like data theft and bias, but also about how students’ core principles of justice, truth, honesty, respect, empathy, and reasoning about right and wrong are affected. It is possible that these are not much affected (compared to ten years back, of course!), and children today may even be more aware of social problems than we were. But neither possibility should stop us from asking whether, in the end, their education will make them good individuals.
Sigal Samuel wrote in one of her articles at Vox, “To save the humanities, we need to rethink our assumptions about AI and education. AI is removing cognitive friction from education. We need to add it back in.”
What Current Research Tells Us
Recent studies and initiatives across Asia reflect growing interest in AI-enhanced education. A 2025 paper by Gupta et al. gathered the views of Indian students on the current state and future prospects of AI integration in their education, and on the actions needed to fully leverage its benefits.
The Chinese University of Hong Kong similarly evaluated how a pre-tertiary AI curriculum can improve students’ perceived competence in, and motivation towards, AI while fostering teacher autonomy.
In rural India, Goyal et al. (2025) highlighted that while LLMs can personalize learning and aid teachers, challenges remain around training, infrastructure, and ethical use.
These issues were also central at the 2025 PadhAI Conclave in New Delhi, where policymakers emphasized adaptive learning, equitable access, and redefinition of higher education.
One of the ministers also mentioned, “Our vision for Delhi is AI for all. It is about using technology to democratize education, break barriers, and create opportunities for all students.”
But as essential as equitable access, leveraging the benefits of AI in education, and the vision of democratizing education in Delhi are, so too is the question of whether there has been, or will be, any change in children’s moral development in the post-ChatGPT era, now that the non-technical world has been given effortless access to AI tools.
Admittedly, work by AI ethicists does address some concerns around academic integrity, data privacy, and awareness among teachers and students.
A literature review by the University of Hull, UK, explored the primary challenges in K-12 AI ethics and examined the Ethical Framework for AI in Education developed by the Institute for Ethical AI in Education (UK), highlighting the role of governments in promoting ethical AI understanding in schools. Similarly, the European Commission has argued that teachers must understand the potential and risks of AI and big data in education.
Furthermore, Adams et al. (2023) identified four new ethical principles unique to K-12 education, namely pedagogical appropriateness, children’s rights, AI literacy, and teacher well-being, in addition to the core AI ethics principles of transparency, privacy, and fairness.
Tan et al. (2024) also sought to show that the ethical use of generative AI (GAI) in education not only preserves but can even enhance academic integrity.
However, these AI ethics measures remain insufficient for understanding, let alone shaping, the landscape of humanity in the near future.
Ethics Alone Is Not Enough
“Education aims to impart ‘phronesis’, i.e., practical wisdom. Education’s main aim should be cultivating moral skills, not just capitalistic knowledge.” - as per Aristotle’s teachings
There has been some discussion of the intersection of social-emotional learning (SEL) and AI. An EdWeek article by Arianna Prothero described how AI and SEL are on a collision course. Social-emotional skills such as emotion management, impulse control, responsible decision-making, perspective-taking, and empathy have become more crucial than ever for navigating a new online reality in which chatbots and apps are becoming students’ primary source of friendship, relationships, and companionship.
Thinking back five to ten years, many people around me, myself included, often say that time and difficult or painful experiences have taught them a lot, which not only made them more mature but also broadened their perspective, not just on their various relationships but on humanity as a whole. If children start escaping challenging situations from an early age by taking shelter in AI, we may not yet grasp what damage that could do in the near future. Psychological research has found that children exposed to traumatic or violent situations early in life tend to replicate those behaviors later on. Perhaps early reliance on AI might likewise create a mental burden that will only be reflected in the future.
But again, is social-emotional learning enough? Moral and character education are equally important in developing “good” individuals. This EdGate article highlights how SEL and character education may seem the same but have some key differences.
Moral education is the process of helping students develop a sense of right and wrong, guiding them to think ethically, act responsibly, and cultivate values like honesty, compassion, and justice.
Moral education is about thinking and reasoning about right and wrong.
Character education is about practicing the virtues that align with right and wrong.
SEL is about managing the emotions and relationships that influence how we act and treat others.
The three overlap, but their intent, methods, and roots differ slightly.
Where Do We Go From Here?
“The breakneck pace of technological innovation means they (students) are going to have to choose, again and again and again, how to make use of emerging technologies — and how not to. The best training they can get now is training in how to wisely make this type of choice.” - Sigal Samuel in her Vox article.
As we invest in integrating AI into classrooms, we must also (re)invest in something far less flashy, but far more foundational: moral education.
In a world of intelligent machines, we still need wise, kind, and principled humans. And that begins in the classroom.
--------------------------------
This is a reflection on an under-explored topic — how AI integration in classrooms might be shaping not just learning outcomes but moral development. It’s meant to spark discussion, not offer conclusive answers. I’d appreciate any critique, counterexamples, or further reading suggestions!