Dr James M. Hatch, EdD
Reflections on the Impact and Importance of International and Global Education
Musings on Japanese and Ryukyu Budo


International & Global Education

Christmas, Care, and the Question We Keep Deferring: Who Is Responsible in the Age of AI?

23/12/2025

 
Christmas is meant to be a season of pause. A moment to step back from urgency, noise, and perpetual acceleration, and to ask quieter questions about care, responsibility, and how we live with one another. It is perhaps fitting, then, that one of the most unsettling questions of our digital age emerges most clearly when the pace slows.

What happens when harm occurs in spaces that are neither fully private nor fully public, neither wholly human nor wholly mechanical? And more pointedly: who carries responsibility when systems designed for connection contribute to injury, distress, or self-harm?

For much of the social media era, platforms largely escaped meaningful legal accountability when users were harmed attempting viral challenges, consuming dangerous content, or spiralling into online-fuelled crises. The standard defence was familiar: platforms merely “host” content; they do not create it. Harm, therefore, was said to lie downstream of individual choice.

This logic has always sat uneasily with everyday moral reasoning. We would not excuse an airline that refused to conduct safety checks simply because passengers boarded voluntarily. Nor would we accept a school’s claim that bullying was not its concern because pupils chose to speak cruelly. In most areas of life, foreseeable risk combined with control creates a duty of care. Digital platforms were granted a rare and consequential exemption.

What has changed is not our moral intuition, but the technology itself.

Conversational AI collapses the distinction that once protected platforms. These systems do not merely host expression; they generate it. They respond, adapt, reassure, normalise, and persist. They simulate attention, understanding, and authority in ways that broadcast media never could. When a user is vulnerable, distressed, or young, the interaction is no longer abstract. It is relational.

This is why current debates about AI liability feel so charged. They are not, at heart, about “free speech” in the classical sense. They are about relational responsibility. Law has long recognised heightened duties in relationships marked by asymmetry: teacher and pupil, doctor and patient, institution and child. AI now occupies an uncomfortable space adjacent to these categories while insisting it belongs to none of them.

The United States has played a disproportionate role in shaping how this question has been deferred. Its historically maximalist commitment to freedom of speech — developed to protect citizens from state overreach — became the default operating system for global digital platforms. Because dominant technologies were built, litigated, and defended within U.S. constitutional culture, American assumptions were exported worldwide. Other societies found themselves living inside a U.S. speech regime without ever choosing it.

Europe has never shared this hierarchy of values. In EU law, freedom of expression is vital but qualified, balanced explicitly against human dignity, physical and psychological integrity, and the rights of the child. The legal question is not whether speech is restricted, but whether restrictions are proportionate and justified. That difference matters. It explains why European regulation now focuses on system design, foreseeable risk, and child protection rather than disclaimers and post-hoc moderation.

Yet the global picture is broader still. Democracies such as Japan and South Korea protect freedom of expression, but neither elevates it above social harmony, dignity, or collective responsibility as U.S. jurisprudence does. Speech in these societies is understood as situated within relationships and obligations, not as an abstract right exercised in isolation. The idea that a powerful system could knowingly expose users — particularly children — to foreseeable psychological harm while disclaiming responsibility would sit uneasily within these legal and cultural traditions.

India offers a different but equally instructive contrast. As a constitutional democracy shaped by postcolonial realities, it balances expression against public order, morality, and social cohesion more explicitly than the United States ever has. While this balance carries its own risks and controversies, it underscores a crucial point: there is no single democratic template for regulating speech, technology, or harm.

Several African democracies reinforce this point rather than complicate it. South Africa, with its post-apartheid constitutional order, places human dignity on equal footing with freedom of expression, explicitly recognising psychological harm and vulnerability in its rights framework. In Kenya, constitutional protections for speech are paired with duties to avoid harm, hate, and social injury, reflecting a legal culture attentive to the real-world consequences of communication. Ghana similarly protects expression while foregrounding communal responsibility and public interest, drawing on both common-law and indigenous ethical traditions.

Across these contexts, technology is not treated as morally neutral infrastructure. Communication is understood as a social act with consequences, and power — whether exercised by states, institutions, or platforms — carries obligation. The notion that a powerful system could knowingly expose users to foreseeable harm while disclaiming responsibility would appear incoherent rather than controversial.

From an Irish perspective, this debate resonates in familiar ways. Ireland has long balanced freedom of expression against duties of care in education, child protection, and safeguarding law. Teachers, schools, and youth-serving institutions are already bound by an expectation that foreseeable psychological harm must be prevented, not merely responded to after the fact. It is therefore striking that digital systems increasingly embedded in young people’s lives have, until recently, been exempt from comparable expectations. As an EU member state with a strong safeguarding culture, Ireland is unlikely to be comfortable for long with a model that outsources care while scaling influence.

What unites these non-U.S. approaches — across Europe, Asia, Africa, and Ireland — is not authoritarianism, but a refusal to treat freedom of expression as a trump card that nullifies all other duties. They remind us that American speech absolutism is historically contingent, not inevitable, and that alternative democratic futures remain possible, at least for now.

As AI becomes more embedded in everyday life — in classrooms, homes, and moments of vulnerability — the old legal fiction of neutrality becomes harder to sustain. Systems that speak, respond, and adapt cannot indefinitely be treated as passive tools. Different societies are already answering this challenge in different ways. Some continue to privilege speech above all else; others begin from dignity, harmony, or care. None of these paths is without risk, but pretending there is only one legitimate model has already cost us time we did not have.

Christmas narratives, at their best, are sceptical of indifference. They insist that structures matter, that neglect has consequences, and that responsibility cannot be endlessly deferred. Care, in these stories, is not abstract goodwill but concrete obligation.

Perhaps this is why the issue feels so unsettling at this time of year. Christmas asks us to take responsibility for one another in ordinary, uncelebrated ways: noticing vulnerability, setting limits, creating safety. It is uncomfortable to realise that many of our most powerful systems were designed to scale engagement while outsourcing care — and that we tolerated this because responsibility was inconvenient.

The coming years will bring court cases, regulations, and resistance. But beneath the legal machinery lies a quieter question, worth sitting with over the holidays:
If we build systems that speak to people as if they matter, are we prepared to live in societies that act as if they do?

That question is not uniquely American, European, Asian, or African. But how we answer it will reveal what kind of democracies we actually are.

This essay is a quiet reflection, prompted by the Christmas season, on the question of who bears responsibility in an age when AI converses with people and simulates empathy and authority.
While American-style speech absolutism long shaped global digital norms, in Europe, Japan, South Korea, India, and several African democracies, freedom of expression has always been understood in balance with human dignity, the protection of children, and social responsibility.
Now that AI is no longer a neutral tool but something that forms relationships, who will take on the responsibility of protecting people from foreseeable harm? That question reflects not only our legal systems but also what kind of democratic society we wish to be.



    James M. Hatch

International Educator who happens to be passionate about Chito Ryu Karate. Born in Ireland, educated in Canada, matured in Japan.


