Friday, 26 December 2025

AI and Human Judgement: Why Boundaries Matter



BY DATIN SHALINI AMERASINGHE GANENDRA

I write about culture, visuality and ethics. In recent years, as the evolution of Artificial Intelligence has touched these spheres, I have grown increasingly uneasy about AI’s trajectory, though I celebrate its role as a research assistant guided by human expertise in the form of prompts.

Though I am not a technologist and do not claim expert knowledge of machine-learning models, such expertise is not needed to see that the choices we make today will shape the everyday lives of generations. Frankly, I am happy to use search engines to inform my thinking, but I resist being told by non-human systems how to form my views or express them. AI is, for me, an assistant only. I always lead.

2025 AI Winners with King Charles III, St James’s Palace, November 5, 2025. Courtesy QEPrize/J Alden

I had the pleasure of being a guest at this year’s Queen Elizabeth Prize for Engineering (QEPrize) ceremony in London, thanks to my husband, Dato Ir. Dr. Dennis Ganendra, a leading Fellow of the Royal Academy of Engineering (FREng).

The QEPrize is often called the discipline’s equivalent of a Nobel, and this year it brought together seven of the most influential figures in AI: Professor Geoffrey Hinton, Jensen Huang, Bill Dally, Fei-Fei Li, John Hopfield, Yann LeCun and Yoshua Bengio. As Prof. Hinton observed at dinner, it is remarkable that this was the first time all seven had been in the same room, given that their collective impact on society is immense. Let us hope they continue to engage personally with one another and to consider holistically the development of AI, so that it advances, rather than replaces, humanity. Please, help steer AI toward ethical, value-based development. Here lies the focus of my call to consider and to act.

In less than a decade, AI has moved from the fringes of public awareness to a tool widely used in offices, schools and government departments. It drafts text, analyses vast volumes of data and influences decisions once made by individuals. The pace of change is extraordinary, and so is the responsibility. Professor Hinton has warned that if AI systems become far more capable while remaining indifferent to human wellbeing, we may find ourselves displaced by our own inventions. Here, he is not predicting a revolt of machines but cautioning against human inattention. We still control how much authority we delegate, so let us not give that control away in pursuit of the financial gain of a very few.

Echoing such sentiments at the 2023 AI Safety Summit at Bletchley Park, King Charles III described AI as one of the most consequential technological developments in history, urging governments, civil society and industry to shape it responsibly.

2025 QEPrize Award. Courtesy QEPrize/J Alden

Pope Francis made a similar appeal, reminding the world that empathy and compassion cannot be replaced by processing power; he preferred the term “machine learning” to focus on function rather than mystique. Pope Leo XIV recently described AI as an “exceptional product of human genius”, but stressed that it must remain a tool and not a substitute for human intelligence, moral judgement or spiritual wisdom. He emphasises that true human intelligence and wisdom involve openness to truth, goodness and contemplation, rather than the data processing that is central to machine learning.

AI has no desires or intentions of its own. Risks arise entirely from how it is built and deployed. Without limits, systems pursue objectives set by humans, whether commercial, political or personal, without any inherent sense of fairness. Efficiency alone cannot determine who is hired, who receives credit, whose privacy is respected, or how public opinion is shaped.

Early warning signs are already visible. Recruitment tools amplify bias. Deepfakes distort public debate. Surveillance technologies expand without consent. Behavioural design nudges citizens in ways they may not notice, or notice only when it is too late. None of this is driven by the malice of a machine; it arises from optimisation without ethical context. We have already seen the good and the bad, including ChatGPT acting as confidante and counsellor, with mixed results that have extended to deaths and litigation.

Other winners of the 2025 QEPrize fortunately echo these sentiments. Fei-Fei Li (Stanford) has argued that AI should deepen our understanding of humanity rather than obscure it. Yoshua Bengio has called for stronger global governance and for investment in safety and alignment research that advances at the same pace as capability. Hearteningly, these views offer clear guidance: machine learning should support human judgement, not replace it.

Even ChatGPT recognises its limits. Asked whether it poses a threat to human rights or development, it replied: “I don’t have intentions, desires, or agency. So, I can’t want anything, nor can I threaten anyone by myself. The real question is not whether I (as an AI system) am a threat – but whether the way humans use AI could undermine fundamental rights, social structures, or human development.”

Applicable international and national legal frameworks are emerging too, though no dedicated global treaty on AI yet exists. The Council of Europe’s 2024 Framework Convention on AI requires risk assessments and human-rights safeguards. UNESCO’s 2021 Recommendation on the Ethics of AI emphasises fairness, transparency and accountability. The OECD principles, Convention 108 on data protection, the Toronto Declaration on machine-learning discrimination, and recent UN resolutions on safe and trustworthy AI all point in the same direction. A framework already exists to build on: one that creates an iterative dynamic connecting ethics and technical advancement, founded on the belief that the technical must serve only to improve the human condition, never to replace it.

The greatest risk is not sentient machines but the gradual erosion of human agency. We do not need superintelligence to lose control. A sequence of automated decisions accepted without scrutiny can weaken our capacity for choice. If we define shared values now, we retain the ability to shape outcomes. Delay, and systems may become too embedded to amend.

Datin Shalini Amerasinghe Ganendra

This is not a contest between humans and machines. Technology should serve human interests, but that outcome will not happen by chance. It requires boundaries, clarity and leadership that place human judgement above automated efficiency. So, what are these guiding values amid the myriad competing profit interests? To govern AI responsibly, policymakers, engineers and society must uphold three commitments as core values: 1) protect human dignity and agency; 2) ensure transparency with clear accountability; and 3) prioritise safety and fairness before scale. Though these values may seem trite, they are defining, and constant reference to them will help us focus on what really matters. We do.

Datin Shalini Amerasinghe Ganendra DSG is a globally engaged advisor and cultural policy leader whose work sits at the intersection of visual culture, governance and sustainability. She is an Adjunct Professor at UNIMAS, Institute of Borneo Studies, and an Honorary Associate at SOAS, University of London.
