FEBRUARY 28 — Artificial Intelligence (AI) is no longer a distant concept confined to the realm of science fiction. It is here, shaping how we live, work, and interact. From Facebook’s facial recognition and personalized Netflix recommendations to medical diagnostics and autonomous vehicles, AI has embedded itself into the very fabric of modern life. But as we marvel at its capabilities, a crucial question arises: Whose interests does AI serve? If left unchecked, will AI become a tool that benefits only a select few, or can it be harnessed for the common good of humanity?

The idea of humanistic AI is gaining traction — a vision where AI is not merely driven by economic incentives or technological prowess but is developed and deployed with ethical considerations at its core. This means prioritizing fairness, transparency, accountability, and, most importantly, the well-being of people.

Historically, technology has always been a double-edged sword. The printing press democratized knowledge, but propaganda soon followed. The internet connects the world, yet misinformation flourishes across it. AI is no different. If guided by the right principles, it can uplift our lives; if misused, it can widen inequality, reinforce biases, and erode privacy.

This dilemma was explored extensively by Isaac Asimov in his Robot series, where the Three Laws of Robotics were designed to ensure that machines served humanity without causing harm. Yet, even within those rules, Asimov demonstrated the ambiguity and ethical dilemmas that arise when AI is forced to navigate human complexities.

Many believe AI development is the sole responsibility of scientists, engineers, and technocrats. However, the impact of AI is too vast for it to be left in the hands of a few. Every individual — whether a student, a healthcare worker, an entrepreneur, or a retiree — has a stake in how AI evolves. Just as democracy thrives on civic engagement, responsible AI development requires public awareness and participation.

Much like the psychohistory of Asimov’s Foundation series, AI rests on the premise of predictive modelling, where patterns of human behaviour guide the future. But what happens when those patterns reinforce societal biases instead of correcting them? AI often operates as a “black box,” where decisions are made without clear explanations. If an AI-driven medical system makes a wrong diagnosis, shouldn’t patients and doctors have the right to know why? AI learns from data, and if the data is biased, so are the outcomes. Consider how AI-powered hiring systems have, in some cases, discriminated against women and minorities. Ensuring diverse, inclusive, and representative data is not just a technical issue — it is a societal and ethical one.

Governments and corporations must implement policies that align AI with human values. Ethical guidelines should not be mere suggestions but enforceable frameworks that protect public interests. At the same time, the digital age demands that we not only consume technology but also understand its implications. Just as financial literacy helps people manage money, AI literacy empowers us to engage critically with the systems that influence our lives.

AI has tremendous potential to solve real-world challenges, from detecting diseases early to combating global climate change.
Supporting initiatives that use AI for education, sustainability, and healthcare can ensure that its benefits reach all of humanity, not just the privileged few.

Technology, in itself, is neither good nor bad — it is how we choose to wield it that matters. A hammer can be used to build a home or to cause harm. AI is no different. Its development must be guided not only by technical expertise but also by wisdom, empathy, and a deep sense of social responsibility.

Asimov’s The Evitable Conflict depicted a world where AI-controlled systems govern economies and global decisions, appearing infallible while subtly guiding humanity’s course. It raises an uncomfortable question: “Do we want AI that makes choices for us, or AI that helps us make better choices?”

Perhaps the most important question we should ask is not what AI can do, but what it should do. Can we create AI systems that amplify our shared humanity rather than diminish it? Can we ensure that AI serves as a bridge between generations, cultures, and communities rather than a tool for division?

If AI is to be the defining technology of the 21st century, then we must ensure that it reflects the best of humanity. The responsibility does not rest solely with scientists or policymakers — collective wisdom is crucial. Thus, as we stand at this technological crossroads, we must ask ourselves: Are we shaping AI, or is AI shaping us?

* Ng Kwan Hoong is an Emeritus Professor of Biomedical Imaging at the Faculty of Medicine, Universiti Malaya, Kuala Lumpur. A 2020 Merdeka Award recipient, he is a medical physicist by training but also enjoys writing, drawing, listening to classical music, and bridging the gap between older and younger generations. He can be reached at ngkh@ummc.edu.my

** This is the personal opinion of the writer or publication and does not necessarily represent the views of Malay Mail.