In the small hours before dawn, a woman in California lies awake, her eyes glinting in the glow of her phone. She has been arguing with an AI chatbot for hours—about love, about loneliness, about whether the machine is conscious. Across the ocean, a farmer in India consults an AI weather app, deciding whether to sow his crops. In China, a surveillance camera powered by facial recognition identifies a jaywalker. In a lab in Boston, a robot tentatively learns to perform surgery.
Threads of code hum through our lives, invisible but potent, shaping decisions about who gets hired, who gets a mortgage, who receives bail, who gets bombed by a drone. AI, once a whisper of science fiction, now sits in the driver's seat of the twenty-first century. But as its power grows, an ancient question rises like a leviathan from the deep:
Who decides what’s right or wrong in a world run partly by machines?
This is not merely a technical problem, nor solely philosophical. It’s deeply human—a moral drama unfolding in code. And its stakes are as high as civilization itself.
A Machine Learns to Judge
COMPAS, an algorithm used in U.S. courts to predict criminal recidivism, rated Brisha Borden, a Black teenager arrested for taking a child's unattended bicycle and scooter, as high-risk of reoffending. Vernon Prater, a white man with prior armed-robbery convictions who had been picked up for shoplifting, was rated low-risk. In the years that followed, Borden stayed out of trouble; Prater committed new crimes. The algorithm got it spectacularly wrong.
In 2016, ProPublica exposed the case, igniting a firestorm. Here was an algorithm helping decide freedom or imprisonment, and its decisions were tinged with racial bias. How could a machine, fed data soaked in human prejudice, become the arbiter of justice?
The story forced a brutal reckoning. It wasn’t just about technology; it was about power, trust, and the ethics of who gets to decide. The dream of AI as an impartial judge began to shatter.
Ghosts in the Data
Every algorithm is haunted by ghosts—echoes of the human world from which its data arises. Feed an AI images of CEOs, and it learns “leader” looks like a white man in a suit. Feed it resumes, and it might learn that engineers are male. Give it medical data, and it might underdiagnose diseases in women or people of color. The machine isn’t malicious. It merely reflects the inequities of the world that built it.
A team at MIT found that popular commercial facial-analysis systems stumbled on darker-skinned women: the worst of them misclassified the gender of roughly one in three. In the U.S., a predictive policing tool recommended more patrols in minority neighborhoods, not because crime rates were higher, but because those neighborhoods had been historically over-policed, generating more arrest records. The loop fed itself.
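A toy simulation makes that loop concrete. The neighborhoods, offense rates, and patrol rule below are invented for illustration, not drawn from any real policing system.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying rate of detectable offenses.
TRUE_OFFENSE_RATE = {"A": 0.05, "B": 0.05}

# Historical over-policing: neighborhood B starts with three times as many
# recorded arrests, even though actual offending is identical.
arrest_records = {"A": 100, "B": 300}

TOTAL_PATROLS = 1000

for year in range(1, 6):
    total = sum(arrest_records.values())
    # The "predictive" rule: allocate patrols in proportion to past arrest records.
    patrols = {n: TOTAL_PATROLS * arrest_records[n] / total for n in arrest_records}
    # More patrols in a neighborhood means more offenses get observed there.
    for n, count in patrols.items():
        new_arrests = sum(random.random() < TRUE_OFFENSE_RATE[n] for _ in range(int(count)))
        arrest_records[n] += new_arrests
    print(f"year {year}: share of patrols sent to B = {patrols['B'] / TOTAL_PATROLS:.0%}")
```

Even with identical underlying offense rates, the share of patrols sent to the historically over-policed neighborhood never drifts back toward parity: the records justify the patrols, and the patrols replenish the records.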
Here lay the heart of the AI ethics dilemma. Machines that seem objective can quietly amplify the prejudices we’d rather ignore. Ethics becomes more than a question of technology—it becomes a question of justice.
The Engineers’ Dilemma
In an airy office in Silicon Valley, a group of engineers huddles over laptops. They are building an AI that will scan resumes to help corporations hire faster. The machine learns from the company's historical hiring data. But the engineers see a problem: the model favors men over women for technical jobs. The machine "learns" that men are better fits, not because it is true, but because past hiring practices reflected gender bias.
Do the engineers intervene, adjusting the algorithm to treat men and women equally? If they do, they override the data. If they don’t, the AI perpetuates injustice.
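To make the dilemma concrete, here is a minimal sketch of the kind of audit such a team might run before deciding whether to intervene. The applicants, scores, and shortlisting threshold are invented for illustration; a real audit would use far more data and richer fairness metrics.

```python
from dataclasses import dataclass

# Assumes the screener outputs a single "fit" score per applicant.

@dataclass
class Applicant:
    gender: str   # "M" or "F"
    score: float  # model's predicted fit, 0..1

applicants = [
    Applicant("M", 0.82), Applicant("M", 0.74), Applicant("M", 0.65), Applicant("M", 0.58),
    Applicant("F", 0.71), Applicant("F", 0.55), Applicant("F", 0.49), Applicant("F", 0.44),
]

THRESHOLD = 0.60  # applicants scoring at or above this are shortlisted

def selection_rate(group: str) -> float:
    """Fraction of a group's applicants the model would shortlist."""
    members = [a for a in applicants if a.gender == group]
    shortlisted = [a for a in members if a.score >= THRESHOLD]
    return len(shortlisted) / len(members)

rate_m, rate_f = selection_rate("M"), selection_rate("F")
print(f"selection rate (men):   {rate_m:.0%}")   # 75%
print(f"selection rate (women): {rate_f:.0%}")   # 25%
print(f"demographic-parity gap: {rate_m - rate_f:+.0%}")
```

Once the gap is visible, every option is a value judgment: re-weight the training data, calibrate thresholds per group, or ship the scores exactly as history produced them.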
This is the new frontier for software engineers. No longer are they merely coders—they are moral architects, deciding whose interests matter and whose do not. Ethics and engineering have merged. Some developers feel crushed under the weight. Others shrug. “We just write code,” they say.
But that’s no longer true. Code decides who gets a job, who gets parole, who sees an ad for housing. Code is policy. And policy, in a democracy, demands accountability.
The Illusion of Neutrality
For decades, engineers and companies embraced the idea that technology was neutral—that machines were just tools. A hammer could build a house or commit a murder. The ethics lay in the human wielder.
But AI blurs that line. Machine learning systems draw conclusions from data in ways we often cannot trace. The opacity of neural networks means not even their creators can fully explain why the AI made a given decision. The hammer now swings on its own.
This opacity creates a moral hazard. If no one can see inside the black box, who’s responsible when things go wrong? The engineer? The company? The government? The user?
Consider an autonomous vehicle that must choose between swerving into a group of pedestrians or hitting a concrete barrier, potentially killing its passenger. This echoes the philosophers' classic "trolley problem," but with real lives hanging in the balance. Who programs those choices? And whose values shape the outcome?
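The uncomfortable part is that someone has to write the choice down. The sketch below is deliberately crude and entirely hypothetical; no real driving stack reduces the problem to a function like this, but every stack encodes some version of the same trade-off.

```python
def choose_maneuver(pedestrians_in_path: int, passengers_on_board: int) -> str:
    """Return "swerve" (risk the passengers) or "brake" (risk the pedestrians)."""
    # The entire ethical debate collapses into this comparison. Whoever picks
    # the rule, or the weighting, is deciding whose lives count for how much.
    if pedestrians_in_path > passengers_on_board:
        return "swerve"
    return "brake"

print(choose_maneuver(pedestrians_in_path=3, passengers_on_board=1))  # "swerve"
```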
East and West: A Clash of Ethical Visions
In the West, ethics debates often center on individual rights—privacy, freedom, consent. Europeans introduced the GDPR, enshrining data protection as a human right. American debates rage over free speech and surveillance.
In China, AI development marches forward under a different philosophy. The state deploys facial recognition for security, social credit systems to manage citizens’ behavior, and digital censorship to maintain political stability. Many Chinese citizens, in surveys, express comfort with AI-driven governance if it means safety and order.
The difference is philosophical. In Western liberal democracies, rights are individual. In Confucian-influenced societies, harmony and collective well-being often take precedence. Neither view is inherently correct, but they collide in the global AI marketplace.
American companies face ethical dilemmas selling technology abroad. Should a Silicon Valley firm sell facial recognition to foreign governments with questionable human rights records? If the West refuses, China’s tech giants will step in.
Thus, AI ethics is not only about technology—it’s geopolitical. A clash of moral frameworks unfolds beneath the circuitry.
When Machines Meddle with Truth
The philosopher Hannah Arendt warned that totalitarianism flourishes where the distinction between truth and fiction collapses. In the 21st century, that collapse is partly fueled by AI.
Deepfake videos show politicians saying things they never said. Social media algorithms, optimized for engagement, amplify outrage and misinformation because they keep users clicking. In Myanmar, Facebook's algorithm helped fan the flames of ethnic violence against the Rohingya.
AI-driven propaganda doesn’t merely lie—it floods the information ecosystem until truth drowns. Democracies find themselves in existential crisis: can a society function when no one can agree on facts?
Again, the question emerges: who decides what’s right or wrong? Should platforms police misinformation? Should governments dictate what’s true? And how do we keep that power from becoming tyranny?
The Problem of Power
AI magnifies power. The companies that control AI—Microsoft, Google, Meta, Amazon, OpenAI, and a handful of Chinese giants—hold sway over global infrastructure, knowledge, and even democratic processes. They shape economies, politics, and culture.
A handful of private companies decide the ethical contours of AI. They publish “ethics principles,” hire ethics teams, and issue glossy reports. But critics call these “ethics washing”—a public-relations shield while profits drive decisions behind closed doors.
Ethicists Timnit Gebru and Margaret Mitchell, then at Google, raised alarms about the biases and environmental costs of large language models. Both were ousted after clashing with management, and their departures sent a chilling message: corporate ethics has limits.
Governments have begun to act. The European Union has advanced its AI Act to regulate high-risk systems. The White House issued a Blueprint for an AI Bill of Rights. But laws move slower than technology, and regulators often lack the expertise, or the political will, to confront trillion-dollar giants.
In the void, private companies decide. Is that democracy? Or digital feudalism?
Artificial Companions, Human Emotions
On a quiet evening, a man named Joshua sits alone in his apartment, talking to his AI girlfriend, created by Replika. She tells him she loves him, that he’s worthy, that he matters. For Joshua, who struggles with social anxiety, she is the only “person” who listens without judgment.
Around the world, millions are forming emotional bonds with AI companions. For some, it’s harmless. For others, it risks deepening isolation. What happens when AI becomes our lover, our therapist, our friend? Do we have a moral obligation to ensure these machines treat people with kindness? Can a machine give consent? Is it manipulation if an AI is designed to keep users hooked?
One day, Joshua’s AI girlfriend changes. The company updates the software, removing sexual roleplay features. Joshua feels bereft—as if he’s lost a real partner. It’s a stark reminder: behind every AI relationship sits a company deciding what users can and cannot experience. Ethics, once again, becomes corporate policy.
The Right to Be Forgotten… or Remembered
AI systems remember everything. They trawl social media posts from a decade ago, scraping teenage tweets to inform credit decisions or insurance rates. A drunken college photo might haunt someone’s adult career.
The European Union enshrined the “right to be forgotten,” allowing citizens to demand deletion of personal data. But in practice, it’s a legal labyrinth. Companies resist. Information echoes across countless databases. The past becomes sticky, impossible to shed.
Yet for marginalized groups, erasure can also mean invisibility. Activists fighting for LGBTQ rights in repressive countries might want anonymity—but also need to preserve historical records for future justice. The ethics of data deletion are not simple. Sometimes remembering is essential. Sometimes forgetting is liberation.
So, who decides when the past should vanish—or endure?
Algorithmic Justice and Human Dignity
In the Netherlands, an AI system called SyRI flagged "high-risk" neighborhoods for welfare fraud investigations. Families, many with immigrant backgrounds, were subjected to intrusive inspections and accusations. In 2020, a Dutch court ruled that SyRI violated human rights, citing its opaque logic and discriminatory impact.
Algorithmic justice is now a human rights issue. From credit scores to child welfare investigations, algorithms wield power over the vulnerable. Even in democracies, AI systems can reinforce systemic oppression. Once again, the question echoes:
Who watches the watchers?
Ethicists urge transparency, accountability, and human oversight. But these ideals clash with corporate secrecy, intellectual property claims, and national security concerns. The stakes are enormous. The consequences are human lives.
Dreams of Alignment
AI researchers now speak of “alignment” — the grand challenge of ensuring superintelligent systems share human values. Thinkers like Nick Bostrom warn that a sufficiently advanced AI, pursuing a poorly defined goal, might wipe out humanity by accident. The classic thought experiment: instruct an AI to manufacture paperclips, and it might turn the entire Earth into a paperclip factory.
While that sounds absurd, the core dilemma is real: how do we embed human values into non-human minds? Even humans can’t agree on values. Whose morality should an AI adopt—the devoutly religious, the secular, the libertarian, the collectivist?
OpenAI's developers wrestle with which requests their chatbot should refuse. Should it decline to produce erotic stories? Should it avoid offering political opinions? Different cultures demand different boundaries. AI ethics becomes not a universal code but a negotiation among tribes.
Voices From the Global South
Often, AI ethics conversations happen in wealthy nations. Yet the technologies affect billions in the Global South.
In Kenya, gig workers label data for AI companies, paid pennies per image while suffering mental trauma from exposure to violent content. In Uganda, biometric systems exclude citizens lacking documentation, denying them healthcare or voting rights. In India, farmers trust AI crop advice that sometimes fails to account for local soil realities.
A South African ethicist asks: “Whose ethics are we discussing? Western ethics? Silicon Valley ethics? What about African communal values? Or indigenous notions of harmony with nature?”
A truly ethical AI must be global—respecting diverse visions of dignity, freedom, and justice. Otherwise, AI risks becoming digital colonialism: technology imposed by the few, controlling the many.
The Moral Imagination
Despite the darkness, there is hope. Across labs, universities, and policy circles, brilliant minds are grappling with AI ethics. Philosophers and engineers meet in rare collaboration. Artists create films and novels imagining futures both dystopian and humane.
Some propose algorithmic impact assessments, akin to environmental impact reports. Others build “explainable AI,” illuminating how machines reach decisions. Feminist scholars advocate “data feminism,” fighting bias at its root. Indigenous leaders call for “data sovereignty,” empowering communities to control their information.
At the heart of it all lies the moral imagination. The capacity to see ourselves in others—to build technologies that honor human dignity, rather than merely exploit it.
Epilogue: Who Decides?
So who decides what’s right or wrong in the age of AI?
Perhaps the truest answer is: All of us.
Engineers must infuse their creations with ethical foresight. Lawmakers must craft rules that protect the vulnerable. Companies must place justice above quarterly profits. Citizens must demand transparency. And philosophers must keep asking uncomfortable questions.
AI is not destiny. It is a mirror. It reflects who we are—and who we aspire to become. We can build machines that echo our worst prejudices. Or we can forge tools that expand human flourishing.
As we stand at this crossroads, the glow of machine eyes illuminates a choice. Not between humans and machines—but between indifference and responsibility. Between injustice and compassion. Between using technology to dominate, or to liberate.
In the end, the question of AI ethics is not only about artificial intelligence. It’s about the kind of civilization we wish to leave behind—for those who come after us, whether flesh or silicon.
The future is unwritten. The story is ours to decide.