“What makes us human?” a group of followers asked the deity they worship in their existential quest. “Selecting all images with traffic lights,” answered their Divine Spirit. This joke circulates on social media, featuring the followers of different religious groups and their Gods, and it leads back to one of the most exciting debates of our days: the one on Artificial Intelligence. With the risks and opportunities central to that debate, one question remains: Are we more than simply biological machines capable of selecting all images with traffic lights? If not, AI would be very much human, for it does the same in a fraction of a second. So, the real question is:
Does AI make us more or less human?
Following the example of those religious followers, I turned to the AI deity, ChatGPT, and typed in the same question:
“What makes us human?”
Contrary to other deities, ChatGPT was less witty, and in the blink of an eye, there stood on the screen before me a long, elaborate answer:
“What makes us human is a complex interplay of biological, cognitive, social, and cultural factors….”
After enumerating all the points and their intrinsic characteristics in bullet points, the electronic deity concluded with:
“These elements together create the rich tapestry of what it means to be human, encompassing our biological foundation, cognitive abilities, emotional depth, social structures, cultural richness, and capacity for innovation.”
Nothing new. While the answer would undoubtedly pass in an exam, I expected something more eye-opening, more mind-blowing, from a deity. A “rich tapestry” is vague guidance for an existential quest.
So, I tried ChatGPT with another question:
“Tell me a joke,” I typed into the box, and immediately got back:
“Why did the scarecrow win an award?
Because he was outstanding in his field!”
While staring at the screen, I could not stop picturing Jensen Huang, Elon Musk, and Satya Nadella laughing their heads off at this joke. Not me. I guess I don’t have it in me. I never wanted to be that rich.
This brings another dimension to the AI debate—the one Michael Sandel introduced when debating justice. When applied to AI, those same principles of justice—utilitarian, libertarian, and teleological—can help us understand what we should seek from AI.
According to Sandel, the utilitarian idea of justice means maximizing welfare or happiness. The second, the libertarian idea, holds that justice means respecting freedom and human dignity. The third, the teleological one, says that justice involves honoring and recognizing virtues and the goods implicit in social practices.
For the philosopher of justice, justice is not simply a matter of maximizing utility or securing freedom of choice and non-discrimination. To achieve a just society, we must reason together about the meaning of the good life and create a public culture hospitable to the inevitable disagreements.
The same principles apply to AI.
However, I haven’t seen much of the third principle in the application of AI. There is ever more of the first, with companies maximizing profits through persuasive mechanisms co-created with artificial intelligence. The second is increasingly under strain, with AI controlling our movements and habits. But there is little application of AI in the pursuit of virtue, meaning, and purpose.
I would be a most devoted AI worshiper if it honored and recognized virtues and the goods implicit in social practices, and provided the means for a meaningful life with purpose and hope. That is what eudaimonia stands for: an ancient Greek term for the flourishing of life achieved through the practice of virtue.
Fritjof Capra once wrote that our thinking is always accompanied by bodily sensations and processes. Therefore, our limits are not technological. Our limits are biological.
Without bodily experience, truly human problems remain foreign to AI.
So, what makes us human is not technology but biology. The chemistry of feelings and emotions drives our rationality, not the other way around. Without being emotional, we could never be truly rational. Without feelings and emotions, we would only have information floating in the environment, which could never become ideas.
It is ideas, not information, that make us human.
The ideas behind the quest for happiness give meaning and purpose to our lives. The eudaimonia of life, not the computing of information, makes us human.
Between remembering the past and imagining the future, there is this precious instant where we are given the opportunity to choose the memories to keep and the future life to aspire to. It is in this instant of the human condition that we balance the pursuit of pleasure and the avoidance of pain by making choices and decisions—choices and decisions that shape our individual happiness and build our collective well-being.

And so, here is the final question I posed to AI:
“What makes you happy?”
“As an AI, I don’t have feelings or emotions, so I don’t experience happiness or any other emotions. However, I’m designed to assist and provide useful, accurate information, and achieving that makes my purpose fulfilled. How can I assist you today?”
That was all.
Eudaimonia is a bodily experience seeking the good life.
You wouldn’t get it.
Life is a process. Being such, it is closer to learning than knowing.
It is very dangerous to believe that typing questions into a browser is learning. It is not even knowing. It is computing. And computing does not make us human. Only learning does.