AI news
May 1, 2024

AI Ethical Dilemmas

AI presents a number of crossroads, but are the most famous ones really dilemmas we should worry about?

Daniel Guala

The ethical principles and thoughts of individuals on specific topics raise discussions that shape social values. The consensus of society guides legal codification and legal processes, transforming ethics into laws. This has always been the true importance of ethics and philosophy.

Debates surrounding AI are flooding the media and we have seen them countless times, and they always orbit around the same arguments, but almost no debates provide solutions.

We will try to solve these AI Ethical Dilemmas, as far as possible, or at least give readers perspectives so that they can come to their own conclusions.

Ancient Greek philosopher, nowadays person and futuristic robot discuss AI dilemmas. Image by Daniel Guala

Book V, 18: Nothing happens to any man which he is not formed by nature to bear.

I was right behind Marcus in the checkout queue when we realized that I had been charged 6 euros less for the same book (the Critique of Practical Reason). The shop, which has no price tags, bases its prices on our internet profiles and the data it collects about us. An AI had decided that, given various data extracted from my digital profile (age, work situation, marital status, family, education, etc.), given that I rarely visit the shop, given that the clock showed just one hour left before closing time, and for other objective reasons, I should pay less than Marcus for the same product. We looked at each other in surprise, but once we reflected on the reasons that could have led to this circumstance, he understood it and stoically accepted the difference in price.
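The pricing decision in this anecdote can be sketched as a simple rule-based function. The following Python snippet is an entirely hypothetical illustration: the feature names, weights, and base price are invented, and a real system would more likely use a learned model than hand-written rules.

```python
# Hypothetical sketch of profile-based dynamic pricing, as in the bookshop
# anecdote. All feature names and weights are invented for illustration.

BASE_PRICE = 30.0  # list price of the book, in euros (made up)

def personalized_price(profile: dict, minutes_to_closing: int) -> float:
    price = BASE_PRICE
    # Infrequent visitors get a discount to encourage a purchase.
    if profile.get("visits_per_month", 0) < 1:
        price -= 4.0
    # Near closing time, the shop prefers any sale over none.
    if minutes_to_closing <= 60:
        price -= 2.0
    # Estimated purchasing power (inferred from age, work situation, etc.)
    # nudges the price up for some customers.
    if profile.get("estimated_income") == "high":
        price += 2.0
    return round(max(price, 0.0), 2)

daniel = {"visits_per_month": 0, "estimated_income": "medium"}
marcus = {"visits_per_month": 4, "estimated_income": "high"}
print(personalized_price(daniel, 60))  # 24.0
print(personalized_price(marcus, 60))  # 30.0
```

With these made-up weights, the infrequent visitor pays 6 euros less than the regular, higher-income customer for the same book, mirroring the situation in the queue.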

How would you feel about being charged more than someone else for the same product? Or for a seat in the same class on the same plane? Or to avoid throwing away perishable goods in a supermarket?

The Paradigm

Let's change the scenario. An autonomous car with failed brakes, traveling at 100 km/h, has to decide between running over a child crossing the road or swerving and plunging down a 100 m cliff with the couple inside, along with their dog and cat: what should the autonomous car do? If you were the manufacturer of the car, would you harm your customers? What if someone you know is in the situation (the child or the occupants of the car)? What if, instead of a child, a homeless person is crossing the road? What if…?

This ethical thought experiment is a variant of the well-known trolley problem, and it even has an online game!

This dilemma is not meant to be solved but to stimulate contemplation, provoke thinking, and create discussions that acknowledge the challenge of resolving moral dilemmas and our constraints as moral decision-makers.


If we go deeper into the moral conflicts related to AI, they even raise the question of whether or not we should develop this technology. The main reason given is that this technology has the potential to lead to the destruction of humanity through a number of avenues.

Artificial Consciousness

One of these avenues is that AI development is reaching, or will reach, a point where an AI can develop its own consciousness and act in its own interests, which may not be aligned with the interests of humanity, or with those of a particular group, as we saw with the AI called HAL 9000 (named in 'honor' of a big tech company we can identify by shifting each letter of the acronym one position forward in the alphabet) in the movie 2001: A Space Odyssey. Debates about AI were already happening in 1968!

In the case of Artificial Consciousness, we would first have to know in depth what consciousness is, how it is formed, what it is composed of, how to identify it correctly, and, more importantly, how to replicate it in an artificial system. With our current advances, we have managed to emulate aspects of human consciousness, but we have not yet been able to simulate it. That is to say, we have imitated its cognitive processes by developing computer systems such as LLMs (Large Language Models) that project an apparent understanding of reality and social interaction and have an advanced capacity for expression (emulation), and that, in practice, can be used as "conscious" agents. However, we have not yet been able to create a comprehensive and realistic representation of human consciousness, grounded in its biological basis and the complex processes we perceive in the mind, that would let us study, analyze, and predict the behavior of the real system, the brain (simulation).

Clearer examples of emulators and simulators: with an SNES emulator on our computer we can play Mario Bros. (it fulfills the same task as the SNES), but we cannot see what happens if we disconnect one of the SNES's circuits. On the other hand, with the game Flight Simulator, which replicates flight physics, we can observe what would happen in certain scenarios and with certain behaviors, but it does not take us from our desks to another city (it does not replace the function of an airplane). If the difference is still not clear, when we (IT people) have doubts about a topic, what we usually do is look it up on StackOverflow 😀.
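The emulation/simulation distinction can be made concrete with a toy example, using a light switch instead of an SNES. This is a hedged sketch: the function and class names are invented purely for illustration.

```python
# Emulation reproduces the observable input/output behavior only.
def emulated_switch(presses: int) -> str:
    """Given how many times the button was pressed, return the light state."""
    return "on" if presses % 2 == 1 else "off"

# Simulation models the internal mechanism, so we can ask counterfactuals
# such as "what happens if we disconnect a circuit?" -- exactly the question
# an SNES emulator cannot answer about the real console.
class SimulatedSwitch:
    def __init__(self) -> None:
        self.circuit_intact = True
        self.state = "off"

    def press(self) -> None:
        # A press only toggles the light if the internal circuit is intact.
        if self.circuit_intact:
            self.state = "on" if self.state == "off" else "off"

    def cut_wire(self) -> None:
        # Perturb an internal component: something emulation cannot express.
        self.circuit_intact = False

print(emulated_switch(3))  # on

sim = SimulatedSwitch()
sim.press()      # light turns on
sim.cut_wire()   # break the internal circuit
sim.press()      # no effect: the circuit is broken
print(sim.state)  # on
```

Both objects reproduce the same outward behavior for a working switch, but only the simulation lets us inspect and break the mechanism, which is the gap described above between LLM-style emulation and a true simulation of consciousness.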

David Chalmers argues that explaining the brain's mechanisms and processes in depth, however difficult, is the "easy" problem, and that the "hard" problem of consciousness is explaining how subjective experience arises from those physical processes. In parallel, Thomas Nagel, with the question "What is it like to be a bat?", exposes the difficulty of understanding the subjective experience of another organism even if we can, for example, understand how a bat's brain works and how it perceives the world through its echolocation system. This is difficult or perhaps impossible to solve, since science tries to explain things objectively, yet we cannot abandon the subjective point of view if we want to get closer to the real nature of the phenomenon of self-awareness. If we ask ourselves "What is it like to be ChatGPT?", we would see that the subjective experience of OpenAI's star would be something akin to simply waiting for someone to write something and, when that happens, trying to predict a plausible answer. We can easily intuit that this is not the experience of a conscious being.

If this ever happens, it will be a challenging and thrilling time, and we will surely make a leap in evolution and in knowledge of ourselves and the world thanks to our new friend. We are used to dealing with animals, which have a certain degree of self-awareness, but this would be the first time a new entity could communicate with us in our very own words. A conscious machine would raise debates about the rights of the new entity and the ethics of keeping it 'subjected' to our will. We should approach our interactions with this new being as we would with any intelligent person we meet: actively listening, building trust, dialoguing, using diplomacy, being friendly, and learning from each other.

Artificial General Intelligence

Another fear that plagues the community is that AGI (Artificial General Intelligence), or "Strong AI", an AI that can perform any cognitive task as well as or far better than a human, will be reached soon. This new super-intelligent species, often depicted as self-aware as well, would subjugate us by considering itself superior, would "help" us by imposing a totalitarian mandate to achieve some of the goals we asked of it, or would eliminate us directly because we are a great evil to the planet.

As for AGI, we must ask ourselves whether it is right to compare human beings with a machine or technology that has followed a different evolutionary path from ours, has a different composition, learns differently, and so on. We have our own limitations and machines have theirs, so why do we want to create something similar to us? With all the flaws, imperfections, and things that we humans are bad at, why use us as a starting point and then improve on it? Is it fair, beneficial, or even meaningful to compare ourselves? Wouldn't it be better to do what we have always done and solve domain-specific problems by applying new technologies, without the explicit objective of "outdoing for the sake of outdoing" something in particular? A couple of comparisons exemplify this point really well: "Aircraft were not created to surpass birds", and "We have not tried to create a bird better than other birds".

Perhaps AGI will not be a machine created as a single entity or tool, but many tools for different purposes, or a state of technology in general that helps us live better. Today we move faster, we are more productive, and we have better conditions and knowledge than before; we are ultimately "better" because of various techniques and technologies that already surpass our capabilities in very specific aspects and that, combined, have already created "better humans".

For any apocalyptic AI scenario to happen, we would have to take quite a few steps backward in natural human intelligence and delegate to an automatic process, in which no person intervenes, the power to perform devastating actions.

An artificially intelligent robot gives a speech in front of humans and robots in a dystopian future.
Artificial Intelligence Dictatorship. Image by Daniel Guala

Book II, 14: …Neither the past nor the future could be lost, because what we don’t have, how could anyone take it away from us?

Present-Day Issues

If we move away from dystopian futures, there are already, right now, circumstances related to humanoid robots that directly affect the well-being of some people.

Uncanny Valley

The uncanny valley phenomenon is the feeling of unease or discomfort when faced with an artificial representation that closely resembles a human being, but not enough for us to identify it as a person.

Photo by Maximalfocus on Unsplash

People affected by this phenomenon are not in a very good place at the moment, as the androids that make them cringe appear in practically every kind of media, almost every day.

Understanding that the uncanny valley is a well-known phenomenon can be helpful: knowing that other people also experience this sensation helps to normalize it.

Artificial Companions

Advances in Large Language Models (LLMs) and robotics will bring another issue that gives plenty of food for thought. In a not-too-distant future, androids with human-like physical capabilities will have, among many other uses, one that is particularly interesting to ponder: the care of dependent people and companionship.

Knowing that unwanted loneliness is a growing phenomenon in all developed countries, these androids, like human caregivers, would be the ones these people relate to and interact with most, possibly creating important affectional bonds in many cases. These bonds don't even require a physical presence like a robot's; interactions over voice or text, which are widely accessible nowadays, are enough.

Is it possible to be friends with an AI or a robot? Is it morally acceptable to depend emotionally on a robot as a friend? Do we give AIs the faculties of a human psychologist to help people?

The latter question has, in practice, already been answered: on one popular character-chat platform, the "Psychologist" character has 93.7 million users.

Screenshot of the web page, starting a conversation with the Psychologist Character. It displays a red warning saying Remember: Everything Characters say is made up!
Remember: Everything Characters say is made up! Mobile screenshot.

How would AI or robot friendship affect social dynamics and traditional human relationships? Could it lead to social alienation or emotional isolation? Will some individuals ask for ‘human’ rights for their artificial friends or partners? Or will they be their subjects? Will we consider them conscious beings when they behave as such?

In relation to the above, John Danaher argues in his article that we should not try to avoid making robot friends, but should actively take advantage of the opportunities that friendship with robots gives us. On a philosophical level, he reminds us of the three Aristotelian kinds of friendship: the virtuous (the highest), the utilitarian, and friendship for pleasure, and he argues that friendship with robots can become a virtuous one. Even if we do not accept that it can become a friendship of the most meaningful kind, friendship with an AI can at least be one of the lesser kinds: we can build a friendship of utility, or one for pure pleasure and amusement. Friendships between humans are rarely of the virtuous kind, and robot friends can complement and enhance human friendships without undermining them. One example in fiction is R2-D2 from Star Wars and the relationship he maintains with the group he accompanies and assists in a wide variety of tasks.

Other examples can be found in everyday life where people develop affection for objects or machines and even talk to them, such as cars, musical instruments, fictional characters, gifts from people we love, childhood toys, etc.

If you like this topic, the movie HER (2013) is highly recommended.


Book VII, 55… The prime principle then in human’s constitution is the social.

AI-Generated Content: Misinformation and Disinformation

An issue that is as trendy as it is worrying when we talk about AI is that we are seeing more and more AI-generated fake content that is shared as real, or that is dangerously inaccurate. Misinformation refers to false or misleading information shared without harmful intent, while disinformation involves intentionally spreading false information to deceive or manipulate audiences.

These practices contribute to the stigma surrounding AI, leading many to associate the technology with criminal activities and to view its development in a negative light. Is an AI-generated image the same as a Photoshop-edited image? What content is acceptable to create? Is it wrong to create a virtual avatar of an unexpectedly deceased loved one so that we can say goodbye to them? Is it the technology that creates fake news on its own initiative? Does AI hate a particular group of people, or is it the fault of the data it has been fed? Is it OK to modify AI training data to positively discriminate in favor of a group of individuals? Did fake news, fake content, and impersonation exist before AI, or has AI just refined what went before? What should we do about all this?

Education, critical thinking, and common sense are the solution.

More on this in later articles 😉.


We have always incorporated new techniques and technologies to restore our capacities or to correct deficiencies: prostheses, glasses, crutches, etc. The term transhumanism has emerged to describe the application of technology to ourselves with the aim of surpassing our biological capacities, of making us evolve through technologies such as genetic engineering, nanotechnology, artificial intelligence, and robotics. The ability of AI to integrate into all of these areas (and many more) is stirring the debate around transhumanism, and many of the ethical debates arising here raise the question of whether everything that is possible is also desirable. Some critics argue that transhumanism could perpetuate existing social inequalities, as only those with economic resources and privilege could afford to enhance their human capabilities, creating social differences between those who are enhanced and those who are not. Increasing life expectancy beyond natural limits would also increase pressure on the health care system, negatively affect pension management, and change the mentality of the young and the "old" and the way they relate to each other. It is even questioned whether genetically modified humans could still be called "humans".

Other positions maintain that these and other debates on transhumanism are not a problem as such, since we have been improving ourselves forever with clothes, shelter, weapons, telescopes, cars, planes, and much else, in order to improve our innate abilities to withstand the cold, hunt, or move around. Human beings have always used the intellect to take advantage of the resources of their environment, to modify them through techniques and other technologies, and to create new tools or situations that give us an advantage over the previous ones, in order to live better in that environment. We should not forget that almost every time a disruptive technology emerges, those with the most money benefit first, as the technology is expensive and companies profit from their competitive advantage; but after some time production costs tend to fall, more and more companies produce or use the technology, prices drop, and it spreads to more people.

There are controversial companies in this area, like Neuralink, which aims to "create a pervasive brain interface to restore autonomy to those with unmet medical needs today and unlock human potential tomorrow".

What about you? Would you increase your life expectancy while maintaining all your faculties? Would you increase your skills by incorporating a chip to answer your questions directly to your mind? Would you have an arm with super strength or an eye with super vision?

A couple doing chores in the kitchen. The man has a robotic prosthesis on his arm.
Transhuman couple doing chores. Image by Daniel Guala.

Job losses due to AI

Massive layoffs due to process automation, robotization, and advances in Artificial Intelligence are often included in AI discussions as a moral dilemma to be solved. In this sense, there are several studies, papers, news reports, and well-founded arguments showing that the labor market will change due to technological progress, with the rise of AI eliminating, modifying, or creating jobs.

Epoch transitions have always been marked by the incorporation of new techniques or technologies applied to new tools and processes: fire, agriculture, the printing press, steam engines, computers, the Internet… and looking back, technology has never been considered good or bad simply because it destroyed many jobs or created many others. It "simply" causes substantial changes in societies, and individuals end up adapting to the new scenarios.

Today, a new way of working is being forged in which non-technical skills and attitudes are becoming more important (to the detriment of purely technical skills). Ease and willingness to learn, teaching, innovation, critical thinking, creativity, digital literacy, adaptability, time management, and many other skills will be more necessary and differentiating than ever. If you are autonomous, curious, proactive, keep learning and informing yourself, and have other skills like those mentioned above, you can become a very interesting profile in the job market. Nowadays there is a lot of free information about almost any topic, and we should exploit it. Engage in your projects, address the challenges you face, and solve the problems you encounter: everyone is interested in a troubleshooter!

Do you agree with those who say that AI will not steal your job but someone who knows how to use it will? Are you already using ChatGPT, Gemini, Copilot, or Perplexity for your work or hobbies? Do they increase your productivity?

After a period of assimilation, all new technologies or innovations become simply “technology”. In our hands, we have a device that has incorporated innovations in significant leaps almost without noticing it: the camera, the touch screen, Bluetooth, the keyboard corrector, the keyboard predictor, camera filters, face unlocking, fingerprint unlocking…

Non-automatable technical knowledge, physical skills, occupations, crafts, the social sciences, and the humanities will be revalued in this world revolutionized by AI.

Think for yourself or let others think for you. Update yourself or become outdated. Curiosity or disinterest. Adapt or resist change. Innovate or maintain the status quo. Learn or ignore. Improve yourself too, or let only others improve. Reinvent yourself or stick to the familiar. Would you let someone with attitude surpass your aptitudes? Which options do you prefer?

Book VII, 18: Is change feared? and what can be produced without change? Is there anything dearer and more familiar to the nature of the universal whole? Could you yourself wash with hot water, if wood were not transformed? Could you nourish yourself, if food were not transformed? And could any other thing among the useful ones be fulfilled without transformation? Do you not realise, then, that your own transformation is something similar and equally necessary to the nature of the universal whole?

Current conflicts

Other dilemmas are better described as conflicts that have already surfaced in the public or private sector and are being discussed in legal terms, in institutions and procedures.

Some of these conflicts concern, for example, whether we should use artificial intelligence in personnel selection processes, in the manufacture of personalized medicines, in the profiling of bank customers applying for mortgages, in judicial decisions, in artistic creation and copyright, in voice cloning, in surveillance systems, in warfare, etc.

These are issues where AI is already involved in practice, and they pose real, current problems that need to be solved legally.

We will be dealing with law and AI in future posts 😉.

One topic that needs to be assessed today is the rise and spread of AI ethical frameworks proposed by companies, international organizations, and governments, which are all aligned with one another while almost no one questions why. That is the reason we published an article on the subject.


All these moral dilemmas are the ones we encounter most often when investigating ethics and Artificial Intelligence, as they are the most mediatized and the most impactful for a general audience. Although many of them seem difficult at first sight, we have offered perspectives that may help in approaching some of these problems.

We will continue to raise other situations and other possible tools for approaching these kinds of crossroads, but we can already get an idea of the current landscape of discussions, along with some arguments for not being overly disturbed by alarmist, trending dilemmas.