Philosophy of AI: It's not you, it's me…

Is AI a job apocalypse or pure hype?

Every day we hear or read news of all kinds about AI, and nearly all of it resorts to the usual exaggerations: what matters is getting the click, not whether the story reflects reality.

The messages tend to come from two extreme schools of thought: either AI is an apocalypse for the world (at least the working world) as we know it, or it is pure hype that is good for nothing.

So, out of curiosity, I asked AI what it thought about it. 🤖

I used the same prompt in the three main AIs: ChatGPT, Gemini, and Claude:

"what do you think about all the hype and hyperboles being said about AI?"

The nuances in each model's response are curious. Gemini introduces, or simulates, a pseudo-self-awareness, though it demystifies it within its own answer by acknowledging its biases; it also has a much stronger moral component than ChatGPT, which focuses on more practical, productivity-oriented aspects. Claude sticks more closely to the question and focuses on the extreme messages. And all of them advocate a middle-ground position of "neither so much nor so little."

All three consider AI as a powerful tool, and as such, it amplifies and changes the nature of work. 🔧

Reflecting on the messages predicting the end of certain roles such as programmers, product owners, designers, etc… it seems to me that:

  • Those who predict the end of a role usually belong to another role, often one with friction or conflicts with the role they want to eliminate: developers regarding product owners and vice versa, CEOs regarding anyone who shows up on the expense report, etc… 🙂
  • It is a mistake to talk about roles; we need to talk about the nature of the work.

When we talk about the nature of work (and this can be glimpsed in the AI responses themselves), we usually think about the possibility of automating certain tasks, but tasks are only one part of our job. In fact, many tasks were already automatable before generative AI; what has changed is the way (and the cost) of automating them.

I would like to reflect on other less visible characteristics, which are slightly glimpsed in ChatGPT's response:

Judgment, decision, and responsibility.

Philosophy has treated these three characteristics since antiquity as essential attributes of the human being. Judgment, the ability to deliberate and weigh arguments; decision, the capacity for choice; and responsibility for our acts have been discussed by philosophers from ancients like Aristotle to moderns like Descartes, Hume, and Kant.

Beyond philosophical essays, these concepts have shaped the architecture and organization of our society, serving as basic pillars of law, ethics, labor relations, meritocracy, etc…

When we see the advances in AI and the capabilities of new models, we must not forget that these characteristics are essentially human: no matter how much AI has (or can simulate) the capacity to decide, it is still a tool, and there must always be a human who assumes responsibility for its acts.

I have watched with concern how certain actors intend to use AI not as a tool that augments human capabilities, but as an excuse to dilute responsibility for the actions that tool executes.

The example of Anthropic refusing to let the US military use its AI for mass surveillance of US citizens or for autonomous weapons is well known. Beyond the ethical question of whether those uses are justified, what matters here is responsibility for the actions. Is AI being used in those cases precisely to dilute that responsibility? Who is responsible if the AI breaks the law while monitoring citizens? Or if it makes a mistake and "chooses" to eliminate an innocent person?

Without reaching extreme life-or-death cases, our work and daily lives offer plenty of examples of decisions and responsibilities that we assume. Let's ask ourselves: if things go wrong, who is responsible? Who is going to be fired, held accountable, or made to step up? If the answer is "it was the AI…", we are heading in the wrong direction.

Therefore, the application of AI to our jobs will be limited by the concepts of judgment, decision, and responsibility. It is not that AI cannot do the work; it is that human beings and society are neither prepared nor willing to cede these concepts.

In this case, we will have to say to the AI: I'm sorry, it's not you, it's me….