Use AI to write emails, not to write poetry
Tags: AI
Date: 2025-03-19
We had two possibilities: use artificial intelligence to work less while maintaining the same level of productivity, or work the same number of hours while increasing productivity. Obviously, we "chose" the second option. "Chose" because in a system based on competition, it wouldn't have been possible otherwise. If you produce more, I must produce more; if you produce less, it still benefits me to produce more.
But that's how things are, and AI is here to stay. To be clear, I too am convinced that we are in the middle of the so-called "AI bubble." Among all the things that make me think this, the two most relevant are the imminent end of Moore's Law and the limited data available for training new models. In general, everything seems to suggest that we are reaching a plateau in technological development, at least until there is a paradigm shift.
However, even if we are far from the much-feared Artificial General Intelligence, I have no doubt that AI will accompany us for the rest of our days.
Our generation is experiencing a small revolution, somewhat reminiscent of the one our parents experienced. At some point in their lives, a tool appeared that suddenly revolutionized the way they worked, dramatically increasing the productivity of many at the expense of the few who were replaced by it. There was no point in fighting it or refusing to use computers. Anyone who did ended up paying the price.
The same will be true for us. There is no point in pleading to "regulate AI" (limits should be placed on companies, not on research, and above all, how do you regulate matrix multiplication?). We can no longer erase what we have discovered. As happened with the advent of computers, some jobs will disappear, others will be created, and all of them will change.
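To make that rhetorical question concrete: at its core, a neural network's forward pass is little more than repeated matrix multiplication with a nonlinearity in between. A minimal sketch in plain NumPy, with purely illustrative sizes and weights (nothing here comes from a real model):

```python
import numpy as np

# A toy two-layer network: all of its "knowledge" lives in these matrices.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((128, 784)), np.zeros(128)  # first layer weights
W2, b2 = rng.standard_normal((10, 128)), np.zeros(10)    # second layer weights

def forward(x):
    """One forward pass: multiply, add, apply a nonlinearity, repeat."""
    h = np.maximum(0, W1 @ x + b1)  # ReLU(W1 x + b1)
    return W2 @ h + b2              # output scores

x = rng.standard_normal(784)        # an arbitrary input vector
print(forward(x).shape)             # (10,)
```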
And this isn't even the biggest danger. Nor are fake news, deepfakes, and the other kinds of fakes that AI can generate. I have no doubt that the greatest danger is the temptation to entrust crucial decisions or operations to artificial intelligence, a system we don't fully understand, since all state-of-the-art models are "black boxes": we can observe the input and the output, but we cannot see from the outside what happens inside the box (there is a branch of research, Explainable AI or XAI, that studies exactly this). If we don't know what happens inside the box, we don't know how decisions are made, and those decisions could be skewed by errors in the data, by biases, or by "reasoning" we would find ethically unacceptable. For these reasons, human supervision will always be necessary.
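To illustrate what "observing only input and output" means in practice, here is a minimal sketch of the crudest kind of external probe: treat the model as an opaque function, nudge one input at a time, and see how much the output moves. This is roughly the intuition behind perturbation-based attribution in XAI; the `black_box` function below is a hypothetical stand-in, not any real model.

```python
import numpy as np

rng = np.random.default_rng(0)
_hidden_weights = rng.standard_normal((1, 5))  # internals we pretend we cannot read

def black_box(x):
    """Stand-in for an opaque model: we can feed it inputs and read its output,
    but we have no meaningful access to why it answers the way it does."""
    return float(_hidden_weights @ x)

def sensitivity(f, x, eps=1e-3):
    """Crude external probe: perturb one feature at a time and measure how far
    the output moves. It tells us what the model reacts to, not how it 'reasons'."""
    base = f(x)
    scores = []
    for i in range(len(x)):
        x_perturbed = x.copy()
        x_perturbed[i] += eps
        scores.append(abs(f(x_perturbed) - base) / eps)
    return scores

x = rng.standard_normal(5)
print(sensitivity(black_box, x))  # per-feature sensitivity, observed purely from outside
```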
"A good human plus a machine is the best combination."
Artificial intelligence certainly allows us to do things that we would have previously postponed or set aside due to lack of time, or even things we wouldn't have been able to do (see this ugly website). We can delegate paperwork, boring tasks, and mechanical operations for free, so we can focus on more interesting things. But be careful that the opposite doesn't happen. Don't delegate the creative process, artistic expression, or the pleasure of performing a difficult task to AI. In an increasingly automated world, these things - art, words, philosophy - will be what make us human. Use AI to write emails, not to write poetry.
Please note: AI ≠ LLM
At the time of writing, much of the attention is focused on Large Language Models (LLMs); in fact, "AI" is often used to refer to LLMs specifically. If you want to get an idea of how LLMs work, this is the best video I've ever seen on the subject: https://www.youtube.com/watch?v=wjZofJX0v4M.
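For a rough, toy-level intuition of the generation loop (a deliberately oversimplified sketch, nothing like a real LLM): a language model repeatedly assigns probabilities to the possible next tokens given the text so far, picks one, appends it, and repeats. The tiny hand-written probability table below stands in for the billions of learned parameters of an actual model.

```python
import random

# Toy "language model": a hand-written table of next-token probabilities.
# Real LLMs learn these probabilities from data, but the loop is the same:
# predict, sample, append, repeat.
next_token_probs = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"sat": 0.5, "ran": 0.5},
    "sat":  {"down": 1.0},
    "ran":  {"away": 1.0},
    "down": {".": 1.0},
    "away": {".": 1.0},
}

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_probs.get(tokens[-1])
        if probs is None:  # no known continuation: stop
            break
        choices, weights = zip(*probs.items())
        tokens.append(random.choices(choices, weights=weights)[0])
        if tokens[-1] == ".":
            break
    return " ".join(tokens)

print(generate("the"))  # e.g. "the cat sat down ."
```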