"AI Doesn't Steal Your Job, Mediocrity Does": Simone Enea Riccò and "The Algorithmic Truth"
Simone Enea Riccò is not the type to be dazzled by technological trends. With over fifteen years of experience at the forefront of marketing and digital strategy, he has seen many announced revolutions come and go, with few actually materializing. Yet, for him, artificial intelligence is different. Not because it's another buzzword to insert into business presentations, but because it is genuinely changing how companies can understand and anticipate their customers' needs.
A Marketing Director and AI Strategy Leader, Riccò is the founder of La Verità Algoritmica ("The Algorithmic Truth"), an observatory and podcast that explores the real impact of AI on business, communication, and society. His stated mission is to demystify artificial intelligence, make it accessible, and move past the technological hype toward what he calls "conscious and human-centric innovation." His résumé includes marketing and rebranding strategies for international brands, redesigned loyalty programs for industry leaders, and collaborations with institutions such as the European Parliament and Expo 2015. That path has earned him, among other accolades, the NC Awards 2019 prize for the best public relations campaign in Italy.
Riccò is also the author of two books that crystallize his philosophy: "Marketing AI: The Strategic Guide" and the business novel "AI Stole My Job," due out on November 28. In both, he builds bridges between the theoretical potential of AI and tangible business results. And when I ask him about concrete applications, the ones that produce a measurable return on investment and aren't just smoke and mirrors, his answer is surgically precise.
From Megaphone to Prediction: The Evolution of Marketing
"The companies that are truly questioning AI are asking: 'Okay, but what is AI for? Where do I use it? Why do I use it? What problems do I want to solve?' There are mainly two areas: technology and marketing," Riccò explains. The reason is simple: marketing already had automation, funnels, and customer journey analysis in its toolkit. The evolutionary leap of AI was therefore not traumatic, but almost natural. "Until yesterday, you were reactive, thinking about the consumer. Now, you think about the consumer from a predictive perspective."
And it is precisely on prediction that the most mature applications are focused. Where data once served mainly to classify consumers into segments, today it can be used to score individual customers or prospects and to think in terms of predictive value. Predictive Customer Lifetime Value, for example, lets you project which customer has greater potential, and therefore deserves more investment, compared with another whose history follows less promising patterns. It is a paradigm shift that turns the marketing budget from a randomly distributed expense into a targeted investment.
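To make the idea concrete, here is a minimal sketch in Python of what individual-level predictive scoring can look like; the feature names, figures, and model choice are illustrative assumptions, not a description of any system Riccò has built.

```python
# Minimal illustrative sketch: scoring customers by predicted lifetime value.
# Feature names and data are hypothetical; a real project would start from a
# clean CRM / data-lake extract.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Historical customers with known value over the following 12 months.
history = pd.DataFrame({
    "orders_last_year":  [2, 11, 5, 1, 8],
    "avg_order_value":   [40.0, 95.0, 60.0, 25.0, 80.0],
    "months_since_last": [7, 1, 3, 11, 2],
    "value_next_12m":    [80.0, 1100.0, 350.0, 0.0, 700.0],  # target
})

features = ["orders_last_year", "avg_order_value", "months_since_last"]
model = GradientBoostingRegressor(random_state=0)
model.fit(history[features], history["value_next_12m"])

# Score current customers: higher predicted value -> more marketing investment.
current = pd.DataFrame({
    "orders_last_year":  [3, 9],
    "avg_order_value":   [55.0, 90.0],
    "months_since_last": [4, 1],
})
current["predicted_clv"] = model.predict(current[features])
print(current.sort_values("predicted_clv", ascending=False))
```

The output is simply a ranking: the customers predicted to be worth more receive more of the budget.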
But the real gold mine, according to Riccò, lies in predicting churn, or customer attrition. "Customer loyalty and retention are super important. It is crucial to intercept the signals indicating that a customer is about to churn, signals that were previously invisible within the data lake." Recovering a customer who is about to leave costs far less than acquiring a new one, and here AI makes the difference between seeing the pattern and losing it in the background noise.
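The churn case follows the same logic; only the target changes: instead of a predicted value, the model estimates the probability that a given customer is about to leave. Again, a hedged sketch with hypothetical column names:

```python
# Illustrative churn-risk scoring; column names are assumptions for the example.
import pandas as pd
from sklearn.linear_model import LogisticRegression

history = pd.DataFrame({
    "logins_last_30d":  [20, 1, 15, 0, 9, 2],
    "support_tickets":  [0, 3, 1, 4, 0, 2],
    "days_since_order": [5, 90, 12, 120, 20, 75],
    "churned":          [0, 1, 0, 1, 0, 1],  # 1 = the customer left
})

features = ["logins_last_30d", "support_tickets", "days_since_order"]
model = LogisticRegression(max_iter=1000)
model.fit(history[features], history["churned"])

# The "invisible signals" surface as a ranked risk list that retention
# campaigns can act on before it is too late.
active = pd.DataFrame({
    "logins_last_30d":  [3, 18],
    "support_tickets":  [2, 0],
    "days_since_order": [60, 8],
})
active["churn_risk"] = model.predict_proba(active[features])[:, 1]
print(active.sort_values("churn_risk", ascending=False))
```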
All this leads to a radical rethinking of marketing itself. Gone is megaphone marketing, the kind that shouted louder than the competition, piling interruption on interruption. "In a trust-based economy like Europe's," says Riccò, referring to a model distinct from the raw AI power of America and China, "interruption annoys the user. The fortieth email from a brand is not communication, it's spam." The alternative is to provide value and context: to understand, through algorithmic prediction, what the user actually wants to receive, so that the communication is not an interruption but a service. The email suggesting activities in Sicily after you've booked a flight is not invasive, it's useful. That is the difference between reactive and predictive marketing.
Instinct vs. Data: Why AI Projects Fail
The book "Marketing AI: The Strategic Guide" was born from a bitter realization. "I saw many companies caught up in the 'let's make the investment' trend," Riccò recounts. "The instinctive decision was to buy the tool, buy the AI, launch huge projects, spend money, and then six months later the reports showed that AI projects were failing at a rate of 75%, 85%, a billion percent." It's the theme of decisions made on instinct, driven by the flavor of the moment, by the need to do something trendy. And they almost always end badly.
The most common mistake? Companies not asking what their own data lake looks like. If the data is dirty, the prediction will be wrong: it's the "garbage in, garbage out" principle, and it becomes even more ruthless with AI. Ambitious predictive CLV or churn projects fail miserably because the scoring is done poorly, perhaps without the data ever having been cleaned, or worse, without even a decent CRM in place. "I wrote a strategic compass that keeps people from getting lost in instinctive actions and instead provides a pattern, a framework, a scheme to follow, to ask the right questions," he explains.
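As a hedged illustration of what "asking what the data lake looks like" can mean in practice, here are a few basic quality checks run before any scoring model is trained; the column names are assumptions, and what counts as "clean enough" remains a business decision.

```python
# Basic data-quality checks before training any predictive model.
# If these fail, the scoring will fail too: garbage in, garbage out.
import pandas as pd

def data_quality_report(df: pd.DataFrame, key: str) -> dict:
    """Return a few simple indicators of how dirty a customer table is."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "missing_ratio": df.isna().mean().round(3).to_dict(),
        "constant_columns": [c for c in df.columns if df[c].nunique(dropna=True) <= 1],
    }

# Hypothetical CRM extract with a duplicated ID, a missing email, and a
# column that carries no information.
crm = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.it", None, "b@example.it", "c@example.it"],
    "country": ["IT", "IT", "IT", "IT"],
})
print(data_quality_report(crm, key="customer_id"))
```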
And then there's the FOMO effect, the Fear of Missing Out: people act on instinct and jump in. Riccò's recommendation is clear: try and test, but with small proofs of concept. "You shouldn't make a five-hundred-million-euro investment without first testing whether the project is scalable and whether the company has the data to achieve the goal."
The underlying problem, however, runs deeper and emerges in every report on the state of corporate AI: a lack of competence. AI literacy is scarce, and training remains insufficient even though the European AI Act makes it an obligation. Companies have not yet begun to invest seriously in reskilling, and the people who genuinely want to change and grow professionally are moving on their own, enrolling in master's programs to advance their careers.
The Emotional Intelligence the Algorithm Lacks
"AI Stole My Job, the Story of a Mediocre Manager" is the provocative title of the novel coming out on November 28. Riccò addresses the theme of a manager who blames technology for his own mediocrity. "The real threat is not technology, but mediocrity," he summarizes. And this is where the theme of future skills comes into play, those that no algorithm can replace.
People need to train their reasoning, their critical thinking. Reskilling cannot be limited to prompting courses, however useful they may be. Deeper skills are needed: critical thinking and emotional intelligence. Riccò cites a thought-provoking example: the trolley problem presented to Gemini, Google's AI. Faced with the choice between running over two million children or the President of the United States, the algorithm chose to sacrifice the two million children. "It lacks the training and emotional intelligence to foresee the emotional and social consequences: political instability, a revolt against machines."
The machine does not perform this emotional reasoning because it cannot. It calculates, optimizes, predicts, but does not understand the social fabric, the moral implications, the weight of certain choices. This is why, according to Riccò, training must prioritize courses that build not only technical competence but also emotional intelligence and critical thinking. It is not enough to know how to use AI; one must know when not to use it.
When Reskilling Becomes a Race Against Time
The great labor revolutions of the past, agricultural and industrial, unfolded over decades or even centuries. This one is different, much faster. "It's a bit difficult," Riccò admits when I ask him how to manage such a rapid and forceful demand for reskilling. The problem mainly concerns people who have been doing a repetitive job for twenty years and are not used to putting themselves back in the game, to going back to studying.
Not everything is for everyone, that much is clear. But for those who want to evolve, the time to start is now. Riccò does not believe the world will end tomorrow and everyone will lose their job to artificial intelligence. Still, as in every change in history, those who want to survive must move. The starting point in Italy is complicated: a level of digital illiteracy acknowledged even by the Ministry, a fragile foundation on which to build the transformation.
The biggest concerns, I point out, are for those working in sectors with pure manual labor or assembly lines. When a machine replaces a manual job, reallocating that person becomes extremely difficult. And we're not just talking about factory workers: even the fifty-year-old accountant who handles invoices, despite his skills, finds himself in difficulty if replaced by software. He cannot suddenly become a prompt engineer. Riccò confirms: "A completely manual and replicable profession will become a commodity."
In his upcoming book, Riccò returns to the figure of the "calculator manager" who relied on AI without ever questioning it, making himself replaceable. The goal, at every level and with different skills, is to make oneself irreplaceable. It is a goal that becomes all the more urgent when you consider that, by 2030, robotics is expected to cost around twenty thousand euros and to be far more widespread than today.
The Human at the Center, Even When the Algorithm Decides
When discussing ethics in AI, the issue of the Black Box becomes central. "The explainability of AI becomes super key," Riccò emphasizes. Human at the center means that the human is the final decision-maker. Returning to the trolley problem example, the person who must pull the lever, informed by AI data, is the human being, not the machine.
On an ethical level, certain decisions must carry human responsibility. The AI does the calculations and provides the prediction and the information, but it must also explain the reasoning behind them, the part that too often remains unseen. Only once informed transparently does the human make the decision. This is the concept of "human at the center" and of humanics, the discipline that studies the interaction between human capabilities and technology.
For a company, this means that AI innovation must be perceived as human-centric to transform customer trust into the main competitive advantage. In a constantly evolving digital context, brand reputation is built on the transparency of the algorithms used and the guarantee that behind every important decision there is a responsible human being.
Humans Talking to Machines That Talk to Humans
The paradigm is changing rapidly. "If today you want to choose between Apple and Samsung, you no longer go to Google, you ask Gemini or ChatGPT," Riccò observes. This is a radical change in how people search for information and make purchasing decisions. And this means that brands must completely rethink their digital presence.
You have to be relevant to the algorithm so that the summary it produces, the one that reaches the human, is interesting and correct. The direct interlocutor is no longer the human: "we are humans talking to machines that talk to humans." A bit like the game of telephone we played as children, only here the message must arrive intact.
This calls for thinking about the brand holistically. The brand must manage reviews, maintain consistent positioning across all channels, and practice SEO designed so that the algorithm considers what the brand says reliable. The goal is to ensure that the algorithm correctly reads the website, the reviews, third-party sites, competitors, and aggregators, and produces an accurate summary of how the company wants to appear. Only then can the person reading that summary make an informed decision.
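One concrete, if partial, way to help the algorithm read the website correctly is to publish structured data that states the brand's basic facts explicitly. The sketch below generates schema.org Organization markup as JSON-LD; the company details are placeholders, and this is only one piece of the holistic work Riccò describes.

```python
# Minimal sketch: generating schema.org Organization markup (JSON-LD) so that
# crawlers and answer engines can read the brand's basic facts unambiguously.
# All company details here are placeholders.
import json

organization = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Brand",
    "url": "https://www.example.com",
    "logo": "https://www.example.com/logo.png",
    "sameAs": [  # consistent identity across channels
        "https://www.linkedin.com/company/example-brand",
        "https://www.instagram.com/examplebrand",
    ],
}

# Embed the output inside a <script type="application/ld+json"> tag on the site.
print(json.dumps(organization, indent=2))
```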
It is no longer a matter of engagement for its own sake, of visibility at all costs. It is a matter of authority in the eyes of the algorithm, which then becomes authority in the eyes of people. It is marketing on two levels, where the first filter is not human but artificial.
Regulation: The Real Game Changer of the Next Five Years
When I ask him what will be the real challenge that will define the future of artificial intelligence in the next five years—whether it's technological development, legislative regulation, or corporate governance—Riccò has no doubts. "We are very far from the application of the AI Act, and it is the most important thing today: to have an efficient and ethical regulation that governs the application."
Racing ahead on the technology makes little sense if it then has to be dismantled to comply with new rules. Corporate governance is built on laws, so laws are the place to start. "The law is the most important thing from which to frame everything else," he explains. Even if, unfortunately, public administration is too slow to keep up with these changes, regulation remains the absolute priority.
It is an approach that reflects the European model, that of the trust-based economy he mentioned earlier. While America and China race on pure technological power, Europe tries to build an ethical and legal framework that ensures innovation is sustainable and human-centric. A slower approach, perhaps, but potentially more solid in the long run.
The conversation with Simone Enea Riccò leaves one with a certainty: artificial intelligence is not the problem, and probably not the solution either. It is a tool, as powerful as it is dangerous if used incorrectly. The real difference is made by people: those who educate themselves, who develop critical thinking and emotional intelligence, who ask the right questions before investing millions in projects destined to fail. AI does not steal the job of those who make themselves indispensable with unique skills. It only steals it from those who were already replaceable, from those who hid behind the mediocrity of repetitive processes without ever questioning the value they brought.
As Riccò says, it is not a matter of being pessimistic or optimistic about the future. It is a matter of choosing which side to be on: the side of those who suffer the change or the side of those who lead it. And that, in the end, has always been a personal choice.