Saturday, May 2, 2026

The impact of AI automation on our Jobs and our Lives


 

1. Introduction 

I had the pleasure of being invited to talk at the RNS (Rencontres Numériques de Strasbourg), with Yann Ferguson, about Agentic AI and its impact on the future of work. This is a topic that I have worked on for a long time, although the very rapid pace of progress has led me to reconsider my opinions every few months. The interesting part is the tension between a very old debate, the impact of automation on the future of workforces, and the new twists due both to the new capabilities of genAI and to the ambiguity of the “agentic revolution”. For instance, many of the key questions raised by automation were already well addressed in Kevin Roose’s book “Futureproof – 9 Rules for Humans in the Age of Automation”, but 5 years later the landscape of what is possible today and what is coming has changed considerably.

I will simply list here three of my previous blog posts to recall the analysis that I started with:

  • “The Future of Work and the Transformation of Jobs” is the most detailed blog post I wrote in 2019 (the latest revision of a 2016 paper), a foresight attempt to imagine how the deep automation permitted by AI & robots will change the nature of jobs.

My goal today is to propose a short and self-contained synthesis of my thinking about AI automation and jobs. I had to make an editorial selection of a few key ideas for the public lectures that I gave in Strasbourg and Bonnieux, leading to a position which is crisper and easier to grasp than my previous posts. I have no crystal ball, and any position that is “crisper” is also likely to be less true, so this is only offered as food for thought. I will also take this opportunity to update the references and the scientific works that support some of my opinions. For instance, if you have not done so already, I strongly advise you to download the “AI Index 2026” report from Stanford (which I will refer to in this post as AII2026). What makes the topic of this blog post complex is the constant deluge of outrageous claims on both sides, telling us either that half the jobs will disappear in two years or that coding will have disappeared by the end of 2026, while others “show” that no significant value creation or deep transformation has occurred yet, that “95% of genAI projects failed to deliver value”, and that “a bubble of investments” is about to burst. Reading Chapter 2 of the AI Index will help you grasp the amazing progress (especially in 2025) but also realize that a lot of improvement is still required before we can expect AI to do everything autonomously.

The nutshell summary of my analysis is that a deep change has already started, but the rate is, for the moment, much slower than what is claimed by those whose investments require rapid payoffs. There is no surprise here: this is an illustration of the famous Amara’s law. There are two main reasons for the “disconnect”. First, those who report the “amazing things that we can do today to produce code or to perform agentic automation of knowledge work” tend to ignore the “false positives”, all the things that do not work so well. This does not reduce the impact of the amazing feats of science and tech that we have seen recently, but it slows their practical applicability. Second, the tech adoption models are naïve (our jobs are more than sets of tasks) and overemphasize our cognitive load versus “all the other things we do as humans”. The result is a “viscosity” of AI-fuelled business process transformation (which obviously varies and is clearly scale-sensitive). On the other hand, the fact that we see a slow start does not change the fact, pointed out by Laurent Alexandre and Olivier Babeau, that the commoditization of cognitive intelligence will create an anthropological revolution in our societies in the long term.

This blog post is organized as follows. Section 2 proposes a framework to consider the impact of AI on work, at the individual level, at the team and business process level, and at the global enterprise scale (reengineering services and processes through AI). I will also explain briefly why “agentic” is so important: as an engineering discipline, it considerably expands the range of application of genAI. Section 3 offers some elements of thought in the very polarized debate about the speed at which genAI will impact our businesses. I will detail the conviction from the previous paragraph, explaining the reasons for slow progress at scale but also the inevitability of a deep transformation of our job landscape. Section 4 steps back to a macro-economic level to probe further into the possible consequences for our economies. I will start by briefly evoking “The 2028 Global Intelligence Crisis” from Citrini Research, which created a lot of heated discussions in March. I will then return to the “Future of Work Parametric Model” to illustrate the complexity and uncertainty of the societal consequences of job changes produced by AI automation. Last, Section 5 will focus on working with these new forms of advanced AI, since even though the speed of the transformation is uncertain, its depth is bound to be spectacular. We need to adapt to these new tools and this new environment, so I will summarize some of the best pieces of advice that I have read recently.

 

2. Impact of AI on Work

 

For the past 15 years, I have borrowed the McKinsey model, which distinguishes three types of work: production / transaction / interaction. You may read more about this in the original paper “Preparing for a new era of work” from Susan Lund, James Manyika and Sree Ramaswamy at the McKinsey Institute, or in my 2019 blog post. Roughly, “production” regroups the jobs that create physical value, what is often designated as the primary and secondary sectors. “Transaction” collects jobs related to creating immaterial/intellectual value without direct interaction, which covers some of the service sector but also the transactional part of the primary & secondary sectors. The third category regroups jobs where value (intellectual, emotional, practical) is created through human interaction. This is a crude abstract model with overlaps, but it is convenient to characterize the macro transition made possible by AI & robotic automation:

  • Production jobs are mostly shifting to robots. Part of this started a long time ago (I visited my first “human-less” Sharp TV factory in 2010, and it was a shock that I still remember); other manufacturing processes are still very complex and today require the collaboration of robots for strength and humans for intelligence (environmental, situational intelligence and adaptation, more than cognitive intelligence).
  • Transaction jobs are shifting to AI. This was already the trend 10 years ago, but the spectacular capabilities of genAI (and its acceleration) make this even clearer. Cognitive intelligence becomes a commodity: an increasing share of work based on analysis, writing, synthesis (and coding) is no longer scarce, and therefore no longer a differentiating advantage.
  • Human interaction jobs are more resilient: activities based on care, relationships, presence, hospitality, embodied education, or mediation retain a durable human advantage. This is also, by definition, a localized landscape: interaction jobs cannot be performed remotely (otherwise they fall into category #2). Thus, the future of interaction jobs (which could be partially captured by humanoid robots) is a field that lies squarely in the hands of politicians (I will return to this in Section 4; this is what makes the McKinsey model interesting), whereas the “production” and “transaction” categories are globally competitive by nature.

 

If “production moving to robots and transactions to AI” may be seen as the long-term trend, the deep transformation of business processes, whether for production or for transaction, through AI automation moves slowly (considering that it started decades ago). Marco Iansiti explains this very well: adopting genAI individually (from RAG to knowledge and content synthesis to knowledge worker automation) is easy. You just need to decide, spend some time, and off you go. This is what we have seen since 2023, with the acceleration of 2025 brought by Claude Code (Cursor, Codex App, etc.) and OpenClaw. Adopting genAI as a team already requires some discipline, some common tools and a shared framework. Using the same genAI tools to automate a full business process is more difficult: task automation becomes less efficient as the shared context grows. Individual task automation, using genAI as “RPA 2.0”, works, but the benefits are small and do not significantly change the economic results. As beautifully explained by Sangeet Paul Choudary in his book “Reshuffle: Who wins when AI restacks the knowledge economy”, AI’s real transformative power is as intelligent glue that removes friction in processes. Focusing on whether AI can complete each task to replace a human misses the bigger picture of global reengineering: “However, the real impact of AI comes not from how it performs a task, but from how it restructures the entire system around that task”. This is also what Marco Iansiti explained to Michelin executives: to draw the real benefits of AI automation of a business process, you need simplification (streamlining), a shared context with unique, simple (to evaluate) goals, and a shared mindset to let the optimized automation made possible by AI work without suffering from the “additional insights” of each human stakeholder.
When reading Reshuffle, one can see a kind of equivalent of Amdahl’s law: if cognitive intelligence becomes a commodity (and the associated time tends toward zero), the cost of a transactional process becomes primarily an orchestration cost.
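The analogy can be written down explicitly with Amdahl’s classic formula (my notation, not Choudary’s): let p be the fraction of a process’s time spent on cognitive tasks, and s the speed-up that AI brings to that fraction.

```latex
T_{\text{new}} = T\left((1 - p) + \frac{p}{s}\right),
\qquad
\lim_{s \to \infty} T_{\text{new}} = (1 - p)\,T
```

As s grows without bound, process time is floored by the orchestration fraction (1 − p): once cognition is commoditized, coordination becomes the dominant cost.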

The following picture is borrowed from “Labor market impacts of AI: A new measure and early evidence”, a very interesting study from Anthropic that explains the gap between what genAI automation technology is, according to Anthropic, capable of doing today and what it is actually used for. This figure is interesting for two reasons. First, it shows the slow start of genAI automation, made famous by the MIT/Nanda study last year. Second, it also shows the prevalent way of thinking of jobs as a set of tasks ready to be automated. I will return to this – clearly the main point of Reshuffle is that this is not the case, and that context weaving is an integral part of knowledge workers’ jobs today. As stated by Choudary: “When we define jobs purely in terms of tasks, we risk overlooking the constraints that jobs are designed to manage. These very constraints – the context within which those tasks are performed, the coordination required to sequence tasks to get actual work done, and the risk that needs to be managed if something goes wrong – hold jobs together. … Ask someone to explain away their own work as simply a set of discrete tasks, and they’ll likely resist, insisting that it’s a lot more than just that. But ask them to break down someone else’s job, especially one they don’t understand well, and the task model starts to feel plausible”.


Figure 1: Capability map of genAI (observed vs theoretical)

 

The complexity of adoption does not mean that we should expect a slow transformation; disruption will come through the competitive environment. There is a common meme on the Web saying that “you will not lose your job to AI, but to someone who uses AI better”. Based on the observations from this section (and the next), one might put things differently: “You will not lose your job because of AI, but because your company may lose market share to another company (possibly far away) that has found how to adopt AI-based processes faster”. The competitive pressure to adopt AI is strong; as we shall see in Section 4, it is a race that seems difficult to miss. As noticed by Eric Hazan, the pace of AI investment matters, and it varies a lot from one country to another. It is not limited to a question of investment, but also includes the ease of doing AI business because of regulation and the attitude of the market towards AI-driven innovation. Choudary in his book gives the example of Airbus’s use of AI and digital twins to build a significant competitive advantage: “Rather than optimizing isolated tasks, it enables superior coordination across the entire system by identifying bottlenecks and testing alternate workflow configurations. Workflows adapt in response, and roles shift accordingly. Where task-oriented AI enhances performance within existing work structures, the digital twin allows Airbus to redesign the structure of work based on a constantly evolving view of the whole system”. Reshuffle is quoted by Howard Yu in an excellent paper that looks at AI as a force reducing friction in a Coase transactional model of the enterprise.

As stated in the introduction, 2025 is the year when we saw agentic approaches to genAI significantly gain in popularity and proven outcomes, as exemplified by Claude Code. It has taken me most of the year to truly understand why there was a breakthrough: not because agents were autonomous or ready to multiply the capabilities of LLMs, but because agentic discipline is a way to deliver value sustainably, in a form that is maintainable and reliable. Agentic decomposition is a method to outsource the “chain of thought” techniques that are part of any LLM system into the hands of the user, who adds domain expertise to this decomposition. However good, LLMs still suffer from hallucinations when using long contexts. Agentic engineering helps to stay in the “comfort zone” of moderately sized requests. I reproduce here a quote from the AII2026 section called “The Gap between long Context Window and Deep Understanding”: “However, bigger context windows do not translate into deeper understanding, as the gap between accepted and usable context length is wide. Recent research points to different reasons for this gap. On one expert-level, long-context benchmark (LongBench v2), human experts scored just 53.7% accuracy under a 15-minute time limit, and the best model scored 57.7% (Bai et al., 2025). This is a narrow margin in contrast to the structured benchmarks where models have surpassed human baselines, and reflects the difficulty of deep comprehension over long inputs”. As noted earlier, AII2026 is a good read to get a balanced view of AI progress, including agentic frameworks: “AI agents advanced from answering questions to completing tasks in 2025, though they still fail roughly one in three attempts on structured benchmarks. On OSWorld, which tests agents on real computer tasks across operating systems, accuracy rose from roughly 12% to 66.3%, within 6 percentage points of human performance”.
Agentic genAI tools matter to the topic of automation and the impact on work because, on the one hand, this is the right approach to make genAI “the next generation of RPA (Robotic Process Automation)”, but, on the other hand, this is not a magic wand: what I learned in 2025 is that the domain expertise of the human writing the set of prompts (the “context engineering”) matters a lot to reach this goal of reliability and maintainability. Anyone can vibe code (and anyone should, as I will point out later), but producing maintainable software is still an engineering skill, even with agentic automation (to avoid the production of slop code).
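To make the idea of “staying in the comfort zone of moderately sized requests” concrete, here is a minimal sketch of agentic decomposition in Python. Everything here is illustrative: `call_llm` is a stub standing in for a real model call, and the step names, context budget, and validation logic are assumptions of mine, not the API of any particular framework.

```python
# Agentic decomposition sketch: instead of one giant prompt, a domain
# expert encodes the workflow as small, independently checkable steps,
# each kept within a "comfort zone" context budget.

from dataclasses import dataclass
from typing import Callable

CONTEXT_BUDGET = 2_000  # max characters per request (illustrative value)

@dataclass
class Step:
    name: str
    prompt: Callable[[str], str]   # builds the request from the prior output
    check: Callable[[str], bool]   # domain-expert validation of the result

def call_llm(request: str) -> str:
    # Stub: a real pipeline would call an LLM API here.
    return f"result of: {request[:40]}"

def run_pipeline(steps: list[Step], initial: str) -> str:
    context = initial
    for step in steps:
        request = step.prompt(context)
        if len(request) > CONTEXT_BUDGET:
            raise ValueError(f"step '{step.name}' exceeds the context budget")
        output = call_llm(request)
        if not step.check(output):
            raise RuntimeError(f"step '{step.name}' failed validation")
        context = output  # only the distilled output flows forward
    return context
```

The two design points that matter for reliability are visible in the loop: each request is bounded (no long-context hallucination zone), and each intermediate output is validated by a domain-specific check before the next step runs.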

 

3. Foresight and Speed of Transition

 

The speed of the transition to full AI automation for production and transaction activities is slower than what people expect when they experience the marvels of 2026 genAI systems. This is a consequence of the previous section: it requires skills, discipline and courage to adopt the mindset and methods of agentic approaches and to reach large-scale benefits. This is the core of the debate about the Citrini Research report “The 2028 Global Intelligence Crisis”. This essay (a scenario, not a prediction, as stated by the authors) explores, in a system dynamics way, a possible cascading of disruptions that starts with the software industry (laying off employees as agentic automation starts to dominate). I enjoyed reading the paper because the systemic analysis is quite interesting and actually not so far from the FWPM model that I will comment on in Section 4. However, I believe that the timeline is simply impossible, being far too short and too aggressive in the scope of deployment. As pointed out by Albert Meige, it starts with a misunderstanding of the METR “Task-Completion Time Horizons of Frontier AI Models”. The rate of reliable response is low in the METR experiments (50% to 80%), and the fact that Claude Code can do wonderful things (true and proven) does not mean that it can do everything (hence the need for agentic pipelines and engineering). Even if the agentification of software development accelerates, it will take time because it is hard (for large-scale companies and their large-scale legacies). Enterprise software has its own lifecycle and integration complexity constraints, leading to strong inertia. The following figure is taken from one of my many LinkedIn posts about the future of software engineering.
I will not comment on it today, but I am showing it to illustrate my belief that there are many ways of using genAI to help produce software systems, all useful and necessary depending on each enterprise’s context, and that, as a consequence, writing code will not disappear at the end of 2026.

Figure 2: The different forms of AI usage for software engineering

 

One of the big questions is whether AI automation will result in human replacement or human augmentation. The answer depends on many factors; I believe the first factor is whether the initial need was well served or under-served. When a need has been fully met with the appropriate solution, productivity improvements from automation are turned into cost reduction; when a need is under-served, productivity improvements are applied to doing more, resulting in AI as an augmentation tool. For instance, the field of enterprise software, notably as support for the digital transformation of companies, is under-served. This explains the curve at the end of this section and the belief shared by many professionals that the population of software engineers will not shrink in the five years to come. Another interesting example is quoted by Eric Hazan in a radio show about “AI & Workforce”. Legal assistants, such as paralegals, could be seen as the first casualty of genAI, since these AI systems do a good job of mining and analysing documents. It turns out that recruiting has not stopped, because the need for “discovery” is under-served by nature (it is always a competition to find as much information as possible before a trial). What has changed is that you cannot get such a job without being fluent with AI tools. In factories, both situations are visible: some tasks are well specified and executed, for which automation leads to cost reduction; other tasks are still complex and difficult (from quality management to process control), in which case AI automation helps operators do their jobs better.

So far, I have been discussing AI automation of transaction and production jobs, without factoring in the advent of humanoid robots that could do “so many things”. As pointed out by Yann LeCun, this is related to the general “AGI vs ACI” debate (general intelligence versus cognitive intelligence), and is linked to the more specific question of the necessity of a “world model” for an AI/robot to function as a “human substitute”. I am a strong believer that ACI is already there, despite the limits that I have mentioned, and that we are entering a world where cognitive intelligence will be a commodity. If you have not read my previous writing, I make a difference between:

  • ACI (artificial cognitive intelligence): the ability to answer any question, or solve any problem, as an expert human would do, including the ability to derive sub-questions (I do not subscribe to the idea that machines have the answers but are unable to ask the right questions), explore different paths “of thinking” to build problem-solving scenarios, etc.
  • AGI (artificial general intelligence): the ability to function as a human when placed in any work situation, including situational intelligence to “read the environment” and emotional intelligence to interact with other humans. (Figure 1 in this blog post gives more details about AGI vs ACI).

I do not know when AGI is coming (I believe it will come) but I do not see this before 2030, which makes me a sceptic in the world of tech leaders who have invested massively to win the AGI race. On the other hand, the advent of ACI is already a big thing to process.

A world where ACI is a commodity will be quite different, but most of our jobs will simply evolve. I base this claim on a controversial belief: that the share of cognitive time in most jobs is overestimated. I have run this test repeatedly for 5 years: ask someone to measure the fraction of time when they were actually thinking to solve a cognitive task, as opposed to communicating, listening, travelling, attending to others’ needs, etc. I found that, with a few exceptions, the fraction of time is small. Even for developers in IT departments (and they tend to complain about it), the time spent producing code (as opposed to attending meetings or reading code) is small. Humans are not robots, which is rather a good thing; we exercise many other forms of intelligence than cognitive intelligence, and we spend time doing other things than being intelligent. On the topic of general-purpose robots, the AII2026 report also shows that multi-purpose anthropomorphic robots are not ready yet, despite the wonders they exhibit when dancing or practicing martial arts: “Robots still fail at most household tasks, even as they excel in controlled environments. Robots succeed in only 12% of household tasks, highlighting how far AI is from mastering the physical world. On RLBench, robotic manipulation in software-based simulations has reached 89.4% success, but the gap between predictable lab settings and unpredictable household environments is wide”. Gary Marcus’s analyses are known to be biased (he is an expert with a strong thesis), but I find his arguments interesting to read. On the topic of “AI and your job”, I recommend his Fortune article “9 reasons AI isn’t going to take your job (yet)” (the “yet” is important).

The result of these different arguments leads to a factual situation that has been described by labour experts as “nothing much happening yet” (as shown also by the diagram from Anthropic). I had the pleasure of discussing this topic in Strasbourg with Yann Ferguson, who is a true expert on the actual impact of automation on real people and jobs. If you read French, I recommend his paper “Ce que l’intelligence artificielle fait de l’homme au travail”. It will give you a powerful framework to analyse the upcoming transformation. Yann Ferguson argues that AI has reached a level of technical, economic, and organizational maturity that makes it a concrete reality in the workplace, shifting the key question from technology to its human and social implications. He identifies four archetypes of employees in relation to AI: the replaced worker, where automation quietly reduces jobs through attrition rather than layoffs; the dominated worker, whose autonomy and skills are constrained by algorithmic control, raising strong ethical concerns; the augmented worker, who benefits from improved performance and support but faces new skill requirements and potential cognitive downsides; and the “rehumanized” worker, where AI frees time for more distinctly human activities such as creativity and empathy, though this outcome remains ambiguous. To return to the topic of software development jobs, the following figure is taken from an analysis of the tech market from the Pragmatic Engineer. It gives data to support what Yann Ferguson explained in Strasbourg: the bulk of job cuts happened for other reasons (overshooting because of the combination of COVID and digital transformation, plus reskilling strategies from tech players).

Figure 3: Evolution of software engineering jobs in tech companies

 

I will not focus on “Agentic genAI for software” in this post, to keep it short, and also because I have addressed this topic in an earlier post (which is 6 months old, hence quite outdated). I am a big fan of agentic pipelines as a technique to produce maintainable code in a repeatable way, as explained in the previous section. “Vibe coding” (the plankton in the fish picture) seems, for the time being, better used for exploration and quick productivity improvements through “disposable apps”. However, even with this restriction, this is a revolution: “vibe coding” is a new skill, a new “muscle”, as explained by Nicolas Grenié in his 2026 Devoxx lecture in Paris. I see this as an illustration that ubiquitous ACI is already a huge revolution to process. The fact that anyone in the enterprise can ask an AI assistant “show me software that would do this to help me solve this business problem” will both deeply change how fast we can invent new approaches to problem solving and the way the different roles in the company interplay (writing code is no longer a bottleneck or a rare skill, but maintaining operational systems with the expected quality of service is still engineering).

 

4. Economic Impact and Regulation


I have proposed a model, the “Future of Work Parametric Model” (FWPM), as a thought experiment to understand both the high level of uncertainty and the complex feedback loops. A fun fact is that Codex App made me a working prototype as a web site in less than an hour (I am a fan of vibe coding). This model is based on the McKinsey analysis of Section 2 and represents the potential adult workforce as three spheres (see Figure 4): the first one contains the production and transaction jobs from companies that are exposed to worldwide competition, the second one represents local interaction-based jobs, and the third represents people without a salaried activity, who may work in a voluntary/charity activity, or live from assistance (with or without activities). From Section 2 we recall that AI/robotic automation pressure is high for the first category, whereas it is more a local/political decision for the second category (jobs that cannot be displaced geographically, but that could move to anthropomorphic robots). The model looks for a global balance of economic flows (there is enough money/value generated to cover the whole society, including kids and senior citizens) and a “utility” balance, since a stable society cannot rely solely on income distribution; it must organize recognized forms of social usefulness, at least for a large majority of citizens. There are two dystopian ways to circumvent the necessity of utility balance: totalitarian political regimes, or the techno-elitist nightmare where technology is used to overwhelm the thinking of the “useless citizens” with digital distraction. FWPM is not a prediction model; it is a “look-at-the-systemic-consequences” model where you provide your own insights about four “Key Known Unknowns” of FWPM, key questions that drive what the future of the AI-automated workforce may look like:

  1. What is the rate of job reduction in competitive/offshorable sectors (first category) thanks to ACI, once its usage stabilizes? My own value for this model would be 30%, because of the arguments of Section 3 (we are more than human robots).
  2. Should countries and their societies protect their interaction jobs against robotic replacement? This will obviously depend on demography and aging, but I would argue that democracies need to protect these jobs and promote some regulation towards “interaction robots”.
  3. Is UBI (Universal Basic Income) bound to be a form of assistance, or can regulation and taxation be used to create subsistence activity? This is critical to solve the societal utility constraint, unless the dystopian alternative seems acceptable. As you may guess from the way I introduce the question, and from previous writings, I believe in the possible social utility of UBI.
  4. Do you believe that the jobs that will disappear in the first sphere will be replaced by new jobs thanks to Schumpeter’s creative destruction, or do you think that the impact of the climate crisis, energy scarcity and natural resources will activate a “Schumpeterian brake”? My own answer, when exploring FWPM, is conservative, as I believe that the growth of new (meaningful) activities will be constrained in the 21st century by the scarcity of resources, as well as by its consequence: the rise of protectionism and conflict.
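For readers who prefer code to sliders, the bookkeeping behind these four questions can be sketched in a few lines of Python. This is a deliberately crude approximation of the kind of computation a parametric model like FWPM performs; all parameter names, and the 50% robotization figure for unprotected interaction jobs, are illustrative assumptions of mine, not values from FWPM.

```python
# A crude sketch of FWPM-style bookkeeping: given beliefs about the
# "known unknowns", check where the workforce ends up across the three
# spheres. Units are arbitrary (e.g. millions of adults).

def fwpm_sketch(workforce: float,
                competitive_share: float,    # fraction in sphere 1
                automation_cut: float,       # e.g. 0.30 = 30% job reduction
                interaction_protected: bool, # political choice for sphere 2
                schumpeter_rate: float):     # new jobs created per job lost
    sphere1 = workforce * competitive_share   # globally competitive jobs
    sphere2 = workforce - sphere1             # local interaction jobs
    lost1 = sphere1 * automation_cut          # ACI automation pressure
    # Illustrative assumption: half of interaction jobs go to robots
    # if society chooses not to protect them.
    lost2 = 0.0 if interaction_protected else sphere2 * 0.5
    recreated = (lost1 + lost2) * schumpeter_rate  # creative destruction
    employed = (sphere1 - lost1) + (sphere2 - lost2) + recreated
    sphere3 = workforce - employed            # no salaried activity
    return {"employed": round(employed, 1), "without_job": round(sphere3, 1)}
```

With my own settings from the list above (30% cut, protected interaction jobs, a braked Schumpeter rate of 0.5), `fwpm_sketch(100, 0.6, 0.3, True, 0.5)` leaves 91 of 100 adults in an activity; flipping `interaction_protected` to `False` immediately widens the third sphere, which is exactly the kind of systemic sensitivity the model is meant to expose.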

If you play with FWPM, notice that the first slider is the size of the workforce that is still needed after AI automation in 2035 (at iso-activity, before the positive side of Schumpeter’s creative destruction). A 30% work reduction is represented by 70% (Figure 4), whereas if you believe the dramatic forecasts of Silicon Valley engineers as reported by Jasmine Sun in her provocative essay “Silicon Valley is Bracing for a Permanent Underclass”, the value should be set lower than 20%, which makes the issue of the “underclass” obvious. My opinion is much closer to Azeem Azhar’s, who also notes that things will take more time than the dystopian forecasts of Silicon Valley.


Figure 4: Future of Work Parametric Model

 

I do not have a crystal ball and do not pretend that my own setting of the model is better than any other. The goal of the model is precisely to show the complexity – even with a super macro, super simplified view – of workforce evolution with respect to society. It is not possible to mention UBI today without giving more details about the underlying societal model, since universal income is very often criticized as unproductive assistance. Without going into too much detail, UBI is seen here as a tool to turn economically unsustainable but socially appreciated activities into subsistence activities, because some form of negative taxation or subsidy redefines their sustainability. For instance, cooking for your neighbors, custom design and sewing of clothing, woodworking to build custom craft furniture, storytelling to children … and adults, gardening, painting works of art for others, etc. could become subsistence activities catering to a very small local market. In my systemic thinking, I see UBI as a “potential function” that changes the profitability landscape, so that interaction activities that are pleasing both to the provider and to the beneficiary may become viable (at a small scale). To borrow from Avi Reichental, whom I quoted in my older post about UBI, the 20th century has been the century of mass production, not always a progress compared to the experience of past centuries (especially from a planet perspective); the 21st century could be a return to customization and craft, to the satisfaction of unique needs through local interaction (and technology, such as 3D printing, can help). There is no evidence here: the positive attitude towards UBI is a political project that requires effort, creativity and change management. Following this path means opposing a natural drift: letting AI steal our specific know-how and make most of us a “proletariat” (to quote Bernard Stiegler), people whose knowledge has been outsourced to machines.
In all the UBI activities mentioned earlier as examples, success will mean both that an activity that someone is passionate about, with social value, may yield a subsistence income, and that leveraging her or his unique skills, aptitudes and motivation (the opposite of a “proletariat” activity) will offer true utility and self-realization opportunities. I speak of “creativity” because there are many ways to modify the rules of profit, work and taxation to create new zones of social utility. I found it interesting that most Silicon Valley thinkers, such as Vinod Khosla, warn us that AI will tilt the balance between capital and work, and that some form of UBI needs to be created. UBI is often seen in France as assistance (such as the “RSA” income), an opinion soon followed by the claim that assistance leads to idleness and the absence of social usefulness. The goal of the third question (slider) in FWPM is to trigger the realization that this is not necessarily true.

Systemic thinking is a must when contemplating the future of work in 2035 or 2040: the flows of money, utility, and representation (votes) need to be balanced. This is the goal of the very naïve FWPM model, to help the user grasp how automation choices, and the other “key known unknown” beliefs, influence value creation and redistribution. The two key issues for our future are the evolution of taxation to adapt to the AI transformation, and, for democratic countries, ensuring that a large majority of the population keeps the opportunity of self-realization through social utility. On the first topic, I recommend reading “Can we Have Pro-Worker AI? Choosing a path of machines in service of minds” by Daron Acemoglu, David Autor and Simon Johnson. This first issue covers how to raise money to redistribute, which is why FWPM shows the main money flows and how they are impacted by the AI transformation (according to your own beliefs). It also covers how this money is spent, and how some form of UBI is implemented to protect social utility. The two go hand in hand: it is about taxation, what it should be based on (hint: no longer on labour) and how negative taxes (subsidies) can create pockets of subsistence opportunities (previous paragraph). The simple FWPM model takes aging and the cost of social protection into account, but remember that it is only a crude “food for thought” experiment. The second question is how the inevitable increase of inequality – represented in the FWPM model by a crude estimate of the evolution of the Gini coefficient – compounded by a growing percentage of the population without a regular job, and worse, without a social role, can coexist with our democratic ways of representation and political steering. Here I strongly recommend reading Langdon Morris’s book “Hello Future – the world in 2035”, which I have reviewed in the following blog post. 
Langdon Morris’s system dynamics analysis shows the interplay between AI, rising inequality, resource scarcity and geopolitical plays.
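To make the kind of reasoning behind FWPM concrete, here is a minimal toy sketch in Python. To be clear, this is NOT the actual FWPM model: the “sliders” (automation share, capital tax rate, UBI level) and the income distribution (all capital held by the top 10%) are hypothetical caricatures of my own making. It only illustrates the mechanism discussed above: how shifting taxation from labour to capital and funding a UBI moves a crude Gini estimate.

```python
def gini(incomes):
    """Gini coefficient of a list of incomes (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard rank formula: G = 2 * sum(i * x_i) / (n * total) - (n + 1) / n
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

def simulate(automation_share, capital_tax, ubi):
    """One-period macro caricature of the FWPM idea (hypothetical numbers):
    automation shrinks the wage pool and grows the capital pool; a tax on
    capital funds a UBI paid to everyone; return the resulting Gini."""
    population = 100
    wage_pool = 100.0 * (1 - automation_share)     # labour income shrinks
    capital_pool = 100.0 * (1 + automation_share)  # capital income grows
    tax_revenue = capital_tax * capital_pool
    ubi_total = min(ubi * population, tax_revenue)  # UBI capped by revenue
    workers = int(population * (1 - automation_share))
    incomes = []
    for i in range(population):
        base = wage_pool / workers if i < workers else 0.0
        # Caricature: post-tax capital income goes entirely to the top 10%
        cap = (capital_pool - tax_revenue) / 10 if i >= population - 10 else 0.0
        incomes.append(base + cap + ubi_total / population)
    return gini(incomes)

# Same automation level, two redistribution choices:
print(simulate(0.5, 0.6, 1.0))  # capital taxed, UBI funded -> lower Gini
print(simulate(0.5, 0.0, 0.0))  # no redistribution -> higher Gini
```

Running the two scenarios shows the point of the “slider” exercise: with the same automation belief, the inequality outcome depends entirely on the taxation and redistribution choices layered on top of it.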

 

5. Working with AI


If the speed is uncertain but the wave that is coming is clear, the real issue is adaptation. There is a consensus today that “how we adapt to AI” is the most pressing and practical question. This is not an easy question since AI is not a tool, it is a transformative technology. Using a tool like a hammer does not change you; using AI does. This is beautifully explained in the recent Vatican document, “Quo Vadis, Humanitas?” (where are you going, humanity?). Using cognitive assistants, such as our genAI assistants, in a repeated manner changes both our abilities and the structure of our brain. Eric Sadin, a French philosopher whom I have studied in this blog post, has coined the oxymoron of “mutilating augmentation”, which is a good summary of the AI augmentation paradox. There is today plenty of evidence that AI augmentation comes with a price. Let me give three examples, but there are many more available today. The first one is now old (CACM, 2024). Eric Klopfer decided to divide his MIT class into three groups with a programming task to solve in the Fortran language. One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code LLM, and the third group could only use Google. Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed”. The second example shows that the repeated use of genAI creates a false sense of confidence and weakens our critical thinking, as reported in the Microsoft/Carnegie Mellon study. The third example brings brain imaging to show that we become lazy and stop activating some neural functions when AI assistance becomes prevalent (another MIT study).

Another key finding of the past two years is that genAI is, mostly, an amplifier of capabilities. If you know how to do things (code, write, analyse), you will do them better with genAI. If you don’t and use genAI to compensate for your shortcomings, the overall gain is much less clear (sure, AI is doing things for you, but you fail to spot the mistakes and you stop learning altogether, cf. the previous paragraph). Experts seem to experience smaller losses of capabilities (attention, patience, depth of thinking) than novice users. This is explained in the book by Laurent Alexandre and Olivier Babeau that I mentioned in the introduction: “Strong individuals, in terms of skills and discipline, become much more productive; weaker ones risk dependency and loss of autonomy”. The authors advocate for usage discipline: protect your attention, observe daily and weekly periods of concentration, make deep reading a habit. Learning becomes a lifelong requirement in the 21st century, and AI can, and must, help us learn continuously, if we apply discipline in the way we ask questions after having first thought about them, the way we apply critical thinking to the answers, and the necessity to iterate questioning and reflection. The word “discipline” is used here as the opposite of laziness: the main risk when using AI is to follow the path of least resistance and become lazy; here is a quote from the book: “Personal discipline becomes a major economic skill: in a world of constant cognitive copilots, the difference between progress and decline depends on the ability to resist cognitive laziness”. One may find similar pieces of advice in this LinkedIn post from Stephen Klein: think before you prompt; question AI instead of accepting answers; notice cognitive sedation (be self-aware, do not force-feed your brain with too much information); go deeper, not faster.

Systemic thinking is equally important to develop the strengths of AI at the corporate level without weakening resilience, or the ability for collective learning and adaptation. A major concern of Michelin CEO Florent Menegaux about using AI these past five years has been “Go fast, but (mostly) go far”. That is, leverage AI to accelerate what you do, but not at the expense of learning continuously. Hence never consider AI as a black box; keep honing your critical judgement and questioning the answers provided by AI assistants. Going far means adapting to the ever-increasing complexity of the world and its new challenges. Continuous learning is more important than improving efficiency because the company’s success depends on inventing new products and services thanks to newly developed capabilities. This is very similar to what was said in the previous paragraph at the individual level; it is “simply” extended to the collective level (to promote collective learning as well as individual learning). How to use AI to invent a future (products, services, processes) that has not been seen before is out of scope for this blog post, which is already long. Let’s just say that it involves simulation, causality inference and “world models” such as digital twins, and the hybridization of many forms of reasoning, from mathematical/physical models to numerical machine learning models. I will end with another systemic concern about AI automation: it increases the fragility of cybernetic systems and decreases resilience. Optimising for efficiency has a price: it makes the system more dependent on “current conditions”, thus reducing its adaptability (precisely why “going far” means that you should not over-emphasize “going fast”). This has been beautifully explained by Olivier Hamant in his many books, such as “The Third Path of the Living”. 
But the same idea, that over-optimizing based on past data and logic increases efficiency at the expense of resilience, is also at the core of books by Nassim Taleb, such as “Antifragile”, and is also articulated in the “Reshuffle” book by Chaudry that we mentioned earlier: “An overemphasis on optimization, paradoxically, can actively break down whatever coordination exists today, making the entire system less reliable”.

 

 

6. Conclusion

Accompanying the anthropological shift brought by the commoditization of cognitive intelligence will require a deliberate rebalancing of what we value in work and in society. As analytical and synthetic tasks become widely accessible through AI, human interactions will regain central importance and must be both revalued and protected. Activities rooted in care – healthcare, education, hospitality, mediation – embody forms of presence and attention that cannot be industrialized without losing their essence. Likewise, situational and emotional intelligence, which companies have started to recognize as the foundation of “soft skills”, are now emerging as critical capabilities: understanding context, reading subtle signals, and adapting in real time. Education systems, organizations, and social norms are already beginning to shift in this direction, placing greater emphasis on empathy, creativity, and design.

At the same time, learning to live with AI means cultivating complementarity rather than competition. Core skills will increasingly revolve around systemic thinking, process modeling, formalization (including mathematics), and a solid grounding in legal and ethical reasoning – these provide the structure needed to guide and supervise intelligent systems. As stated many times in this post, complementary capabilities remain equally essential: emotional and situational intelligence, observation, and a grounding in the humanities, especially history, which helps interpret complex human dynamics. Beyond skills, attitudes become decisive: critical thinking to question outputs, creative thinking to explore new possibilities, and the ability to build and mobilize networks, in line with the enduring value of diverse connections. In a world of abundant cognitive assistance, the difference will lie not in access to intelligence, but in how wisely and meaningfully it is used.

 


 