1. Introduction
The “agentic acceleration” of genAI is everywhere. We first learned in 2022/2023 to use LLMs to generate documents, then to retrieve information and manipulate knowledge with RAG; now genAI is becoming a powerful automation engine through script generation and task orchestration. Agents started as “macros” or “cooking recipes” on how to use genAI (RAG, fine-tuning, prompting, embeddings, etc.) to achieve a specific task, then became modular and composable objects. This is not a surprise, since the “federation of minds” is an old AI pattern, from the multi-agent approaches of last-century AI to the MoE (Mixture of Experts) approach of modern LLMs. Agents then evolved into workflow or orchestration agents, introducing hierarchies or graphs of collaborating agents. This meant that the CoT (chain of thought) pattern evolved from an internal genAI capability to a scriptable one. This overall evolution is not a surprise either: the same pattern has occurred many times in computer science (reification, composition, higher-order abstraction), but it definitely grows the landscape of what one may achieve with genAI.
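To make the “modular and composable” idea concrete, here is a minimal, framework-free Python sketch (the call_llm stub and the Agent/pipeline names are illustrative assumptions, not any real library's API): each agent packages one genAI recipe behind a uniform interface, so that an orchestration is simply a chain, or more generally a graph, of agents.

```python
# Minimal sketch: agents as modular, composable objects.
# The call_llm stub stands in for a real model call (API client, local model, ...).

from dataclasses import dataclass
from typing import List

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"<answer to: {prompt[:40]}...>"

@dataclass
class Agent:
    """One task-specific recipe: a name plus a prompt template."""
    name: str
    template: str  # how this agent frames its task

    def run(self, task: str) -> str:
        return call_llm(self.template.format(task=task))

def pipeline(agents: List[Agent], task: str) -> str:
    """A trivial orchestration: each agent consumes the previous agent's output."""
    result = task
    for agent in agents:
        result = agent.run(result)
    return result

if __name__ == "__main__":
    summarize = Agent("summarizer", "Summarize the following text:\n{task}")
    critique = Agent("critic", "List weaknesses of this summary:\n{task}")
    rewrite = Agent("rewriter", "Rewrite the summary fixing these issues:\n{task}")
    print(pipeline([summarize, critique, rewrite], "long source document ..."))
```

The same uniform interface is what lets a chain grow into a hierarchy or a graph of collaborating agents without changing the individual recipes.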
Although there is a lot of hype and unfounded claims about what agents can do today, this is a major trend. Truly autonomous agents will take a while, since anyone who plays seriously with the current state of the art is both amazed at the capabilities (take Sonnet 3.7 as an example for writing code or making a crisp summary) and puzzled by the surprisingly simple mistakes they still make. However, composability is a way to increase both relevance and robustness, so I expect that trust will grow. Composability of agents is taking a new dimension with the wildfire spread of MCP (Model Context Protocol) as an agent composition protocol. In a few months, MCP has become the unavoidable topic of genAI conversations, for very good reasons: it irons away the integration effort and introduces dynamic service discovery into the world of genAI composition. I reproduce below an illustration that I posted on LinkedIn about the evolution of SaaS that agents will produce. Though I do not believe that SaaS systems will disappear, the advent of AI agents will considerably change the way we interact with information systems.
Figure 1 : Building SaaS systems for both humans and AI agents
If we look a few years ahead, as agents become more reliable and trustworthy, they will gradually become more autonomous. This is not simply a better RPA technology; this is indeed a game changer in the long run, which has led to the concept of the agentic workforce, a clear indication of the possibility of job replacement. Hence, I decided to share in this blog post a few thoughts about the ethical and risk issues that the fast development of AI agents may bring. This is taken from an invited talk that I gave to the “Réseau Blaise Pascal” last month about the “impact of AI on society and enterprises”. I have already written about my prospective vision concerning the future of work impacted by artificial intelligence. Somehow, this blog post is a follow-up, because AI has moved ahead since 2016, but not in unexpected ways. I will avoid duplication, so you may find more detailed references about the McKinsey model of work, the quaternary economy, or the advent of mass customization in that previous blog post.
This post is organized as follows. Section 2 starts with the question of what ethical AI should be from the viewpoint of three major stakeholders: the company that decides to build an AI solution to achieve a business goal, the employee who is asked to use this AI tool as her new way of working, and the citizen bystander who observes the company. I will then propose a global and simplified overview of the associated risks. Obviously, looking seriously at the risk topic would require a full-length article; my goal here is just to propose a framework as food for thought. I will then take a closer look at the specific topic of societal risks, that is, how AI, while bringing efficiency to the operations of human society, is at the same time dissolving the fabric of human society as far as trust, education, democracy and equality are concerned. Section 3 talks about AI and society in general, with a focus on the future of work and how we need to learn to collaborate with AI agents. I will conclude with a balanced position on two axes. First, I believe that AI agents will bring efficiency improvements that are actually needed considering the challenges that we face, but we need to acknowledge, understand and adapt to the associated risks. A huge transformation is coming, one that requires regulation, deep systems thinking and a huge amount of training. Second, I believe neither that AGI is near, nor that the obvious benefits of the upcoming very smart cognitive agents will translate into double-digit GDP growth (for the very same reason of the huge challenges that we have to face in the 21st century), but I think that the revolution of working together with cognitive agents that “look smarter than us” is around the corner, say 2030.
2. Artificial Intelligence, Risks and Ethics
2.1 Ethical questions raised by the use of AI in business
The use of artificial intelligence in businesses raises major ethical issues related to reliability, transparency, responsibility, and respect for fundamental rights. I will reproduce here the simple analysis that I proposed in the BFM TV interview of October 2024. The following figure represents ethical issues through the lens of three stakeholders:
Those who want to use AI to solve a business problem, the decision-maker who can be equated with the company.
Those who use AI, the users of software that incorporates AI capabilities.
The citizen, who observes that the company uses AI to conduct its operations.
Figure 2: Ethics of AI use according to stakeholders
From the company's perspective, it's about ensuring the robustness and accuracy of systems through rigorous methodologies, managing operational risks, and preventing drift through continuous supervision (MLOps), while guaranteeing data security and access. Trustworthy AI is therefore AI that is constantly monitored, adjusted, and secured.
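To make the “continuous supervision” point a little more concrete, here is a minimal, hypothetical monitoring sketch (the PSI metric and the 0.2 alert threshold are common MLOps conventions chosen for illustration, not a description of any specific production setup): the live distribution of a model score is compared against a reference window, and an alert is raised when it drifts too far.

```python
# Toy drift check: compare a live sample of a model score against a reference
# sample using the Population Stability Index (PSI); alert above a threshold.

import numpy as np

def population_stability_index(reference, live, bins: int = 10) -> float:
    """PSI between a reference sample and a live sample of the same variable."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # avoid division by zero / log(0) on empty buckets
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(0.0, 1.0, 5_000)   # scores observed at validation time
    live = rng.normal(0.4, 1.2, 5_000)        # scores in production, shifted
    psi = population_stability_index(reference, live)
    print(f"PSI = {psi:.3f}", "-> drift alert" if psi > 0.2 else "-> OK")
```

The point is not the particular metric but the discipline: a system that is “constantly monitored, adjusted, and secured” implies this kind of check running continuously, with a human owner who receives the alert.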
From the users' perspective, the ethics of AI involves responsible use, framed by clear guidelines, accessible documentation, and thorough training. It is essential to prioritize human augmentation rather than replacement, keeping humans at the center of decisions. AI must also be designed with environmental awareness, taking into account its energy and material footprint. This framework aims to establish a relationship of trust between company employees and the artificial intelligence tools they use daily. I will not cover the energy and CO2 impact in this blog post, since I have covered this complex topic in a previous blog post, but the question about the fair use of electricity for AI is the most common one nowadays.
Finally, from the citizen's perspective, the implementation of AI must be based on transparency, compliance with regulations (such as GDPR or the AI Act), and responsibility. You will notice from Figure 2 the Russian doll structure of the concerns, which means that each stakeholder in this list inherits the concerns of the previous one. Each system must be traceable, explainable, and auditable, including when based on opaque (black-box) models. In the previously mentioned BFM interview, I insisted on the crucial need for human responsibility. At Michelin, each AI system has an owner (a person) and we take responsibility for any possible mistake (neither the machine nor the software provider does), contrary to the famous Air Canada chatbot incident. Hybrid multi-level approaches, such as systems of systems, can make these systems more understandable. Moreover, respect for intellectual property — particularly in cases of automatic generation of content or code — constitutes an essential ethical requirement. Thus, building trustworthy AI assumes a collective discipline based on principles, internal rules, and external regulations.
2.2 Safety in AI use
The increasing use of artificial intelligence raises major safety concerns. I will rely here on the classification proposed by CESIA (Centre pour la Sécurité de l'IA). I propose a simplified taxonomy with three categories; you will find a more detailed article on their website: https://www.securite-IA.fr.
Malicious uses (AI does what is asked of it, but what is asked is an aggression): cyber-attacks, weapons "augmented" by AI, etc.
Misaligned uses (when AI doesn't do what we want): when the specification doesn't produce the expected effects, which can easily happen because specification is difficult (which explains why we still need computer scientists in a world where code is written by AI).
Uses "with unexpected systemic consequences": biases, loss of resilience, societal damages, etc. This is when AI is doing what we expected/designed at first, but the execution exhibits systemic causal chains with surprising results, ranging from inconvenient to catastrophic.
Figure 3: Security Risks Taxonomy inspired by CESIA
Among the identified risks, the malicious use of AI represents a direct threat: it can be exploited to conduct sophisticated cyberattacks or to enhance digital warfare capabilities. These diverted uses pose regulatory challenges comparable to those of nuclear or biological technologies. The need for coordination and control at the international level then becomes a priority to prevent potential drifts.
Another axis of risk concerns alignment problems, when the objectives pursued by autonomous systems do not correspond to those desired by humans. The mentioned CESIA article lists several examples where algorithms exhibit unacceptable behaviors to accomplish the tasks they are asked to do (going as far as misleading their human interlocutors). So-called "black box" AIs, whose operation remains opaque, complicate security checks. Additionally, the specification of objectives for autonomous agents can lead to unanticipated or dangerous behaviors, especially when AI optimizes for a goal without understanding the underlying human intention. It is therefore essential to frame these systems with rigorous control mechanisms and ethical design from the outset.
Finally, the unintended and systemic consequences of AI should not be underestimated. There is a risk of losing control over large-scale systems, which could generate biases, serious errors, or harmful effects on society: job losses, mental health deterioration, weakening of democracy. Here we find the risks of bias, where particular traits of the training corpus produce undesirable effects in the recommendations built by AI, but also unexpected consequences when AI objectives are specified too narrowly, producing, for example, anti-social pricing policies or recommendations of increasingly deviant and aggressive content because such content increases the probability of reader reaction. To better understand how these biases cause a significant risk to society, one should read the reference book by C. O'Neil, "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy." There are also more subtle risks of losing resilience if AI is programmed to use resources in the most efficient way possible. One of the general laws of systems theory (at the heart of the "Lean Management" approach) is that the loss of room for maneuver leads to fragility. On this subject, one should read Olivier Hamant's book, "La troisième voie du vivant", which explains very well the tension between performance and resilience. In today's uncertain and volatile world, resilience is an essential characteristic that the systematization of AI can diminish. These issues, although having variable risk levels, require a global response including ethical standards, audits, adapted legislation, the creation of independent monitoring agencies, as well as a generalized education effort to accompany the transformation.
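As a toy illustration of the "narrow objective" mechanism mentioned above (with purely synthetic numbers, not real data), consider a recommender that optimizes only engagement when engagement happens to correlate with how extreme an item is: the greedy choice drifts toward the most extreme content even though nobody specified that goal.

```python
# Toy illustration of a narrowly specified objective producing a systemic bias:
# optimizing engagement alone recommends the most "extreme" items of the catalog.

import numpy as np

rng = np.random.default_rng(42)
n_items = 1_000
extremeness = rng.uniform(0, 1, n_items)                 # hidden trait of each item
# synthetic assumption: engagement rises with extremeness, plus noise
engagement = 0.2 + 0.6 * extremeness + rng.normal(0, 0.1, n_items)

top = np.argsort(engagement)[-20:]                        # what the optimizer picks
print("mean extremeness, whole catalog:", round(float(extremeness.mean()), 2))
print("mean extremeness, recommended  :", round(float(extremeness[top].mean()), 2))
```

Nothing in the specification says "promote extreme content"; the drift is an emergent consequence of the metric chosen, which is exactly why these systemic effects are hard to foresee.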
It is very difficult to assess these risks, and this assessment is necessarily subjective. It seems to me that the risk of malicious use is proven. The challenge is to be able to reduce it through international regulation, as we are trying to do with other types of weapons. As for the risk of misalignment, I am more optimistic. The problem is complex, but solutions exist and can be developed iteratively in synergy with AI deployment (which is also the challenge of the ethical framework described in section 4.1). On the other hand, the risk of systemic consequences remains very significant and difficult to control. It is very clear that mastering AI systems in a robust way, which implies being able to foresee the systemic consequences of biases, hallucinations, statistical traps such as overfitting, and so forth, is a difficult task that we are only beginning to grasp. Not surprisingly, we have seen the emergence of many organizations, such as PauseAI, that point to these risks as a compelling reason to pause AI development.
2.3 The societal impact of AI
Another major issue is the cognitive impact of AI on individuals. Studies such as the one from MIT on students using generative AI highlight a risk of excessive dependence on these tools, to the detriment of critical thinking. Michelin CEO Florent Menegaux speaks of the need for a balance between going fast (thanks to AI) and going far (thanks to learning). This debate points to a broader question, posed in 2008 by Nicholas Carr in his essay "Is Google Making Us Stupid?": is our ability to think autonomously eroding in the face of increasing cognitive delegation to machines? Without justifying such concern, the first studies show that we need to learn to use AI and especially learn "to learn with AI." The MIT study cited in "The Impact of AI on Computer Science Education" by E. Shein shows that across three groups of students with varying access to generative AI assistance, the group that goes the fastest (with the tools) is also the one that learns the least. The Harvard Business Review article, "Generative AI and the Future of Work" gives positive arguments in favor of using generative AI tools, but the learning question posed by Florent Menegaux remains fundamental. A more recent study from Microsoft Research conducted on several hundred regular users of generative artificial intelligence tools shows real gains in both efficiency and the ability to explore ideas or subjects more broadly, while warning against a false sense of security or competence produced by these tools.
Furthermore, artificial intelligence, perceived as the final stage of digital transformation, risks accentuating social and economic inequalities. The digital economy tends to favor monopolistic logics, where "the winner takes all." Technological acceleration creates an increasingly marked digital divide between individuals, companies, and territories. This amplifier effect also worsens inequalities in the face of environmental challenges, such as adaptation to climate change, by reserving the most effective solutions for those who master the technologies.
Finally, AI raises growing concern about mass unemployment. While the transition will take time — contrary to dystopian scenarios that announce a brutal collapse by 2030 — it will require a profound adaptation of skills. We often hear the following aphorism: "It's not AI that will take your job, but a person who knows how to use AI." That's rather reassuring since we remain within the framework of the augmented human. On the business scale, the risk is equally clear: those that cannot integrate AI will lose market share to those that adopt an agentic and automated workforce. This more ambitious vision of transforming through AI is more complex to implement (cf. Section 3.5), but it is very visible in China and raises questions about the future job market, with a less pleasant perspective than that of the WEF report discussed below.
The question of complete automation made possible by AI is indeed a debated topic, but it already exists in many domains. In the Vatican text, "ANTIQUA ET NOVA - Note on the relationship between artificial intelligence and human intelligence," we find this quote: "In this perspective, AI should assist and not replace human judgment." What I have seen in many companies over the past 20 years makes me rather think of this quote from philosopher Alfred North Whitehead: "We should not cultivate the habit of thinking about what we are doing. On the contrary, civilization advances by extending the number of operations we can perform without thinking about them."
If this perspective of a disruption in the labor market is indeed emerging, it will take time, beyond the 2030 horizon. AI is a general ingredient of automation, but other things are needed to apply this "intelligence": just as electrification required tools to be developed and processes to be modified, it takes time for good ideas to spread and for the financial and material means of implementation to be found. If we look precisely at each sector of activity, the role of intelligence in value creation is less important than one might think (cf. the notion of "RoI", marginal return on intelligence, from Dario Amodei, which we will discuss in section 3.3); we still need to find energy, raw materials, distribution networks, etc. Here we find a key idea from Langdon Morris's book: we cannot deal with the impact of automation independently of all other issues: global warming, geopolitics, resource scarcity... In many possible future scenarios, having a truly human-level intelligent agent at one's disposal is only a small competitive advantage compared to other factors.
3. Artificial Intelligence and Society
3.1 The approaching arrival of AGI (Artificial General Intelligence)?
While current artificial intelligence, called ANI (Artificial Narrow Intelligence), specializes in specific tasks with impressive performance, the ambition of AGI (Artificial General Intelligence) is to design an AI capable of solving any cognitive problem, autonomously, with performance equivalent to or better than humans. There is no simple and clear definition of AGI, even as the debate around its imminent arrival becomes increasingly topical. I propose here to distinguish three forms (three sub-levels); even if this distinction is not a recognized one, it will make the question easier to address:
I call ACI (Artificial Cognitive Intelligence) the ability of AI to answer all questions asked with the same level of competence as the best humans, and with an autonomous capacity to investigate the question (pose sub-problems, decompose, explore...).
I use AGI (Artificial General Intelligence) when AI has sufficient contextual intelligence to act autonomously, deducing questions to ask from intelligent observation of its context. AGI can truly replace a human, while ACI is a multiplier.
I use ASI (Artificial Super Intelligence) for the emergence of a completely different form of intelligence inherently superior to that of humanity. The reference book on ASI is "Superintelligence" by Nick Bostrom. As an AI practitioner, I find this book very speculative and self-referential (the book formulates hypotheses that are elementary forms of the conclusions the author wants to reach). I find myself closer to this famous quote of Pedro Domingos: “People worry that computers will get too smart and take over the world, but the real problem is that they're too stupid and they've already taken over the world” (back to the risk of lack of control and unintended systemic consequences).
There is no clear consensus on the arrival date of AGI, but the progress made over the past three years with generative AI (GenAI) suggests that a shift could occur sooner than expected regarding the ACI level. Some experts, like Leopold Aschenbrenner or Sam Altman (OpenAI), envision the arrival of an AGI that would be a rapid evolution of ACI before 2030. Leopold Aschenbrenner published an essay in 2024, "Situational Awareness," in which he analyzes the constant progress of generative AI, both in hardware and software. In the tradition of Ray Kurzweil, he extrapolates the growth curves (counted in OOMs: orders of magnitude) and concludes that the ACI level will arrive in 2026-2027 and that ACI will contribute to its own improvement to become an AGI before 2030. A similar narrative may be found in the collective work AI 2027 (with the same two-step growth towards AGI, here in 2027). Others, like Yann Le Cun (Meta), remain much more cautious and remind us that human intelligence itself is very specialized, which makes the objective of AGI much more complex than it appears. In particular, as mentioned above, for Yann Le Cun, one must possess a working and predictive model of the world, continuously updated, to develop a truly autonomous intelligence. Similarly, there is a debate about the type of ACI intelligence that the extrapolation of generative AI allows us to predict. Where Dario Amodei envisions "thousands of Einstein/Nobel Prize winners at our disposal in a data center," Thomas Wolf is more cautious and considers that while the level of excellence "of a very good PhD" is accessible to today's generative AIs, the exceptional level of creativity of a genius such as Albert Einstein is not yet within reach of today's AI architectures. Thomas Wolf is the co-founder and Chief Science Officer of Hugging Face and has published a very interesting article on LinkedIn, to which I completely subscribe.
Despite these uncertainties, one thing seems clear to me: ACI, in the form of a universal cognitive assistant, is already in the early stages of deployment and will progressively transform all human activities. Conversely, the arrival of an ASI and the existential risk it poses remains very speculative, and the experts I just cited do not express themselves on this possibility. Eventually, all so-called "computational" or cognitive tasks could be performed more efficiently by AIs, first in collaboration with humans (centaur model), then autonomously. Without waiting for the next revolution of a fully autonomous AGI, the "simple form" of AGI that ACI represents is already capable of profoundly transforming human society, which we will now discuss.
3.2 The future of work in a world conquered by AI
Automation already has a significant impact on employment, and this trend will accelerate with the rise of artificial intelligence. According to studies that are now somewhat dated (Frey & Osborne, Brynjolfsson, Ford), up to 50% of current jobs could be threatened in the coming decades. A great update on these studies may be found in the fifth chapter of Ray Kurzweil's book, “The Singularity Is Nearer: When We Merge with AI”, from which I draw this quote: “Over the decade since that report was released, evidence has continued to accumulate in support of its startling core conclusions. A 2018 study by the Organisation for Economic Co-operation and Development reviewed how likely it was for each task in a given job to be automated and obtained results similar to Frey and Osborne’s.” Repetitive and specialized tasks are the first to be automated, but eventually, all jobs are bound to evolve under the effect of efficiency gains (by virtue of the empirical observation that a job where 50% of tasks, measured in time, can be automated sees its associated workforce reduced by half). Even if more optimistic visions exist (such as those of the OECD or the World Economic Forum, WEF), the transformation of the work world seems inevitable. The latest 2025 WEF “Future of Jobs Report” continues to assert that the balance of changes brought by AI will remain positive, but this seems more an act of faith than economic reasoning, unless one postulates strong economic growth supported by AI (we will return to this, but in a world constrained in resources and forced into costly mutations of its production apparatus, this is clearly optimistic). I find myself much closer to the estimate of Ray Kurzweil, who sees a major job reduction impact coming between 2030 and 2045: “If adoption proceeds quickly, half of this work could be automated by 2030, while McKinsey’s midpoint scenarios forecast 2045—assuming no future AI breakthroughs”.
The replacement of humans in processes, however, does not happen overnight. Automation first targets specialized roles, where the task is stable and well-defined. Gradually, the work environment itself becomes intelligent, with omnipresent digital assistants that accompany collaborators in their daily activities. This "ubiquitous automation" transforms tools and workplaces into collaborative platforms between humans and machines, rather than simple human-robot substitutions.
This profound mutation leads to the emergence of a new employment landscape, structured around three major domains: production (ensured by robots), transactions (managed by AI), and interactions (reserved for humans). I borrow this framework from the article "Preparing for a new era of work" by S. Lund, J. Manyika, and S. Ramaswamy. Even if it dates from 2012, this analysis remains relevant and constitutes a good framework for formulating prospective analyses on the future of work. This model highlights a reorientation of value toward emotional intelligence, creativity, storytelling, and the ability to create connections. The experience economy thus takes over as routine tasks are automated. As interaction is by definition localized, this leads to a hybrid vision of a quaternary economy in which sectors subject to mass effects and quasi-monopolies supported by digital technology (in terms of production and concentration) coexist with a multi-scale local economy of experiences and interactions. This paragraph is inspired by a text of mine that appeared on FrenchWeb in 2016 (https://www.frenchweb.fr/le-futur-du-travail-et-la-mutation-des-emplois/267902), in which I allude to the quaternary economy concept due to Michelle Debonneuil. To illustrate this idea, I take the example of a theoretical "gardener of the future," who probably uses one or more robots, but sells an "experience," in the sense that he tells a story. He can also benefit from a technological platform that provides him with autonomous robots that will mow the lawn or trim the hedge. He "programs" the system (garden + robots + environment) with speech. The vision I heard in 2016 at Singularity University, "We won't program computers, we'll train them like dogs," is now unfolding before our eyes with the advent of generative AI.
The future of work will also depend on demographic factors and how societies integrate robotization into human interactions. For societies that will still have an abundant workforce, it is important to reserve interaction tasks for humans and rely on the personalization of products and services as a space for value creation (and jobs). Conversely, in societies that will find themselves with a labor shortage by the end of the century and that do not wish to resort to immigration, it is possible, and probable, that robots will also appear for certain interaction tasks. The rapid and spectacular progress of humanoid robots, as witnessed during the Chinese New Year celebration, shows that all futures are technologically possible and that there will indeed be a societal choice to make so that we do not find ourselves in the situation described by Pierre-Noël Giraud in his book "The Useless Man".
3.3 Living with agents "smarter" than us?
The emergence of artificial intelligences capable of surpassing human intelligence in a large number of tasks is disrupting our relationship to work, knowledge, and ourselves. Dario Amodei and Jared Kaplan (Anthropic) describe this near future as "an army of PhDs at your service," where AI becomes a virtual colleague capable not only of analyzing data but also of designing and conducting experiments. This dazzling acceleration of science and technology — illustrated by cases like AlphaFold or the 2024 Nobel Prize in Chemistry — is beautifully described in Dario Amodei’s essay, “Machines of Loving Grace”. In this essay, he introduces the key idea of RoI, "Return on Intelligence," which raises the question of the marginal value created by additional intelligence, and the answer is mixed. On one hand, in many cases automation is already quite advanced, and the share of human intelligence in the production cost is actually moderate. On the other hand, the domains of science and technological exploration are strongly dimensioned by the availability of "brain power." Dario Amodei describes the future not as a world with more powerful analysis tools, but as one where we truly have autonomous and competent digital assistants.
In this context, artificial intelligence ceases to be a simple tool and becomes a workforce in its own right. I encourage you to read Olav Laudy's text on LinkedIn, "The Rise of AI-Orchestrated Work." It contains key ideas about the skills needed in tomorrow's world: "critical thinking," because AIs will keep making mistakes for a long time to come; "systemic thinking"; and "exponential thinking," because the task of formulating the vision of the future must remain in human hands. Olav Laudy uses a framework named APEX (Automation Mastery, Process Oversight, Ethical Governance and eXponential Thinking), which is a good framework for building the AI learning programs mentioned in section 2.3 (learning to learn with AI).
As Olav Laudy explains, AI management becomes a central competency, with its own requirements: optimization, process engineering, ethical responsibility, and systemic thinking. The mode of collaboration also evolves toward a "centaur" model, where humans and AI complement each other, one bringing intuition, the other computing power and execution speed. This hybrid cooperation calls for a new view of work, of learning, and of how decisions are made. We find here a distinction introduced in section 3.5: using generative AI to gain efficiency in one's current practice is an evolution (easy but limited gains), while learning to orchestrate processes differently to make humans and (relatively) autonomous agents collaborate is a rupture (greater gains, but a profound questioning of the way of working).
On a more personal note, this technological revolution resonates with a talk that I gave at the École Normale Supérieure in 1984, dedicated to the end of work in the AGI era. I was already defending the idea that we should move away from an overvaluation of pure intelligence to refocus on our relational, emotional, and human qualities. Of course, in 1984, the question of AGI's arrival was very speculative, but it already posed an anthropological question about the nature of man and the place of his intelligence. One of the key ideas of this presentation was formulated as follows: "a man is more than his brain; his brain is more than a computer." This statement is both accurate in the face of a time-and-motion analysis of employees in most companies, and liberating from the anxiety that the arrival of assistants "smarter" than us can cause. Note that the quotation marks highlight the multiplicity of forms of human intelligence, and the fact that our brain is more sophisticated than today's AIs (the point strongly emphasized by Yann Le Cun). Nevertheless, learning to live with AI requires bringing down from its pedestal a calculatory form of human intelligence. Faced with these increasingly brilliant AIs, it becomes essential to learn to better appreciate ourselves, and to make peace with what Günther Anders called "Promethean shame" — this feeling of inferiority in the face of our own creations.
4. Conclusion
In conclusion, artificial intelligence is redrawing the contours of our world by automating everything that is repetitive, reinventing processes within digital twins, and absorbing the growing complexity of our societies. The acceleration of generative AI and the emergence of intelligent agents offer a unique opportunity to reallocate our time toward tasks with higher human value, as is pointed out in the study "Generative AI and the Nature of Work" by M. Hoffmann et al. This article shows that the use of generative AI allows for more autonomous work, encourages experimentation, and allows for the reallocation of time to "core" subjects. The world of tomorrow will be built in simulated environments, enriched by machine learning, in all sectors of the value chain. However, while AI helps us better manage complex systems, it also introduces its own challenges. It is therefore up to us to show discernment, responsibility, and creativity to make the best use of it, while preserving what makes our singularity: our humanity.
Besides this philosophical question, I would like to add a practical piece of advice for people who are applying AI to reengineer their business processes. First, learn to copilot, that is, find where genAI is able to automate parts of your repetitive tasks. Second, grow agentic recipes, that is, learn to turn these automations into modular and composable agents. Last, refactor your process, that is, redesign it through the search for simplification and for shared KPIs that ensure large-scale alignment (a minimal sketch of this progression follows below). Here I reuse my favorite living-systems law, which states that any incremental growth method must be coupled with refactoring.
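Here is a minimal, hypothetical sketch of that progression (the function names, the toy "ticket" example and the KPI are illustrative assumptions, not a prescribed method): step one automates a single repetitive task, step two wraps such automations into a composable recipe, and step three evaluates the refactored process against one shared KPI rather than local, per-step metrics.

```python
# Sketch of the copilot -> agentic recipe -> refactored process progression.

from typing import Callable, List

Step = Callable[[str], str]

def copilot_draft_reply(ticket: str) -> str:
    """Step 1 (copilot): one repetitive task automated (stub for a genAI call)."""
    return f"draft reply for: {ticket}"

def make_recipe(steps: List[Step]) -> Step:
    """Step 2 (agentic recipe): turn individual automations into one composable step."""
    def recipe(item: str) -> str:
        for step in steps:
            item = step(item)
        return item
    return recipe

def shared_kpi(outputs: List[str]) -> float:
    """Step 3 (refactored process): one process-level KPI (here, a toy 'reply rate')."""
    return sum("reply" in o for o in outputs) / max(len(outputs), 1)

if __name__ == "__main__":
    triage = lambda t: f"[triaged] {t}"
    process = make_recipe([triage, copilot_draft_reply])
    tickets = ["invoice error", "password reset"]
    results = [process(t) for t in tickets]
    print(results, "shared KPI:", shared_kpi(results))
```

The shared KPI is the refactoring lever: once every step (human or agent) is judged against the same process-level measure, simplification and alignment become explicit design goals rather than afterthoughts.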
I am perfectly aware of the discomfort that a “balanced perspective” may cause (e.g., saying that AGI is not here, but the weaker form of ACI is around the corner; or saying that AI is unstoppable because of the obvious benefits that it brings, but that it needs to be regulated and managed carefully at the same time since the societal risks are very serious). I can only share three thoughts that reflect my state of mind:
- This is a time to move fast with technology; the pace of performance improvement and innovation is exciting, and we are definitely “living through interesting times”. Try everything first-hand: this is the best way to learn and the only way to cut through the layers of hype and pure fallacies that we are covered with daily. These tools are amazing, but adult supervision is still required.
- Still, it is important to move fast since understanding when and how to leverage gen AI agents is a competitive advantage. It is wise to recognize the shortcomings of the current tools, it is unwise to ignore them.
- However, fast is less important than far: stay in control and keep learning. To paraphrase Yann Le Cun one more time, true intelligence comes from building causal models that work (in a VUCA world), and that requires exploration and learning, not simply being the best at leveraging correlation-based compression of past knowledge.