Tuesday, December 30, 2025

CCEM System Dynamics Calibration


1. Introduction

A few months ago, I had the pleasure of presenting CCEM at a summer school on System Dynamics organized by Arnaud Diemer. In this talk, "A System Dynamics Model for Global Warming Impact, from Energy Transition to Ecological Redirection", I had the opportunity to emphasize the lineage with the original "Limits to Growth" system dynamics model from MIT. CCEM (Coupling Coarse Earth Models) is built by coupling five "simple/coarse" system dynamics models that represent the worldwide energy production system, the consumption ecosystem, the economy, global warming due to CO2 emissions, and the societal reaction to this warming (ecological redirection). System dynamics models are simple (few quantities, relationships and equations) and "from first principles": they are calibrated with past data, but they represent the designer's "world model", and they expose part of the complexity of the subject matter through their implicit non-linear behavior, with reinforcing loops and delays.

CCEM is a specific model because it recognizes that much is unknown when producing such a model, through the concept of KNU (Key Known Unknown). Last November I was invited to share CCEM during a lively event organized as a dinner by Hélène Campourcy for Umantex and Think Innov. Instead of focusing too much on the model, I chose to organize three debates about three major KNUs: the level of damage produced by warming, the complexities of the energy transition, and the geopolitical tensions between competition and cooperation in a world facing both resource shortages and catastrophes. CCEM is not a tool that tells you what to think: it is a tool that takes a set of KNUs as input (your "mental model" of the moment) and transforms them into 21st-century trajectories, called scenarios in the world of IAMs and the IPCC. You look at the scenarios, find interesting systemic couplings that you might have missed, revisit your beliefs and iterate. As I like to say, "this practice does not make you smarter but less stupid". Let me add to this crude statement two great quotes about this process:

  • First, a quote from Hannah Ritchie (from one of her newsletters): "A scenario is basically a "what if" story. It makes a bunch of assumptions, and asks what the outcome will be — in terms of energy supplies, or CO2 emissions — if those hold true. They're useful in exploring what the future would look like if the world did x or y."
  • The second quote is from Paul Caseau (a founding member of the NATF, and a much older quote, from 1992): "Numerical models are not an extension of our ability to "theorize"; they are rather a new way of experimenting. Yet this experimentation is unique: it does not teach us anything truly new about the world around us, but it allows us — with surprising efficiency — to draw out all the consequences of what we already know. It is this efficiency that, in a way, creates a rupture". This is the heart of my previous statement, and it applies beautifully to system dynamics models: they are not here to help you discover new knowledge, but to better understand the systemic consequences of what you believe to be true.

This blog post is about "calibrating CCEM", that is, producing "median KNUs" that we can then modify according to our beliefs. The median KNUs are not "the most likely forecast", but an anchor point that is needed to start the simulation cycle. The last CCEM development cycle, which produced CCEM version 8, focused on calibration through two approaches: shifting the time origin, and calling advanced reasoning models (Gemini 3, GPT 5.2 and Sonnet 4.5) to get a consensus. As someone who has practiced SD models for 20 years in many fields, I have found the following four criteria useful to assess the quality of an SD model:

  1. A good SD model surprises you: it embeds enough complexity to help you see things “you did not know were the consequences of what you believed”. A useful SD model is the opposite of a model that tells you what you put in it.
  2. A good SD model is causal and not based on correlation. This is what "from first principles" means: the relationships between the model variables are not data-driven (derived from past observations through statistical analysis) but are expressed with formulas that can be explained. This is less precise than extracting the model through machine learning, but better suited to exploring futures whose data spaces are very different from the past.

  3. A good SD model should be "as simple as possible, but no simpler", to borrow Einstein's famous quote. The KISS / Occam's razor principle is critical to avoid overfitting. This is what I will describe in today's blog post. A great thinker who has explained the necessity of simplicity when modeling complex systems is Nassim Taleb (in his many books, but especially with Antifragile and the "via negativa" aphorism).
  4. Last, a good SD model should not depend on the time origin (the year when we start the simulation). For CCEM v8, I switched the time origin from 2010 to 1980 and explored what happened. This is where overfitting (adding complexity to the model, in violation of the previous principle) shows. Note that, ideally, modeling should work in both directions of time: produce a plausible future from the past and a plausible past from the future (a great idea for a future improvement of CCEM).

This blog post is organized as follows. Section 2 presents the changes that we have introduced in CCEM v8 to cope with this new 1980 "simulation start". Reproducing the 1980-2020 past data revealed that CCEM v7 was both too precise and too rigid. The new economic growth model is both more general and more adaptable to what we have observed. Section 3 focuses on energy, from both the energy production and the energy transition angles. Moving back to 1980 and looking at the evolution of energy prices in constant dollars has produced a different picture. This is also where the constant progress of reasoning LLM-based systems has impacted CCEM. For three years I have tried to automate the production of the many hundreds of past data variables and derivatives that feed CCEM. I had to wait until last summer to get trustworthy (and sourced) answers from GPT, but this now works well with GPT 5.2, Gemini 3 and Sonnet 4.5. Section 4 talks about global warming damage and adaptation, a topic that was at the center of the previously mentioned dinner. As I explained in a previous blog post, damage is one of the most critical and most uncertain KNUs. It is critical because this uncertainty is a root cause of the political inaction that we see COP after COP. It is uncertain because understanding the consequences of future catastrophes on the economy is indeed very hard. The range of opinions in the published literature is surprisingly large, as we shall see.


2. Economic Growth Model

In the spirit of keeping this blog post short, I will only give you the main insights that emerged during the new "calibration". CCEM v8 and the associated graphical user interface will be made available soon. Trying to calibrate the previous (v7) version with 1980 data revealed three major questions:

  • How to explain the high growth rate of India and the very high growth rate of China?
  • How to explain the trajectory of Europe and its actual decline over the past 15 years, when GDP is measured in constant dollars? Euro/dollar parity accounts for only a small part of the explanation.
  • What should be the influence of population size on GDP? It was clear in the 20th century; it is not so clear for the 21st. This is actually where the AI-fueled automation debate will appear.

The following slide shows a representation of the new growth model. It must first be said that it was indeed possible to produce the desired trajectory with CCEM v7, but at the expense of tweaking parameters with no explanation, in violation of principle (1). The growth model structure is unchanged: growth is the difference between Investment x ReturnOnInvestment and Decay (the rate at which production infrastructure must be replaced or maintained). What is different in v8 is that we introduce multiple RoI factors: competitive labor costs, social expenses as a ratio of GDP, energy costs, and "tech & efficiency". The first three are descriptive factors extracted from past data; the last one is a subjective (modeler's) evaluation of the efficiency of innovation based on software and tech fluency. With this model and its equations (not shown here), one is able to calibrate both from 1980 and from 2010, with a tuning lever for the "Tech Efficiency" KNU that is both moderate and explainable. This is not a "better model" because it can be tuned more precisely; it is a better model because it can be "calibrated with sense".
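To make this structure concrete, here is a minimal sketch in Python of one growth step. This is not the actual CCEM v8 equation (which is not shown in this post); in particular, the multiplicative composition of the RoI factors and all numerical values below are illustrative assumptions of mine.

    # Illustrative sketch of the v8 growth structure (not the actual CCEM equations):
    # growth = Investment x ReturnOnInvestment - Decay, with RoI modulated by factors.

    def roi(base_roi, labor_cost_factor, social_expense_factor, energy_cost_factor, tech_efficiency):
        # Multiplicative composition is an assumption made for this sketch;
        # the first three factors are descriptive (from past data), the last one is the KNU.
        return base_roi * labor_cost_factor * social_expense_factor * energy_cost_factor * tech_efficiency

    def gdp_step(gdp, investment_rate, base_roi, factors, decay_rate):
        investment = investment_rate * gdp
        growth = investment * roi(base_roi, *factors) - decay_rate * gdp
        return gdp + growth

    # Hypothetical values for one zone and one yearly step
    gdp_2010 = 17.0e12  # constant dollars (illustrative)
    gdp_2011 = gdp_step(gdp_2010, investment_rate=0.22, base_roi=0.5,
                        factors=(1.0, 0.95, 0.98, 1.05), decay_rate=0.08)
    print(round(gdp_2011 / gdp_2010 - 1, 3))  # implied yearly growth rate (about 2.8% here)

The point of the decomposition is that each factor can be calibrated or argued about separately, instead of hiding everything in one opaque coefficient.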


The following shows the simulation results obtained with the "median value" of the "Tech & Efficiency" KNUs (there is still a factor that is unknown and reflects the modeler's beliefs, which are my own in this case). When playing with the simulator (G2WS), the user may change this to match her/his intuition. Although the model was changed so that we can calibrate both from 1980 and from 2010 with similar results, we show here the GDP of the five geopolitical blocks, both in constant (2010) and in current dollars.

 

[Figure: GDP of the five geopolitical blocks, in constant (2010) and current dollars]

 

Here are the obvious observations that one may draw from these scenarios:

  • Looking at the data with and without inflation paints a different picture. With inflation, recession is disguised as “slow growth”.
  • This model is bad news for Europe and fits perfectly with the conclusions of the report proposed by Mario Draghi. I will return briefly to this topic in the conclusion.
  • The 2030 values (factoring in 2% inflation) are close to the economists' consensus, as reported by my favorite LLMs, but this is not the case for 2050. Both Europe and the "Rest of the World" suffer from oil & gas price increases.

I also need to address my modeling choice of using constant dollars as the main unit, rather than PPP (purchasing power parity) as advised by many economists and proposed by Nassim Taleb in his last essay "The World in Which We Live" (which is a great source of inspiration, especially to understand the importance of "decay" in the previous model). PPP is a way to compensate for both exchange-rate variation and inflation. However, the "price of a market basket" is not a good driver for what really matters to growth: high-skill salaries (think data scientists), the cost of high-tech suppliers (think GPUs) and the cost of strategic raw materials (think copper or oil). For these things, there is a world price, and the PPP correction exaggerates the comparison between low-cost and high-cost countries (sure, you may buy more chairs with one dollar in Vietnam than in the US, but it does not really matter at the macro CCEM level).

 

3. Energy Transition Model

Trying to explain the evolution of energy production in volumes and prices, starting from 1980, has proven to be an interesting challenge. The chart in the illustration below shows production prices (at the worldwide level) at a few dates between 1980 and 2023, all expressed in constant 1980 dollars, that is, adjusted for inflation. The result is something that economists know, but that is hidden from public view (because we consume energy at domestic prices, which include rising taxes): energy prices have declined over the past four decades (past the oil crises of the 70s). If you have trouble believing this (and I would), please look at the natural gas price chart and then adjust for inflation. Consequently, CCEM v8 is simpler: instead of using a price sensitivity (elasticity) extracted from past data to reproduce the production price evolution, we introduce another higher-level KNU, the "stablePrice", which is the stable market price when energy demand matches the supply level, hence a price driven mostly by production costs. CCEM then computes an equilibrium price for the actual situation, where the price is adjusted to match demand and supply. This model is easier to calibrate and works well both when starting in 1980 and in 2010. Let us also remember that the energy price does not play a major role in global warming; what matters is the volume of fossil energy consumed.
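To illustrate the idea (this is my own sketch, not the CCEM v8 price equation), the equilibrium price can be written as the stablePrice adjusted by the demand/supply imbalance; the adjustment exponent k and the numbers below are assumptions made for this sketch.

    # Illustrative sketch: equilibrium price derived from a "stablePrice" KNU.
    # The adjustment exponent k is an assumption of this sketch, not a CCEM parameter.

    def equilibrium_price(stable_price, demand, supply, k=1.0):
        # When demand matches supply, the market price is the stable price
        # (driven mostly by production costs); otherwise it moves with the imbalance.
        return stable_price * (demand / supply) ** k

    # Hypothetical numbers: oil at a 60 $/barrel stable price, demand 5% above supply
    print(equilibrium_price(60.0, demand=52.5, supply=50.0))   # about 63 $/barrel
    print(equilibrium_price(60.0, demand=50.0, supply=50.0))   # 60 $/barrel at equilibrium

The calibration benefit is that the stablePrice is a quantity one can argue about (production costs), whereas an elasticity extracted from past data is harder to defend when exploring futures unlike the past.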

 

This is also a topic where CCEM calibration has been modified through the use of LLMs to look for a consensus. The following illustration corresponds to the question "what is the current estimate of fossil reserves (for oil, natural gas and coal), expressed in PWh, at the current (2020) production price, at double and at quadruple this price". The results are consistent with what I had gathered from interviews and literature reading, but obtained in minutes versus days. This is still a KNU! The fact that the three major LLMs provide similar answers only stems from the fact that they use the same sources. When playing with the simulator, the amount of available fossil resources at a given cost is one of the major "beliefs" that you may play with.

 

[Figure: estimated fossil reserves for oil, natural gas and coal, in PWh, at the current (2020) production price, at double and at quadruple this price]

This is still more fossil fuel than I had used in CCEM v7 (a 2025 view versus a 2020 view) and much more than I was using in 2008 when I started working on CCEM. The "peak oil" has shifted from 2020 (in the 2008 view) to 2035 with the 2020 data (CCEM v7 simulation), and to 2042 with the input KNU that corresponds to the previous chart (used in v8). This has a direct impact on the global warming "observed" in 2100: you get +2.6, +2.8 or +2.9C compared to the pre-industrial level, depending on the fossil reserve KNU. It is interesting to notice that the uncertainty comes from so-called "unconventional" fossil fuels, since, for example, the peak for "conventional oil" was observed in 2006.

While I was calibrating CCEM v8, it was interesting to update the renewable energy figures with the recent acceleration that has been observed in China. Because of this acceleration, GPT5 forecasts 14 PWh of renewable electricity in 2030 (versus 10 PWh produced in 2020), which is fairly consistent with what I would expect given the reliable data from IRENA. Note that the hard part for wind or solar is not the manufacturing but the deployment. Still, this is a higher number than in previous CCEM simulations (but still small enough that it does not change the big picture).

The following illustration is the energy scenario (energy production in PWh, without the usual amplification applied to renewables; look at Our World in Data to get the current data).

[Figure: CCEM v8 energy production scenario (PWh)]


If you take the time to look at the “World Energy Outlook” from IEA (2025 edition), you will see that the 2010-2050 portion of the upper chart is fairly consistent with what the IEA forecasts (see the illustration below as an example).

These are still hypothetical results, driven by beliefs about the KNUs. Hannah Ritchie proposes an interesting analysis in her article "Will oil and gas consumption keep rising through 2050?". I will let you read her viewpoint (not everything should be taken for granted), but I really like her analysis of PV (photovoltaic) growth, which has been constantly underestimated by the IEA for more than a decade (this is what makes it a key "known unknown"). She also points out that the hypotheses about electrification, such as the share of electric versus ICE vehicles, are a key driver of energy consumption forecasts.

It is indeed very clear to anyone who has worked on long-term energy planning, from RTE (read their recent plea to accelerate electrification) to Vaclav Smil (cf. my previous blog post about the "Energy Matters Modeling Manifesto") or Yves Bamberger, that the usage transition is the key "known unknown". It can be stated without doubt that electrification is the major KPI of energy decarbonization. In CCEM v8, I have introduced energy consumption sectors (from the EIA) to solidify the KNU hypotheses. Predicting the transition from one source of primary energy to another is hard: you must match demand (usage, the hard part) and supply, take into account how the energy is consumed (mobility constraints, storage constraints) and the possibility of using an "energy vector" such as electricity or hydrogen. In CCEM v7, the KNU parameter was the substitution timetable for each geopolitical block. Preparing this matrix for each zone is tedious and somewhat repetitive. In CCEM v8 we have four macro sectors (see the illustration below) and thus four associated KNUs (a transition timetable for each), from which the zone transitions are extrapolated using known IEA data; a small sketch of this extrapolation is given below.
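Here is a minimal sketch of that extrapolation, with hypothetical sector timetables and sector shares (the real ones come from IEA data and from the four KNU timetables): the zone's electrification trajectory is obtained as the sector-share-weighted average of the macro-sector timetables. The weighting scheme itself is my assumption for illustration, not the documented CCEM v8 formula.

    # Illustrative sketch: derive a zone's electrification trajectory from
    # four macro-sector transition timetables (the v8 KNUs) weighted by the
    # zone's sector mix. All numbers below are hypothetical.

    # Electrification rate per macro sector at a few milestone years (assumed values)
    sector_timetable = {
        "industry":  {2020: 0.28, 2030: 0.34, 2050: 0.45},
        "transport": {2020: 0.02, 2030: 0.08, 2050: 0.30},
        "buildings": {2020: 0.33, 2030: 0.40, 2050: 0.55},
        "other":     {2020: 0.15, 2030: 0.18, 2050: 0.25},
    }

    # Share of final energy demand per sector for one zone (assumed, sums to 1)
    zone_mix = {"industry": 0.35, "transport": 0.28, "buildings": 0.30, "other": 0.07}

    def zone_electrification(year):
        # Weighted average of the sector timetables with the zone's sector mix
        return sum(zone_mix[s] * sector_timetable[s][year] for s in zone_mix)

    for year in (2020, 2030, 2050):
        print(year, round(zone_electrification(year), 3))

The gain is practical: instead of filling one substitution matrix per zone (the v7 approach), only four sector timetables need to be debated, and the zone differences come from observable sector mixes.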

 

 

It is important to emphasize that this is not a better model than the previous v7; it is a simpler model that is easier to explain! You have seen that I have an "energy transition acceleration" hypothesis, based both on IEA past data and on sector prospective analysis (my own belief, which is closer to Hannah Ritchie with a pinch of Vaclav Smil). With the energy transition that corresponds to the previous table, we get the following electrification rate: starting from 10% in 1980 and 16% in 2010, we get 23.1% in 2030 and 29.5% in 2050. If we are more conservative and apply a "transition as usual" belief, we get 21.7% in 2030 and 25% in 2050. Here you can see the viscosity of the electrification transformation: it started in the 1950s, but it really takes time, energy and money. What you can get from this simulation is that you should not believe people telling you that we are not making progress towards decarbonization (read Hannah Ritchie), nor the utopian NetZero scenarios based on electrification ratios higher than 50% (read Vaclav Smil). As said before, "better median KNUs" does not mean that they are no longer "unknowns": when you use CCEM you play with other KNU values, according to your own beliefs. An updated version of the "G2WS simulator" is coming early February with CCEM v8 as its new engine. You will be able to play with fossil reserves, clean energy and energy transition speed, and see what happens (cf. introduction).


4. Warming Damages & Adaptation

The two previous scenarios correspond to the following CO2 emissions and associated warming:

  • The "accelerated transition" yields 481 ppm of CO2 in 2050, resulting in a +1.7C warming, and 583 ppm in 2100, resulting in a +2.8C warming.
  • The "transition as usual" scenario yields 485 ppm (+1.8C) in 2050 and 602 ppm (+2.9C) in 2100 (a back-of-the-envelope check on these figures follows this list).
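As a sanity check (this is not the CCEM climate module), the textbook logarithmic relation between CO2 concentration and warming lands in the same ballpark for the 2100 figures; the effective sensitivity used below is an assumption of this sketch.

    import math

    # Back-of-the-envelope check, not the CCEM climate module.
    # Radiative forcing: dF = 5.35 * ln(C / C0) in W/m2, with C0 = 280 ppm (pre-industrial).
    # Warming: dT = lambda_eff * dF, with lambda_eff = 0.7 K per W/m2 (assumed here).

    def warming(ppm, lambda_eff=0.7, c0=280.0):
        return lambda_eff * 5.35 * math.log(ppm / c0)

    for ppm in (481, 485, 583, 602):
        print(ppm, "ppm ->", round(warming(ppm), 2), "C")

    # The 2100 values (about 2.8C for 583 ppm and 2.9C for 602 ppm) land close to the
    # scenario outputs; the 2050 values come out higher than +1.7/+1.8C because this
    # equilibrium formula ignores the delay introduced by ocean thermal inertia.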

These figures are slightly worse than in previous (CCEM v6 and v7) simulations because of the increased reserves of fossil fuel. What is interesting, obviously, is to explore the set of possible KNU values to see a "cone of possible futures" emerge. As told in previous blog posts and research papers (cf. "CCEM: A System Dynamics Model for Global Warming Impact from Energy Transition to Ecological Redirection"), the cone is narrower than one might think (+2.4 to +3.1C), although the world would be quite different in these two extreme cases.

During the previously mentioned dinner, I shared the following slide, which explains that although we know a lot qualitatively about the anticipated negative consequences of global warming, it is much harder to put a quantitative value on them. This slide identifies three groups of damage analysis, grouped by time period:

  1. First, economists from the teams of Moody's, Nordhaus and others tried to derive the expected GDP impact from past data. Although their work was substantial and data-driven, it suffers from the problem that we exposed in the introduction: it is hard to extrapolate what could happen in 2080 based on weather catastrophes from the past 40 years. The famous quote that "+3C would generate a -3% GDP impact" has consequently received lots of criticism.
  2. The second group of estimates tries to apply some correction to the forecasted impacts (SwissRe for instance) or looks at more specific studies to extract data points and produce more robust extrapolations. I have already discussed, for instance, the GIVE model from Berkeley, which was used as a reference for many of the key papers of the early 2020s (such as the famous paper from N. Stern, J. Stiglitz and C. Taylor, "The economics of immense risk, urgent action and radical change: towards new approaches to the economics of climate change"). In this group of studies, the impact of +3C is somewhere between -7% and -12% of GDP.
  3. Since these values, which I use for CCEM simulations as the "damages" KNU, do not push for drastic political changes, a new set of studies has emerged. The most well-known and most-quoted damage model is NIGEM (from the NIESR in the UK). NIGEM predicts an impact in the 20%-30% range for +3C warming in 2100. However, to get to these really high numbers, the same model predicts significant damage levels (5-6%) at +1.7C, which is hard to reconcile with the current data and the short-term forecasts of re-insurance companies (a small parameterization sketch follows this list).
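To turn this spread into something that can feed a sensitivity analysis, one can parameterize each group with a simple quadratic damage curve, a common functional form in the literature; this is my own illustration, not necessarily the functional form used inside CCEM.

    # Illustrative quadratic damage curves D(T) = a * T^2 (share of GDP lost at +T C),
    # with the coefficient "a" fitted to the +3C figure quoted for each group of studies.
    # This is a sketch for sensitivity analysis, not the CCEM damage equation.

    damage_at_3C = {
        "group1_nordhaus_style": 0.03,   # roughly -3% of GDP at +3C
        "group2_give_swissre":   0.10,   # -7% to -12%, midpoint taken as -10%
        "group3_nigem":          0.25,   # -20% to -30%, midpoint taken as -25%
    }

    def damage(temp_c, a):
        return a * temp_c ** 2

    for name, d3 in damage_at_3C.items():
        a = d3 / 3.0 ** 2
        print(name, "at +1.7C:", round(damage(1.7, a), 3), "at +3C:", round(damage(3.0, a), 3))

    # A quadratic fit to the NIGEM +3C midpoint already implies roughly 8% of GDP lost
    # at +1.7C, which illustrates why such high end-of-century figures are hard to
    # reconcile with current re-insurance data, as noted above.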


My point here is to emphasize how much of a KNU the damage question is. With CCEM simulations, I use the whole range covered in this picture, including the NIGEM values, to test sensitivity. This level of uncertainty about damages is a key driver of the political debate, as explained in my previous blog post. It is very clear that we are headed in the wrong direction, but the cost of change is higher than what we were told, and the disasters waiting for us are hard to measure. This ambivalence may be found everywhere, even when listening to Matthieu Ricard in a podcast, where he quotes Nicholas Stern to state that "mitigating expenses would obviously save far more money than they cost in the future". Unfortunately, there is nothing obvious here, and this is a real problem. More money and science must be applied to better understand the consequences of climate change.

As explained in the previous blog post, a consequence of the difficulty to evaluate mitigation is that adaptation will be the prevalent answer for many actors. I have quoted “Survivre à la chaleur”, by M. Glachant and F. Lévêque, because it is a highly insightful and evidence-based book that underscores an essential message: global warming is already unfolding, and adaptation is now unavoidable. The authors highlight that adaptation is not a distant policy objective, but a process already driven by individuals, businesses, and farmers responding to mounting climate pressures. While mitigation demands coordinated international action, adaptation generates local costs and benefits shaped by national and regional institutions. The book also stresses profound implications for insurance systems: premiums will rise sharply to reflect escalating risk, some assets may become uninsurable, and many organizations will treat higher insurance costs as a new and permanent cost of doing business.

Adaptation can be understood as long-term self-insurance: investing steadily to protect assets and to reduce future climate-related damage. This logic is particularly evident in agriculture, where spending today helps prevent tomorrow's losses. The efficiency of adaptation as an insurance policy is another KNU in the CCEM model. I have used GPT5 to come up with an assessment of our (very vague and approximate) understanding of adaptation. The uncertainty should not be a surprise: it starts with the uncertainty about damages (from fires, heat waves, droughts, flooding ...) and is compounded by the uncertainty of the adaptation strategy (how much of the damage can be avoided). A synthesis of current estimates suggests that roughly $150 trillion in global assets are exposed to climate risks, potentially leading to $5-20 trillion in damages by 2100. Annual adaptation investments of $300-500 billion could prevent 50-60% of those losses, yielding returns of 4 to 10 times the money spent. Reframed over the full century, a +3°C scenario could generate $20-50 trillion in direct damages, with cumulative impacts of $250-500 trillion on global GDP, and up to $150-300 trillion of these damages could be avoided with about $40 trillion of adaptation spending, still implying a substantial RoI of 4 to 8 times. It is interesting to compare these figures with the recent McKinsey report "Advancing adaptation: Mapping costs from cooling to coastal defenses". This report is an interesting source of data about the types of adaptation that are required and the unit costs they entail. It is also more aggressive than the studies collected by GPT5, in the sense that it advocates a higher level of spending (from a few hundred billion dollars in 2050, which is what the CCEM "median KNU" advocates, to $1T to bring all countries to a high level of protection). One may notice that the McKinsey paper quotes the x6 RoI at the beginning with no justification, while this number seems to hold only for more specific and targeted adaptation projects. When applied globally, you need the NIGEM damage estimate to justify a high RoI at $1.2T of spending (even when we factor in inflation).
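The RoI figures above come from a simple division, which is worth writing down because it makes the dependence on the damage KNU explicit; the inputs are the synthesis values quoted above, used here purely for illustration.

    # Back-of-the-envelope RoI of adaptation, using the century-scale figures quoted above:
    # avoided damages / adaptation spending. The inputs are the synthesis values, not CCEM outputs.

    adaptation_spending = 40.0          # trillion dollars over the century
    avoided_damages = (150.0, 300.0)    # trillion dollars, low and high estimates

    roi_low = avoided_damages[0] / adaptation_spending    # about 3.75, rounded to "4x" above
    roi_high = avoided_damages[1] / adaptation_spending   # 7.5, rounded to "8x" above
    print(round(roi_low, 2), round(roi_high, 2))

    # With a lower damage estimate (group 1 or 2 of the previous section), the same
    # spending yields a much weaker RoI, which is why the damage KNU drives the debate.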

One of the goals of the CCEM v8 calibration towards a more robust model was to be able to run "two-century simulations", ending in 2200. This now works very well, and you can see the effect of the "end of fossil fuels" on the world GDP. Somehow, you can see in the next century what we would have seen this century if fossil reserves were at the level we expected in 2000. Global warming grows to +3.2C, and this growth is limited by the scarcity of fossil fuel, but GDP shows a slow global decline with a huge difference between blocks (globally, GDP in constant dollars in 2200 is the same as in 2030). I will wait for CCEM v9 to share more details about these simulations, because the level of unhappiness (a combination of warming and economic recession) means that the model needs some "improvements" on the "ecological redirection" side. During the dinner discussion, we talked about the possibility that "unhappiness and tensions will rise faster than warming". Another obvious observation is that you cannot model the 22nd century without taking proper care of demography. For the main part of the 21st century, demography is a parameter (there is uncertainty, but the impacts are moderate); for the 22nd century, it is a key known unknown. The next version of CCEM (v9) will introduce the following factors into its "system dynamics graph":

  • Aging of the population, active/inactive ratio, cost of social expenses
  • Impact of technology (AI) on employment
  • Inequality and redistribution policies



5. Conclusion


I will first conclude this long post with a short positive remark: CCEM simulations show that despite considerable inertia, there is room to maneuver the "World" ship in the right direction. This is the right moment to advertise Hannah Ritchie's new book, "Clearing the Air". I have not finished reading it, and I will certainly propose a short summary in an upcoming blog post, but I wanted to mention it already because its narrative fits superbly with the insights that one draws when running multiple scenario simulations with CCEM. I must disclose that I use OurWorldInData as my main data source, so there may be some influence there. Still, when torturing CCEM to produce "doom scenarios", one can observe some resilience, and a faithful calibration over the past four decades brings the same "positive expectation" that reading Hannah's books, and looking at the data that she mentions, creates.

A more somber side of the conclusion is the intuition that comes from modeling Europe's GDP to match the 1980-2025 period. This is not a deep analysis or an economic proof-by-simulation, but rather "food for thought" (that is, what the changes that I had to put into the model to reproduce the past four decades tell me about the next three):

  • What we see in Europe is a classical "tragedy of the commons": we started with an "economy flywheel" that was working well, and each stakeholder (governments and regulators) kept adding layer after layer of burdens, all with good intent, that slow the growth engine (in many ways). This is true for energy (and the obsession with green at the expense of efficiency), for regulations, for taxes, etc. Unfortunately, the flywheel has lost its inertia and is now running at a low competitive speed (compared to the US or China) that will take decades to fix. Now that the macro-economic consequences of natural resource shortages are visible (especially for energy), the consensus is emerging fast.
  • This brings us to the aforementioned "European competitiveness report" from Mario Draghi. The CCEM growth model could not agree more (when you look at the scenarios) with the urgent necessity to upgrade European competitiveness: lower energy costs, lower production taxes, recreating incentives for profit (including less redirection and a tech-friendly attitude), social cost reductions (including the retirement burden), etc.


Obviously, these general ideas about Europe apply even more strongly to France, but I keep CCEM modeling at the continental level to avoid getting too depressed. Also, to emphasize the ideas expressed in the introduction, the goal of CCEM (or of any system dynamics model) is not to tell you what you should think, but to help you materialize the consequences of what you already think. For those readers with system dynamics experience, this blog post is a very positive testimony about what you can gain by continuously improving the quality of the calibration process.

 

 

 

 


 