This post concludes the first phase of my computational experiments with S3G (Systemic Simulation of Smart Grids), which I ran during the summers of 2011 and 2012. I presented the results at the 2013 ROADEF conference a few months ago, and I have made the extended set of slides available on my box account (left of the blog page – My Shared Documents).
1. S3G Experiments
A simple description of S3G is available in a previous post. It should help with understanding what is presented in the slides, since many of those slides were included in that post. The objective of S3G is to simulate the production and consumption of electricity over a long period of time (15 years) with a global “ecosystem” perspective.
I will first add some explanations on three topics: the set of models, the satisfaction model and the GTES search for equilibria. It is important to understand the limits of the current experiment before presenting the preliminary findings, since they need to be taken “with a grain of salt”.
As mentioned earlier, S3G uses very simple models (i.e. simple equations and few parameters) for the components of the “energy production & consumption” complex system. This is a deliberate choice, because I lack the expertise to produce more complex sub-models, and, mostly, because I want to focus on the overall system complexity (that is, what happens when all these simple subsystems are put together). This is clearly explained in my SystemX IRT introduction keynote. Still, it’s worth taking a look at each of these sub-models:
- The energy demand generation is quite simple. I start with daily and yearly patterns, obtained by cutting & pasting historic curves found on the web, and I add random noise, which I can control (time- or geography-dependent). I don’t think that this is a limitation for this experiment.
- I have a crude vision of “NegaWatts”, which represent energy consumption that may be saved through energy-saving investments. NegaWatts are virtuous: there is no reduction of economic output, but they require money. Here my model is really too simple, but somehow it falls outside the scope of what I was trying to accomplish. I use a simple hyperbolic function to represent the fact that, as electricity prices grow, people are likely to invest to reduce their consumption. Since it is very difficult to foresee negawatt development over the next 20 years, it is better to use a single parameter (the slope of the hyperbola) and make it vary to cover all kinds of scenarios.
- I have an equally crude model for demand/response, which, contrary to NegaWatts, is instantaneous but affects economic output. In my model (a simple S-curve), demand is reduced when the peak price becomes too high. We’ll see later on that this is indeed too simple and that it should be further developed in a future step.
- The market share model – used to determine the market share of the grid operator against the incumbent – is a simple/classic S-curve. My previous experience with similar economic simulations tells me that it is enough to produce realistic behavior (this is not where the systemic complexity lies).
- On the other hand, the dynamic pricing model – how the incumbent modulates its wholesale/retail price – is the heart of the relationship between the local and the national operators, and my current version is too simple. I assume that the price is a function of the output (demand), so that peak consumption yields a higher price. I have chosen a very simple function for my dynamic price equation: a piecewise-linear function, with a constant price up to a fixed (constant) production level, and then a linear surcharge when the production is higher than this constant. Obviously, one would like to test and analyze more complex dynamic pricing schemes, since dynamic prices and demand/response behaviors are a key engine for smart grids. The reason why I used a simple model is that pricing is precisely the complexity-generator of the model, and starting with an arbitrarily complex model would make the results very difficult to analyze later. This pricing structure is under the control of regulation, and I am waiting to better understand what our political institutions have in mind before encoding a richer model (see below). A small sketch of the simple curves used by these sub-models is given after this list.
- Last, the smart grid electricity production model is reasonably detailed for such an experiment. The decision about which source of electricity to use is actually straightforward due to the mix of production constraints (one must use the electricity that is produced) and economic goals (when sourcing, get the cheapest source). The only tricky part is the management of storage. I use simple rules, with a number of parameters that are tuned within the machine learning loop of the GTES simulation. Hence I let the simulation engine discover how to best use local storage. For instance, the local operator can use storage both as a “buffer” for its own production and as a “reserve” to play the market (buy when cheap and sell when expensive). Considering the small amount of storage that is actually used (because of storage prices), this part of the model is quite satisfactory.
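To make these sub-models more tangible, here is a minimal sketch (in Python) of the kind of simple curves described above. The function names, parameter values and exact formulas are my own illustrative assumptions, not the actual S3G equations:

```python
import math

# Piecewise-linear dynamic price: constant up to a production threshold,
# then a linear surcharge above it (as described for the wholesale price).
def dynamic_price(demand_mw, base_price=50.0, threshold_mw=60000.0, surcharge_slope=0.002):
    if demand_mw <= threshold_mw:
        return base_price
    return base_price + surcharge_slope * (demand_mw - threshold_mw)

# Hyperbolic "NegaWatt" curve: the share of consumption saved through
# energy-saving investment grows with the electricity price; the single
# 'slope' parameter is the one that is varied across scenarios.
def negawatt_share(price, slope=200.0):
    return price / (price + slope)

# Classic S-curve (logistic), used both for demand/response (reduction when
# the peak price gets too high) and for the market-share shift.
def s_curve(x, midpoint, steepness):
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))
```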
A GTES run is the simulation of an optimization loop that tries both to maximize each player’s satisfaction and to find a Nash equilibrium. Hence, defining each player’s satisfaction is a critical part of S3G. Let us first recall that there are four players (actually, sets of players) in this “game”. Each player has three goals, with a target value associated to each goal. We define a “strategy” as a triplet of (goal, target) pairs. The satisfaction with respect to each goal is then expressed with a pseudo-linear function: 100% if the target value is reached, and a linear fraction otherwise. The overall satisfaction for a strategy is the average of the satisfactions with respect to the three goals of the strategy. In the GTES method, we separate the parameters that represent these goals (grouped as a strategy) from the other parameters that the player may change to adjust its results, which we call the “tactical play”. The principle of the GTES game is that each player tries to adjust its “tactical parameters” to maximize its “satisfaction” (w.r.t. its strategy); a small sketch of this computation is given after the list of players below. Here is a short description of the four players in the S3G game:
- The “regulator” (political power), whose goal is to reduce CO2 emissions while preserving economic output and keeping a balanced budget (between taxes and incentives). Its three “goals” are, therefore, the total output (consumed electricity + negaWatts), the amount of CO2 and the state budget (taxes - subsidies). Its tactical play includes setting up a CO2 tax, regulating the wholesale price for the suppliers and creating a discount incentive for renewable energies.
- The existing energy companies, here called “suppliers”, whose goals are to maintain their market-share against newcomers, maintain revenue and reduce their exposure to consumption peaks. Their tactical play is mostly through (dynamic) pricing, but they also control investment into new production facilities on a yearly basis.
- The new local energy operators, who see “smart grids” as a differentiating technology to compete against incumbents. Their goals are to grow turnover, EBITDA and market-share. Their real-time tactical play is dynamic pricing, and they may invest into renewable and fossil energy production units, as well as storage units.
- The consumers, grouped into cities, whose goal is to procure electricity at the lowest average price while avoiding peak prices and preserving their comfort. The cities’ tactical play is mostly to switch their energy supplier (on a yearly basis) and to invest in “negaWatts”, which are energy-saving investments (more energy-efficient homes, etc.).
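As a concrete illustration of the satisfaction computation described above, here is a minimal sketch; the example player and its numbers are made up, and the handling of “lower is better” goals (such as CO2) is glossed over:

```python
def goal_satisfaction(value, target):
    # Pseudo-linear satisfaction for one goal: 100% once the target is reached,
    # a linear fraction of the target otherwise.
    if target == 0:
        return 1.0
    return min(1.0, value / target)

def strategy_satisfaction(values, targets):
    # Overall satisfaction = average over the three (goal, target) pairs of the strategy.
    return sum(goal_satisfaction(v, t) for v, t in zip(values, targets)) / len(targets)

# Example: a local operator whose goals are (turnover, EBITDA, market share),
# with made-up values and targets.
print(strategy_satisfaction([80.0, 8.0, 0.12], [100.0, 10.0, 0.10]))  # (0.8 + 0.8 + 1.0) / 3 ≈ 0.87
```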
GTES stands for Game-Theoretical Evolutionary Simulation.
I have talked about it in various posts,
and a summary may be found here.
I gave a keynote talk about GTES at CSDM 2012; the slides are available here.
GTES is a framework designed to study a model through
simulation, in order to extract a few properties from this model (learning
through examples), either explicitly or implicitly. GTES is based upon the combination of three
techniques:
- Sampling: since some parameters that occur in the economic equations are unknown, we draw them randomly from a confidence interval, using a Monte-Carlo approach. Monte-Carlo simulation has become quite popular (especially in the finance world) over the last decades, as computers have become more powerful. The need for Monte-Carlo is a signature of complexity and non-linearity: simulation becomes necessary when one cannot reason with averages. The beauty of linear equations is precisely that one may work with average values. In a complex non-linear system, deviations are amplified and there is no other way to predict their effect than to look at them, case by case (hence the sampling approach).
- Search for Nash Equilibrium in a repeated game: we set the parameters that define the players’ objective functions and look for an equilibrium using an iterative fixed-point approach (in the tradition of the Cournot adjustment). The good news with S3G is that it is a “simple” complex system, hence finding a Nash equilibrium is easy. However, it is easy precisely because of the simple pricing model (cf. previous discussion).
- Local Search as a machine learning technique: once the parameters that define the objective function are set, the other parameters that define the behavior of each player may be computed to find each player’s “best response” to the current situation. We use a simple local search (“local moves” = dichotomic search for the best value of each tactical parameter), coupled with “2-opt”: the random exploration of moving two parameters at the same time, using “hill climbing” as a meta-heuristic. From an OR point of view, these are rudimentary techniques, but they seem to do the job. The complexity of the optimization engine that one must embed into GTES depends on the complexity of the model: if the dynamic pricing model were made more complex, a stronger local search metaheuristic would be necessary. (An illustrative skeleton of the full GTES loop is sketched after this list.)
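Here is an illustrative skeleton, in Python, of how these three techniques fit together in a GTES run. All the names (players, simulate, sample_parameters) and the crude hill-climbing budget are placeholders of mine, not the actual S3G engine:

```python
import random

def gtes_run(players, sample_parameters, simulate, n_samples=100, max_rounds=50):
    # Outer Monte-Carlo loop: draw the unknown parameters in their confidence intervals.
    results = []
    for _ in range(n_samples):
        world = sample_parameters()
        tactics = {p.name: p.initial_tactics() for p in players}
        # Inner repeated game (Cournot adjustment): each player plays its best
        # response in turn until nobody moves, i.e. a Nash equilibrium is reached.
        for _ in range(max_rounds):
            changed = False
            for p in players:
                best = best_response(p, tactics, world, simulate)
                if best != tactics[p.name]:
                    tactics[p.name] = best
                    changed = True
            if not changed:
                break
        results.append((world, tactics, simulate(world, tactics)))
    return results

def best_response(player, tactics, world, simulate, budget=20):
    # Crude hill-climbing local search on the player's tactical parameters.
    current = dict(tactics[player.name])
    best_score = player.satisfaction(simulate(world, {**tactics, player.name: current}))
    for _ in range(budget):
        candidate = dict(current)
        key = random.choice(list(candidate))           # move one tactical parameter
        candidate[key] *= random.choice([0.5, 0.9, 1.1, 2.0])
        score = player.satisfaction(simulate(world, {**tactics, player.name: candidate}))
        if score > best_score:
            current, best_score = candidate, score     # keep improving moves only
    return current
```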
Explaining GTES will take many years … I was invited to ROADEF’s yearly event last month to present some of the successes that I have had with this approach over the past 10 years. I have a book, “Enterprises and Complexity: Simulation, Games and Learning”, in my “pipe”, but I expect that at least five more years of work will be needed to get it to a decent state (in terms of ease of understanding).
2. Most interesting findings with S3G experiments
An “S3G session” is made of interactive runs of “experiments”, which are GTES computational executions. More precisely, an experiment is defined through two things:
- The randomization boundaries, for those parameters that will be sampled.
- Some specific values for other parameters, since the goal of a “serious game” is to play “what-if scenarios” by explicitly changing these parameters. For instance, we may play with the investment cost of storage, to see if storage is or will be critical to smart grids (an illustrative experiment definition is sketched after this list).
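As an illustration, an experiment definition could look like the following; the parameter names, ranges and values are purely hypothetical and only meant to show the two ingredients (sampled intervals versus fixed “what-if” values):

```python
# Hypothetical experiment definition for a storage-cost scenario.
experiment = {
    # Randomization boundaries: parameters drawn by Monte-Carlo sampling
    # within a confidence interval.
    "sampled": {
        "demand_noise": (0.0, 0.15),          # relative variability of consumption
        "fossil_price_trend": (-0.02, 0.06),  # yearly drift of gas/coal prices
    },
    # Fixed values: the explicit "what-if" levers of the scenario.
    "fixed": {
        "storage_cost_per_mwh": 100.0,        # e.g. divided by 5 in a "cheap storage" run
        "co2_tax": 30.0,
    },
}
```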
Multiple scenarios have been played to evaluate the sensitivity to “environmental parameters” such as the variability of energy consumption (globally or locally), the fossil energy price (gas and coal), the possible reduction of the nuclear assets, the impact of carbon taxes or the impact of wholesale price regulation. Here is a short summary of the main findings that were presented at ROADEF:
- Smart Grids and variability. One theoretical advantage of smart grid operators is that they could react better to variations. Simulation does show some form of better reaction from the local operator than from the national operator, both to global fluctuations (electricity demand that deviates from the historical forecast) and to local variations (for instance, local changes in climatic conditions). However, the difference is very small and could be dismissed as statistically insignificant. This result depends on the storage price (see later) and on the wholesale price structure. With the current values, one of my key arguments in favor of smart grids (systems of distributed systems are expected to be more flexible and reactive) does not seem to hold.
- Carbon tax and Nuclear strategy. I played with the carbon tax to see if raising it would have an effect, and it does, but a negative one, since it favors nuclear energy and green energy is still too expensive. On the other hand, the decision to reduce the share of nuclear energy in the national supplier’s mix (either a long-term withdrawal or the long-term cap announced by the French government) creates favorable conditions for smart grid operators, quite logically. However, simulation shows that the results are weak (a small advantage) and unstable (they depend heavily on the overall systemic equation of wholesale prices coupled with “environment” variables such as energy prices).
- Storage and Photo-voltaic costs. As explained earlier, I used the Web as my main information source and got unit prices for storage (per MWh) and photo-voltaics that vary considerably according to the sources. I designed a number of scenarios to see what would happen if prices fall, as a number of “green experts” expect. The availability of cheap storage has an important impact, but one needs to see a price reduction by a factor of 5 to 10 (depending on the starting point) for this impact to materialize. The simple rule seems to be that storage TCO (total cost of ownership) should get as low as 50% of the wholesale price to shift the system’s behavior (quite logical if you think about it). A similar remark may be made about photo-voltaic energy, whose price is still far too high to change the smart grid operator’s economic model.
- Wholesale & retail price structure. This is the heart of the smart grid ecosystem: the rules/regulations that govern wholesale pricing – which control the “coopetition” between supplier & operator – and the dynamic pricing for retail, which controls the benefits derived both from demand/response and from negawatts. In the game theory tradition, we have built a strategy matrix that shows the result of conflicting strategies between the supplier and the operators, ranging from bold (focus on market-share) to aggressive (focus on revenue) through “soft” (more conservative). The sensitivity to the price regulation structure is such that it does not make sense to draw too much out of my simulations, except the fact that this is the critical part.
- Sensitivity to oil price. I have played with a number of scenarios regarding fossil energy price trends over the next 15 years. The sensitivity is much lower than expected when comparing suppliers against operators. There is a clear impact on consumers, but the benefit in favor of green energy and smart grid operators is offset by the advantage in favor of nuclear energy. One may add that, with shale gas, non-conventional oil and coal, this type of scenario is not likely in the next 15 years. What we see in the US is precisely the opposite.
3. Limits of current approach and next steps
Let me first summarize three obvious limits to the S3G
approach:
- As explained, both the wholesale and retail dynamic pricing models are too simple. Not only is the shape of the curve simplistic, but the fact that the price depends only on total demand is unrealistic (taking production costs into account is a must).
- One of the expected benefits of smart grids is improved resilience, both to catastrophic events and to significant internal failures. I did not try to evaluate resilience, because I did not have enough data to generate meaningful scenarios. If you look at what is happening in Japan, local storage is deployed to increase resilience in the event of a natural catastrophe, with good reason (together with HEMS: home energy management systems).
- My demand/response model is also too simple, from two separate perspectives. First, I only look at “shaving”, that is, electricity that is not consumed because the usage is forsaken for price/availability reasons. Another interesting alternative is to look at demand displacement, where the consumption is “shifted” instead of “shaved”; many usages, mostly related to heating, have enough inertia to be shifted by a few minutes (a small numeric illustration of the difference is given below, after the Brottes law excerpt). The other simplifying dimension is that I only look at the instantaneous benefit brought by demand/response, that is, the non-consumption of electricity at a time when prices are high. However, market prices do not rise that much, nor for long enough, to make this “shaving” worth a lot of effort. On the other hand, demand/response may help to avoid investing into excessive marginal capacity, which has a higher payoff.
This last argument is pointed out in the « Loi Brottes ». This article explains clearly the difference between value creation through “capacity adjustment” and through “production adjustment”:
- “Under current law, no mechanism is planned to remunerate demand response for its capacity value between the end of 2013 and the winter of 2015-2016, other than through the adjustment mechanism, which limits the development of demand-response capacities,” he states in the explanatory memorandum. The purpose of this amendment is therefore to ensure, as soon as the law enters into force, “the development of demand response through a call-for-tenders scheme, pending the implementation of the permanent capacity mechanism that will allow the actors concerned to develop production and demand-response capacities.”
The reason why I focus on « production adjustment » is that it is much easier to simulate. Capacity adjustment is a three-party value proposition (the user, the demand-response operator, and the producer whose capacity may be reduced). It requires regulation (hence the Brottes law) to turn the avoided capacity into operational benefits for the operator, who will eventually share them with the user.
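To illustrate the shaving/shifting distinction mentioned above, here is a tiny made-up example (the load numbers are arbitrary):

```python
# Hourly load of a home over four peak hours (kWh), made-up numbers.
load = [2.0, 3.0, 3.0, 2.0]

# Shaving: the peak-hour consumption is simply not consumed (usage is forsaken).
shaved = [2.0, 2.0, 2.0, 2.0]       # 2 kWh disappear; economic output may suffer

# Shifting: the same 2 kWh are displaced to a later, cheaper hour (e.g. heating inertia).
shifted = [2.0, 2.0, 2.0, 4.0]      # total consumption is preserved

print(sum(load), sum(shaved), sum(shifted))   # 10.0, 8.0, 10.0
```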
I will leave the S3G code alone for a while. When I
resume this work (2014), I plan to take the following next steps:
- Improve the satisfaction formula, using a product form instead of a sum. This is a classic technique when defining KPIs for performance measurement. A product (i.e., multiplying the various sub-terms of section 1 instead of summing them) yields a more truthful representation of strategic satisfaction: it is much better to reach all three goals at 60% than to get 100% on two and totally miss the third (see the small comparison after this list).
- Introduce parallelism (with a MapReduce architecture) to reach more stable results with more samples. Monte-Carlo simulation is designed for easy parallelization.
- Enrich the dynamic pricing model (while sticking to piecewise-linear formulas) and re-evaluate the “model constants” (energy production and storage prices, which constantly evolve).
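As a quick check of the first item, here is a tiny comparison (made-up scores) between the current averaged satisfaction and a product form:

```python
import math

def average_satisfaction(scores):
    return sum(scores) / len(scores)

def product_satisfaction(scores):
    # Plain product of the sub-terms: a goal that is totally missed drags the
    # whole satisfaction down to zero.
    return math.prod(scores)

balanced = [0.6, 0.6, 0.6]   # all three goals reached at 60%
lopsided = [1.0, 1.0, 0.0]   # two goals at 100%, one totally missed

print(average_satisfaction(balanced), average_satisfaction(lopsided))  # 0.60 vs 0.67: the average prefers the lopsided strategy
print(product_satisfaction(balanced), product_satisfaction(lopsided))  # 0.216 vs 0.0: the product prefers the balanced one
```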