1. Introduction
This short blog post revisits the concept of sustainable IS development in the digital age, that is, in the age of constant change. Sustainable development for information systems is about making choices so that building the capability to deliver the required services today does not hinder the capacity to deliver, a few years later, the services that will be needed then. It is a matter of IS governance and architecture. Each year, money is allocated to building new systems, updating others, replacing some and removing others. Sustainable development is about making sure that the ratios between these categories are sustainable in the long run. It is a business necessity, not an IS decision; sustainable IS development is a classical short-term versus long-term arbitrage.
The initial vision of sustainable IS development comes from a financial view of IS. In a world of constant change, the weight of complexity becomes impossible to miss. This is why the concept of technical debt has made such a strong comeback. The "technical debt" measures the time and effort necessary to bring a system back to a "standard state", ready for change or upgrade. In a world with little change, technical debt may be left "unclaimed", but the intensification of change makes the debt more visible. The "debt" metaphor carries the idea of interest that accumulates over time: the cost and effort of paying off the debt increase as the initial skills that created the system are forgotten, as the aging underlying technologies become more expensive to maintain, and as the growing complexity makes additional integration longer, more difficult, and more expensive.
From a practical point of view, complexity is the marginal cost of integration. Complexity is what happens inside the information system, but its most direct consequence is the impact on cost when one needs to change or to extend the system. If you are a startup, or if you begin a new isolated system, there is no such complexity charge. On the other hand, the cost of change for a legacy system is multiplied by the weight of integration complexity. Complexity may be measured as the ratio of the total effort of building and integrating a new function into an information system divided by the cost of developing this new function itself.
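To make this ratio concrete, here is a minimal sketch; the effort figures are made up for illustration only:

```python
# Toy illustration (hypothetical numbers): complexity seen as the marginal
# cost of integration, measured as the ratio between the full cost of
# delivering a feature and the cost of building the feature alone.

def integration_cost_ratio(dev_cost: float, integration_cost: float) -> float:
    """Total effort (build + integrate) divided by the standalone build effort."""
    return (dev_cost + integration_cost) / dev_cost

# A young, isolated system: integration is almost free.
startup = integration_cost_ratio(dev_cost=100, integration_cost=5)

# A legacy system: integration dominates the cost of the feature itself.
legacy = integration_cost_ratio(dev_cost=100, integration_cost=700)

print(f"startup ratio: {startup:.2f}")  # close to 1
print(f"legacy  ratio: {legacy:.2f}")   # 8.00 here: roughly 'one to ten'
```

A ratio close to 1 means the system carries almost no complexity charge; a ratio far above 1 is the "overweight" that legacy integration imposes on every new function.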
A dynamic view of sustainable IS development, therefore, needs to take complexity into account. Sustainable development needs to keep the potential for innovation intact for the next generations (in the software sense, from months to years). Complexity and change do not invalidate the previous financial analysis of sustainable development based on refresh rate and obsolescence cost; they make it more pressing, because the financial impact of technical debt grows as the change rate grows. Put differently, a static sustainable development model sees change as a necessity to reduce costs, whereas a dynamic model sees the reduction of complexity as a necessity to adapt to external change.
The post is organized as follows. The next section recalls some of the key ideas of "Sustainable Information System Development". The initial SD framework is drawn from a model of IT costs that looks at the cumulative effects of system aging. The purpose of SD is to derive governance rules to keep the budget allocation stable in the future and balanced between the maintenance of the current system and the need to adapt to new business requirements. Section 3 provides a short introduction to the concept of technical debt. "Technical debt" measures the effort to return to a "ready for change" state and is often measured in days or months. Time is a very practical unit, but it does not make it easier to master technical debt, especially when complexity is involved. Section 4 adds the concept of complexity to the sustainable IS model. Cleaning technical debt is defined both as returning to a "standard" – often defined by rules and best practices – and as reducing integration complexity. There is no such thing as a standard here; it is the essence of IS architecture to define a modular target that minimizes integration complexity.
2. Sustainable IS Development
Sustainable IS Development is a metaphor that borrows from the now universal concept of "sustainable development":
- The following definition was proposed by the Brundtland commission: "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs."
- This is a great pattern for defining sustainable development for information systems: "Sustainable IS development is how to build an information system that delivers the services that are required by the business today without compromising the ability to meet, in a few years, the needs of future business managers."
This definition may be interpreted at the budgeting level, which makes it a governance framework. This is the basis of what I did 10 years ago – cf. this book or the course about Information Systems that I gave at Polytechnique. It may also be interpreted at the architecture level, as shown in the book "Sustainable IT Architecture: The Progressive Way of Overhauling Information Systems with SOA" by Pierre Bonnet, Jean-Michel Detavernier, Dominique Vauquier, Jérôme Boyer and Erik Steinholtz. In a previous post, almost 10 years ago, I proposed a simple IS cost model to derive sustainable IS development as an equation (ISSD). I will not follow that route today, but here is a short summary of the three most important consequences of this model.
The ISSD model is based on a simple view of IS, seen as a collection of software assets with their lifecycles and operation costs (in the tradition of Keen). Governance is defined as an arbitrage between:
- Adding new components to the system, which is the easiest way to add new functions, modulo the complexity of integration.
- Maintaining and upgrading existing components.
- Replacing components with newer versions.
- Removing components – the term "applicative euthanasia" may be used to emphasize that old legacy applications are never "dead".
The ISSD model is a set of equations derived from the cost model expressing that the allocation ratio between these four categories remains stable. The main goal (hence the name "sustainable") is to avoid that consuming too much money today on "additions" will make it impossible to evolve the information system tomorrow. This simple sustainability model (ISSD) shows that the ability to innovate (grow the functional scope) depends on the ability to lower the run cost at "iso-scope", which requires constant refreshing.
At the core of the SD model is the fact that old systems become expensive to run. This has been proven in many ways:
- Maintenance & licence fees grow as systems get older, because of the cumulative effect of technical debt on the software provider side, and because at some point there are fewer customers to share the maintenance cost.
- Older systems start to get more expensive after a while, when their reliability declines. There is a famous "bathtub" curve that shows the cost of operations as a function of age: while maturation helps at first (reducing bugs), there is an opposite aging factor at work.
- The relative cost of operations grows (compared to newer systems) because legacy technology and architecture do not benefit from the constant flow of improvement, especially as far as automation and monitoring are concerned. Think of this as the cost of the missed opportunity to leverage good trends.
The good practice from the ISSD model is to keep the refresh rate (which is the inverse of the average application age) high enough to benefit from HW/SW productivity gains and to accommodate the necessary scope increase. Remember that "software is eating the world": the sustainable vision of IS is not one of a stable functional scope. Sustainability must be designed for growth.
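The interplay between refresh rate and run cost can be illustrated with a toy simulation. All numbers below (budget, base run cost, aging penalty) are hypothetical, chosen only to show the direction of the effect; this is not the actual ISSD equation set:

```python
# Toy simulation (all figures hypothetical) of the ISSD intuition:
# with a fixed total budget, run cost grows with average application age,
# so a low refresh rate slowly squeezes the budget left for new functions.

def build_budget_over_time(total_budget: float, refresh_rate: float,
                           base_run_cost: float, aging_penalty: float,
                           years: int) -> list[float]:
    """Return the budget left for 'build' each year.

    refresh_rate  : fraction of the portfolio renewed each year
                    (i.e. the inverse of the average application age).
    aging_penalty : yearly run-cost growth applied to the non-refreshed part.
    """
    run_cost = base_run_cost
    build = []
    for _ in range(years):
        # Refreshed components reset to the base run cost; the rest age.
        run_cost = (refresh_rate * base_run_cost
                    + (1 - refresh_rate) * run_cost * (1 + aging_penalty))
        build.append(total_budget - run_cost)
    return build

slow = build_budget_over_time(100, refresh_rate=0.05, base_run_cost=60,
                              aging_penalty=0.05, years=10)
fast = build_budget_over_time(100, refresh_rate=0.20, base_run_cost=60,
                              aging_penalty=0.05, years=10)
print(f"build budget after 10 years: slow refresh {slow[-1]:.1f}, "
      f"fast refresh {fast[-1]:.1f}")
```

With these made-up parameters, the slowly-refreshed portfolio ends up with less than half the "build" budget of the faster one, which is exactly the sustainability squeeze the model warns about.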
The average age of your apps has a direct impact on the build/run ratio. This is a less immediate consequence of the ISSD model, but it says that you cannot hope to change the B/R ratio without changing the age of your apps, hence without the proper governance (whatever technology vendors tell you about their solution to improve this ratio). This is another way of stating what was said in the introduction: sustainable IS development is a business goal and a matter of business governance.
3. Technical Debt
The concept of technical debt is attributed to Ward Cunningham, even though earlier references to similar ideas are easy to point out. A debt is something that you carry around, with the option to pay it off, which requires money (effort), or to pay interest until you can finally pay it off. Similarly, the mediocre software architecture that results either from too many iterative cycles or from shortcuts often labelled as "quick and dirty" is a weight that you can either decide to carry (pay the interest: accept to spend more effort and money to keep the system alive) or pay off (spend the time and effort to "refactor" the code to a better state). For a great introduction to the concept of technical debt, I suggest reading "Introduction to the Technical Debt Concept" by Jean-Louis Letouzey and Declan Whelan.
The key insight about technical debt is that it is expressed against the need for change. I borrow here a quote from Ward Cunningham: "We can say that the code is of high quality when productivity remains high in the presence of change in team and goals." The debt is measured against an ideal standard of software development: "When taking short cuts and delivering code that is not quite right for the programming task of the moment, a development team incurs Technical Debt. This debt decreases productivity. This loss of productivity is the interest of the Technical Debt". The most common way is to measure TD with time: the time it would take to bring the piece of code/software to the "standards of the day" for adding or integrating new features.
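The time-based measure and its "interest" can be sketched with a toy calculation; the remediation and productivity figures below are invented for illustration:

```python
# Toy model (all figures hypothetical) of the debt metaphor measured in time:
# carrying the debt costs 'interest' (lost productivity) every sprint, while
# paying it off costs the 'principal' (the remediation effort) once.

def carrying_cost(interest_days_per_sprint: float, sprints: int) -> float:
    """Cumulative effort lost by keeping the debt over a number of sprints."""
    return interest_days_per_sprint * sprints

def payoff_break_even(principal_days: float,
                      interest_days_per_sprint: float) -> float:
    """Number of sprints after which paying off would have been cheaper."""
    return principal_days / interest_days_per_sprint

# 20 days of remediation versus 2 days of lost productivity per sprint:
print(payoff_break_even(20, 2))   # 10.0 sprints to break even
print(carrying_cost(2, 24))       # 48.0 days lost over 24 sprints
```

The point of the sketch is that time is an actionable unit: once debt and interest are both expressed in days, the "carry or pay off" arbitrage becomes a simple break-even question.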
The concept of "interest" associated with Technical Debt is anything but theoretical. There is a true cost to keeping technical debt in your code. Although TD is subjective by nature (it measures an existing system versus a theoretical state), most of the divergences that qualify as "technical debt" have a well-documented cost associated with them. You may get a first view of this by reading Samuel Mullen's paper "The High Cost of Technical Debt". Among many factors, Samuel Mullen refers to maintenance costs, support costs and labor costs. One would find similar results in any of the older cost models from the 80s such as COCOMO II.
Another interesting reference is "Estimating the size, cost, and types of Technical Debt" by Bill Curtis, Jay Sappidi and Alexandra Szynkarski. This CAST study focuses on five "health factors" (which define the "desired standard") with the following associated weights:
- Robustness (18%)
- Efficiency (5%)
- Security (7%)
- Transferability (40%)
- Changeability (30%)
Here, TD is the cost to return to standards along these five dimensions, and the weight is the average contribution of each dimension to this debt. Some other articles point out different costs that are linked to technical debt, such as the increased risk of failure and the higher rate of errors when the system evolves.
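As a small illustration of how these weights decompose a debt estimate, here is a sketch; only the weights come from the quoted study, the total debt figure is made up:

```python
# Illustrative only: splitting a total technical-debt estimate across the
# five CAST health factors using the weights quoted above. The 500-day
# total is a hypothetical figure, not a number from the study.

CAST_WEIGHTS = {
    "robustness": 0.18,
    "efficiency": 0.05,
    "security": 0.07,
    "transferability": 0.40,
    "changeability": 0.30,
}

def debt_breakdown(total_debt_days: float) -> dict[str, float]:
    """Split a total technical-debt estimate across the five health factors."""
    return {factor: total_debt_days * w for factor, w in CAST_WEIGHTS.items()}

breakdown = debt_breakdown(500)          # e.g. 500 person-days of debt
print(breakdown["transferability"])      # 200.0 days: the largest share
```

Transferability and changeability together account for 70% of the debt in this weighting, which matches the intuition that most of the pain shows up when the system must be handed over or changed.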
Complexity is another form of waste that accumulates as time passes and that contributes to the technical debt. This was expressed a long time ago by Meir Lehman (a quote from the CAST paper): "as a system evolves, its complexity increases unless work is done to maintain or reduce it". Complexity-driven technical debt is tricky because the "ideal state" that could be used as a standard is difficult to define. However, there is no doubt that iterative (one short step at a time) and reactive (each step as a reaction to the environment) development tends to produce unnecessary complexity over time. Agile and modern software development methods have replaced architecture "targets" with a set of "patterns" because the targets tend to move constantly, but this makes it more likely to accumulate technical debt while chasing a moving target. Agile development is by essence an iterative approach that creates complexity and requires constant care of the technical debt through refactoring.
4. The Inertia of Complexity
In the introduction, I proposed to look at complexity as the marginal cost of integration because it is a clear way to characterize the technical debt produced by complexity. Let us illustrate this through a fictional example. We have a typical company, and this is the time of the year when the "roadmap" and workload (for the next months) have been arbitrated and organized (irrespective of an agile or waterfall model, this situation occurs anyway). Here comes a new "high priority" project. As the IS manager, you would either like to make substitutions in the roadmap or let backlog priority work its magic, but your main stakeholders ask: "let's keep this simple, how much money do you need to simply do this 'on top' of everything else?". We all know that this is anything but simple: depending on the complexity debt, the cost may vary from one to ten, or it may simply be impossible. We are back to this "integration cost ratio" (or overweight) that may be close to 1 for new projects and young organizations while getting extremely high for legacy systems. Moreover, adding money does not solve all the issues, since the skills needed for the legacy integration may be (very) scarce, or the update roadmap of these legacy components may be dangerously close to saturation (the absence of modularity, which is a common signature of legacy systems, may make the "impact analysis" – how to integrate a new feature – much more difficult than the development itself). This paradox is explained in more detail in my second book.
A great tool to model IS complexity is Euclidean Scalar Complexity, because of its scale invariance. Scalar complexity works well to represent both the topology of the integration architecture and the positive de-coupling effects of API and service abstraction. Whereas a simple model for scalar complexity only looks at components and flows, a service-abstraction model adds the concept of API, or encapsulation: smaller nodes between components that represent what is exposed by one component to the others. The scalar complexity of an information system represents an "interaction potential" (a static, maximal measure), but it is straightforward to derive a dynamic formula if we make some assumptions about the typical refresh rate of each component.
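As a rough sketch of this idea – assuming, for illustration only, an "interaction potential" defined as the square root of the sum of size products over flows, which is my simplification rather than the exact Euclidean scalar complexity formula – one can see how an API node lowers the measure:

```python
# Hypothetical sketch of a graph-based 'interaction potential': square root
# of the sum, over all integration flows, of the products of the two
# connected components' sizes. This is a simplified stand-in, not the exact
# scalar complexity definition.

from math import sqrt

def interaction_potential(sizes: dict[str, float],
                          flows: list[tuple[str, str]]) -> float:
    """A toy complexity measure over a component graph."""
    return sqrt(sum(sizes[a] * sizes[b] for a, b in flows))

sizes = {"billing": 10, "crm": 10, "orders": 10, "api": 1}

# Three components of size 10, fully meshed with point-to-point flows ...
meshed = interaction_potential(
    sizes, [("billing", "crm"), ("crm", "orders"), ("billing", "orders")])

# ... versus the same components decoupled behind a small API node.
via_api = interaction_potential(
    sizes, [("billing", "api"), ("crm", "api"), ("orders", "api")])

print(meshed > via_api)  # True: the abstraction lowers the potential
```

Even in this crude form, the sketch captures the architectural point: routing flows through a small encapsulation node shrinks the interaction potential, which is exactly why API and service abstraction pay off as the system grows.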
This model of the expected cost of "refreshing" the information system is useful because, indeed, there is a constant flow of change. One of the most common excuses for keeping legacy systems alive is a myopic vision of their operation costs, which are often low compared to renewal costs. The better reason for getting rid of this technical debt is the integration complexity that the presence of these legacy components adds to the system. However, this requires exhibiting a simple-yet-convincing cost model that transforms this extra complexity into additional costs. Therefore, I will be back in another post with this idea of scalar complexity modelling of integration costs.
Meanwhile, the advice that can be given to properly manage this form of technical debt is to be aware of the complexity through careful inventory and mapping, then to strive for a modular architecture (there is a form of circularity here, since modularity is defined as a way to contain the adverse effects of change – cf. the lecture that I gave at Polytechnique on this topic). Defining a modular information system is too large a topic for one blog post, although defining a common and shared business data model, extracting the business process logic from the applications, crafting a service-oriented architecture through APIs, and developing autonomous micro-services are some of the techniques that come to mind (as one may find in my first book).
A much more recent suggested reading is the article "Managing Technical Debt" by Carl Tashian. This is a great article about how to reduce complexity-driven technical debt. Here are the key suggestions that he makes:
- Keep points of flexibility. This is at the heart of a good service-oriented architecture and a foundation for microservices. However, as Tashian rightly points out, microservices are not enough.
- Refactor for velocity. Take this two ways: refactor to make your next project easier to develop, but also to constantly improve performance. This is a great insight: when refactoring, you have the benefit of the performance monitoring of your existing system. It is easier to improve performance while refactoring than in crisis mode once a run-time problem has occurred.
- Keep dependencies to the minimum. Here we see once again the constant search for modularity.
- Prune the low-performers (usage). A simple but hard-to-follow piece of advice, intended to reduce the total weight.
- Build with test and code reviews. A great recommendation, that is outside the scope of this post, but obviously most relevant.
- Reinforce the IS & software engineering culture.
5. Conclusion
This post is the hybrid combination of two previously well-known ideas:
- The sustainable management of the IT budget should be concerned with application age, lifecycle and refresh rate. To paraphrase an old advertisement for batteries, "IT technology progress – such as Moore's law – is only useful when used".
- In the digital world, that is, the world of fast refresh rates, the inertia of the system should be kept minimal. This is why the preferred solution is always "no code" (write as little as you can), through a strict focus on value, through SaaS (letting others worry about constant change), through abstraction (write as few lines of code as possible), etc.
The resulting combination states that IT governance must address IS complexity and its impact on both costs and agility in a scenario of constant change. Constant refactoring, both at the local (component) level and at the global Enterprise Architecture level (the IS as a whole), should be a guiding factor in the resource allocation process. Sustainable IS development is a business decision, which requires the ability to assess the present and future cost of IS operations, lifecycle and integration.
Because the digital world is exposed to more variability (e.g., of end-customer usage) and a higher rate of change, best practices such as those reported in Octo's book "The Web Giants" are indeed geared towards minimizing inertia and maximizing velocity and agility. The exceptional focus of advanced software companies on keeping their code minimal, elegant and modular is a direct consequence of sustainable development requirements.
This post was kept non-technical, without equations or models. It is very tempting, though, to leverage the previous work on scalar complexity and sustainability models to formalize the concept of complexity debt. Scalar complexity is a simple way to assess the complexity of an architecture through its graphic representation of boxes (components) and links (interfaces). To assess the dynamic "dimension" of the technical debt associated with complexity, one needs a model for constant change. This way, the metaphor of weight (the inertia of something ponderous like a boat or a whale) may be replaced with a metaphor that captures the level of interaction between moving components.
Devising a proper metaphor is important since the "school of fish versus whale" metaphor is often used and liked by business managers. Complexity debt adds a twist about the "scalar complexity of the school of fish": it needs to be kept to a minimum to preserve the agility of the actual school of fish (for the metaphor to work). I will conclude with this beautiful observation from complex biological systems theory: the behavior of a school of fish or a flock of birds emerges from local behaviors; fish and birds only interact with their local neighbors (put differently, the scalar complexity is low). Thus the "school of fish" metaphor is actually a great one for designing sustainable information systems.