Sunday, January 29, 2023

An Attempt to Sort Out Digital Carbon Footprint Evaluations

  

1. Introduction

Ten years ago, together with Erol Gelenbe, we wrote a NATF report on the impact of ICT (Information & Communication Technology) on worldwide electricity consumption. You may also read the associated ACM article. This report was the result of a collective study on ICT electricity consumption, prompted by a growing concern that the CO2 impact of ICT was growing exponentially, especially from 2006 to 2010. The report showed that there was indeed growth, but no strong acceleration, and that many crazy forecasts were just … forecasts. I decided six months ago to refresh this analysis and to address the larger issue of the impact of digital on CO2 emissions. My reasons for returning to this question were twofold. First, as was the case 10 years ago, there is a rising cycle of concern, with lots of exaggerations and scary forecasts about what will happen in 2030. Second, we are now much more knowledgeable about the lifecycle analysis of servers and digital devices, so we can address the “scope 3” questions more thoroughly than we could in 2010.

The goal of this blogpost is just to share a few numbers which I have painfully collected and sorted out. They are meant to be used as “orders of magnitude”, since the level of uncertainty is still quite high, but they may come in handy to the reader when trying to assess the situation. To update this 10-year-old study, I will apply the following methodology. First, I will look at 2015 because we have plenty of data and many published studies, so I can make a synthesis with a reasonable level of confidence. I will look at the global numbers but also at how they were produced (resource units x carbon unit costs). Then I will retrofit to 2010 to see how it fits with previous studies of 2010, including the NATF document. Using the resource units (number of servers, laptops, smartphones, TVs …) and the unit costs model, I can extrapolate to 2020 and propose a fact-checking matrix to compare against what is found today on the Web. This is the main contribution of the post, reflected in the matrix that may be found in the conclusion. I will also offer my own prospective analysis for 2030, which is not that difficult once you have the structure of the ICT footprint, but which is subjective by nature.
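To make this methodology explicit, here is one way to write down the accounting model that is used implicitly throughout the post (my own notation, not a formula from the quoted papers):

Scope 2 footprint ≈ Σ_d N_d × E_d × I_grid        Scope 3 footprint ≈ Σ_d S_d × CUC_d

where, for each category d (servers, smartphones, laptops, TVs …), N_d is the installed base, E_d the average yearly electricity consumption per unit (kWh), I_grid the average CO2 intensity of electricity (g/kWh), S_d the yearly shipments and CUC_d the “carbon unit cost” of manufacturing one unit.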

The main findings, compared to the previous study, are a mix of confirmation and new information. On the server side, the progress made on individual consumption and PUE means that the total worldwide consumption of data centers has stayed globally constant over the past 20 years, despite a vigorous growth in the number of servers. However, a new class of server-side infrastructure, namely blockchain mining, has made a stupendous entrance with a consumption that is growing fast and will soon be similar to that of the rest of the data centers. On the device side, the continuous growth of 2000-2015 has stalled, so the prospect for the future is rather good, while network usage will continue to grow, exponentially fast in terms of traffic, but steadily and moderately in terms of CO2 impact. When we add all the CO2 impacts of our digital activity, we find today a share slightly below 3% of our total CO2 emissions, which represents approximately 2% of our total greenhouse gas emissions (58Gt today).

This post is organized as follows. Section 2 focuses on “scope 2”, that is the electricity consumption due to digital activities. I will adopt the distinction between ICT (Information & Communication Technology) and E&M (Entertainment & Media) proposed by Jens Malmodin and Dag Lunden in their paper “The Energy and Carbon Footprint of the Global ICT and E&M Sectors 2010–2015”. This paper serves as the backbone for this post, as the EINS report FP7-2888021 “Overview of ICT Consumption (D8.1)” was, 10 years ago, the backbone of the NATF report. My goal is to reconstruct and share a few useful “key figures” to understand the number of devices and the units of electricity consumption. Once I have analyzed/reconstructed the 2015 numbers from their paper, I will reconcile them with other sources regarding the 2010 values (which is easy). I will then project them to extrapolate the 2020 values, which is harder and must include newcomers such as bitcoin mining. Section 3 deals with LCA (Lifecycle Analysis) and the “scope 3” impact, with a specific focus on the CO2 footprint of manufacturing. I separate scope 2 and scope 3 because scope 3 analyses are both more difficult and more recent. The early studies of 10 years ago, including mine, were rather naïve. In this section, I start with the same source for 2015 and I propose some adjustments based on what we know today about manufacturing estimates. I will apply the same logic of reconciling with 2010 numbers and extrapolating to 2020 numbers. Section 4 applies this analysis, in a prospective manner, to 2030. On one hand, it is speculative and thus offered as food for thought. On the other hand, the literature is full of scary predictions about the exponential growth of ICT impact, so it is useful to understand what the drivers are and to form your own opinion about what realistic growth may be. As far as I am concerned, most of the forecasts I have seen in the past year are off by a factor of two.

Because the topic of ICT CO2 impact is very sensitive, and because I am not in a position of neutrality since I have been an ICT professional all my life, I need to point out that I am not an expert on this topic. I am leveraging several published research articles and applying the type of modelling and analysis that I have been doing for a long time (to be specific, what gives me credibility is not what I know, but the large number of errors I have made in the past 20 years performing similar analyses). I have been interested in the topic of CO2 impact for a really long time thanks to my father Paul CASEAU (since 1978 to be precise), former head of EDF R&D, and because I had access to qualified experts when I was part of the EDF Scientific Council 10 years ago. Before that, I had been in charge of sustainable development for a few years at Bouygues Telecom, which gave me access to the great network built by Fabrice Bonnifet. I have a further motive in collecting data about ICT CO2 impact, which will be the topic of a future post, but which may be found in my research agenda. As always, the content of this blog is purely personal and does not reflect in any way the opinions of my past and present employers.



2. Digital Energy Consumption Update

One of the difficulties when discussing “the impact of digital life on the planet” is to be clear about the scope of digital and what kind of digital we are talking about. In this post I follow the approach proposed by Jens Malmodin and Dag Lunden:

-          ICT is mostly made of servers (data centers), networks (fixed & mobile) and user devices (there is a long list, see Malmodin’s paper for details; the most important ones are smartphones, laptops, desktops and telco internet/router boxes) … to which I have added bitcoin mining (a new kind of data center that was not there in 2010 or 2015)

-          E&M (Entertainment and Media) is made of TVs, STBs (set-top boxes), home audio systems and many other small categories. Today I will simply focus on TV + others, since TV sets are a topic of interest on their own, are the biggest category of E&M, and it is critical to know whether TVs are included when you read a figure about “digital impact”.

I will start with what I have collected about ICT electricity consumption, which was the core of the NATF study that I quoted in the introduction.

 

Server consumption has remained relatively flat, with a worldwide value slightly above 200 TWh/year. The number of servers shipped worldwide has grown from 8.9M in 2010 to 11.09M in 2015 to 12.15M in 2020, according to Statista. Consequently, the installed base of servers has grown from approximately 40M in 2010 to 55M in 2020. The servers have also grown more powerful to accommodate a strong growth in the workload, but their energy efficiency has significantly increased over the past 20 years. There are many papers to read on this topic, but I strongly suggest “Beyond the Energy Techlash: The Real Climate Impacts of Information Technology” by Colin Cunliff. The paper spends some time debunking some of the myths and false claims associated with ICT consumption. It is very similar to the NATF paper, but being written in 2020, it is more relevant. The key message in both documents is that (1) PUE, (2) typical server energy intensity, (3) the average number of servers per workload (thanks to the cloud) and (4) the average storage drive energy use (kilowatt-hour/terabyte) have all made significant progress from 2010 to 2018, which compensates for the growth in servers and workloads. You can find another analysis in the article “Recalibrating global data center energy-use estimates” by Eric Masanet, Arman Shehabi, Nuoa Lei, Sarah Smith and Jonathan Koomey (sciencemag.org), which quotes a total consumption of 205 TWh in 2018. The values that I found for 2010 and 2015 vary in the 200-240 TWh range (for 2010, Malmodin reports 240 and ITU reports 205). It is important, when comparing with other published articles, such as “Assessing ICT global emissions footprint: Trends to 2040 & recommendations” by Lotfi Belkhir and Ahmed Elmeligi, to check if the data center consumption figures are collected or forecasted (easy to spot when the source predates the date of the reported value!). There has been a large stream of papers forecasting huge growth of data center electricity consumption, and this dramatic growth has not occurred.
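To make the order of magnitude tangible, here is a hedged back-of-the-envelope sketch in Go; the installed base is of the order quoted above (I take 50M), but the per-server power draw and the PUE are my own illustrative assumptions, not values taken from the quoted papers.

package main

import "fmt"

func main() {
	// Illustrative assumptions (mine, not from the quoted papers):
	servers := 50e6        // installed base of servers, order of magnitude for 2015-2020
	avgPowerW := 250.0     // average IT power draw per server, in watts (assumption)
	pue := 1.6             // average Power Usage Effectiveness of the facilities (assumption)
	hoursPerYear := 8760.0

	// Facility-level yearly consumption, converted from Wh to TWh (1 TWh = 1e12 Wh).
	totalTWh := servers * avgPowerW * pue * hoursPerYear / 1e12
	fmt.Printf("rough data-center consumption: %.0f TWh/year\n", totalTWh) // ~175 TWh/year
}

With these (debatable) assumptions, we land in the same 200-240 TWh ballpark as the collected figures, which is the whole point of the exercise.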

Network consumption is growing at a constant rate. While the consumption of networks was lower than that of data centers in 2010 (185 TWh for Malmodin), it grew to 242 TWh in 2015. In addition to the Malmodin paper, a great source on this topic is the ITU document (L.1470): “Greenhouse gas emissions trajectories for the information and communication technology sector compatible with the UNFCCC Paris Agreement”. This report gives similar numbers (230 TWh in 2015), to be compared with 220 TWh in the IEA report. It is harder to extrapolate the 2020 numbers, but it seems reasonable to expect a 20% growth, so I selected 280 TWh as my estimate in the table below.

On the other hand, the total consumption of user devices has probably peaked in 2015. The history of ICT device electricity consumption shows two stages: significant growth during the previous decade, up to 342 TWh in 2015, followed by a plateau that mirrors the clear stabilization in the number of devices. The number of smartphones sold each year went from 300M in 2010 to 1420M in 2015 and then 1433M in 2021 (2020 was a special year, see the curve on Statista). For laptops, the shipment figures are 201M in 2010, 163M in 2015 and 222M in 2020 (COVID rebound). For desktops, we have 157, 113 and 80M. Malmodin & Lunden’s paper works with the estimate of 3700M smartphones, 970M laptops and 370M desktop PCs in operation in 2015. If you make the division (see the sketch below), you will see that it assumes 34 kWh as the unit (yearly) electricity consumption of a laptop, which is less than what most “back of the envelope” studies assume (365d x 8h x 60W -> more than 150 kWh), but more than what field studies have found (20 kWh). The 2015 number matches that of the ITU report (345 TWh). Because the number of devices has stabilized and the unit consumption of a user device such as a laptop looks fairly stable, I have used the 2015 figure as my estimate for 2020. Notice that the 342 TWh number includes 45 TWh for phones, 34 TWh for laptops, 109 TWh for desktop PCs and 30 TWh for displays.
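The division mentioned above, written out explicitly (the only inputs are the figures quoted in this paragraph):

package main

import "fmt"

func main() {
	// Figures quoted above (Malmodin & Lunden, 2015).
	laptopTWh := 34.0 // yearly electricity consumption of all laptops, TWh
	laptops := 970e6  // installed base of laptops

	impliedKWh := laptopTWh * 1e12 / laptops / 1e3 // kWh per laptop per year
	naiveKWh := 60.0 * 8 * 365 / 1e3               // 60 W x 8 h/day x 365 days
	fieldKWh := 20.0                               // order of magnitude from field studies

	fmt.Printf("implied: %.0f kWh, naive: %.0f kWh, field: %.0f kWh (per laptop, per year)\n",
		impliedKWh, naiveKWh, fieldKWh)
	// The implied ~35 kWh (34 kWh in the paper, rounding aside) sits between the naive
	// back-of-the-envelope (~175 kWh) and the field measurements (~20 kWh).
}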

The arrival of blockchain and bitcoin mining makes for a totally different picture, that of true exponential growth, with a worldwide consumption of approximately 100 TWh in 2022, that is, half of all other servers worldwide. On this topic the best source that I found is the IEA report “Data Centres and Data Transmission Networks” (quite recent: September 2022). The figures are pretty consistent with what is reported by Statista on Bitcoin energy consumption (from which I drew the 70 TWh figure reported in the table below).

The most significant contributor of the E&M segment is the consumption of TV sets. There are conflicting trends at work: on the one hand, the move from LCD to LED has reduced the unit consumption of a pixel; on the other hand, TV sets have grown in size and in number of pixels (from HD to full HD to 4K and, maybe, 8K). However, according to Malmodin’s paper, our previous CRT and LCD TVs were such power hogs that the newer, larger LED sets result in a continuous improvement, from 200 kWh in 2010 to 140 kWh in 2020. The estimated total of 1900M TV sets worldwide in 2015 generated 160 TWh of electricity consumption. Together with STBs, home theaters and other devices, the electricity consumption of the E&M sector in 2015 was 467 TWh.

 

3. LCA (Life Cycle Analysis) and Carbon Footprint for Manufacturing

Let us start with data centers. The carbon footprint reported by Malmodin and Lunden for 2015 is 160 Mt of CO2, with a breakdown of 135 Mt for operations and 25 Mt for scope 3, mostly manufacturing. Operations being mostly electricity consumption, it means that Malmodin uses a rather high CO2 intensity for the worldwide average, namely 560 g/kWh. We will return to that question later on. If we divide the manufacturing number by the number of servers, we get approximately 500 kg/year per server for scope 3, which is consistent with a number of spec sheets from Dell or HP, actually on the high side. We notice that scope 3 is roughly 16% of the total footprint, a figure that we find in many documents from server manufacturers, but which often assumes a high CO2 intensity for operations. With a more realistic CO2 intensity, the previously mentioned Dell server represents 320 kg/year for manufacturing and 1760 kWh/year of electricity consumption, which can be evaluated at 680 kg/year for a better mix of 380 g/kWh, leading to a 30% scope 3 contribution to the footprint. This discussion may be found in “The carbon footprint of servers”. To better understand the carbon footprint of a server, I recommend reading the longer Boavizta article. To retrofit to 2010 and to extrapolate to 2020, we need to understand the variation of the manufacturing footprint, which is hard because the topic is newer than electricity consumption for servers. In the following table, I have assumed constant carbon unit costs (cost per server). The result is a slow progression between 2010 and 2020 that reflects the growth in the number of servers.
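To make the sensitivity of the scope 3 share to the assumed grid intensity explicit, here is a small Go sketch using the Dell figures quoted above (320 kg/year of amortized manufacturing, 1760 kWh/year of electricity); the two grid intensities are the ones discussed in this section.

package main

import "fmt"

// scope3Share returns the manufacturing share of the yearly footprint of a server,
// given its amortized manufacturing footprint (kg CO2/year), its electricity
// consumption (kWh/year) and the CO2 intensity of the grid (g CO2/kWh).
func scope3Share(manufKgPerYear, kWhPerYear, gridGPerKWh float64) float64 {
	operationsKg := kWhPerYear * gridGPerKWh / 1000
	return manufKgPerYear / (manufKgPerYear + operationsKg)
}

func main() {
	// Dell figures quoted above.
	manuf, elec := 320.0, 1760.0

	fmt.Printf("at 560 g/kWh: %.0f%% scope 3\n", 100*scope3Share(manuf, elec, 560)) // ~25%
	fmt.Printf("at 380 g/kWh: %.0f%% scope 3\n", 100*scope3Share(manuf, elec, 380)) // ~32%, the ~30% quoted above
}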

As far as devices are concerned, scope 3 (manufacturing) is the major driver of the carbon footprint. For 2015, Malmodin & Lunden report a total scope 3 footprint of 196 Mt, including 64 Mt for smartphones, 32 Mt for laptops and 28 Mt for desktops. The associated CUC (carbon unit cost) for a laptop is 200 kg (163M new units shipped producing 32 Mt), which is lower than the typical 300 kg that we find in other, more recent studies. The typical CUC numbers for a modern laptop are 300 kg for manufacturing and 100 kg for usage (for instance, from the Circular Computing article “What is the carbon footprint of a laptop?”). As before (electricity consumption), it is hard to get a common number from different sources (for instance, look at the Oxford IT services paper). When looking at the PCF (Product Carbon Footprint) of the Lenovo T490, the first figure one sees is 615 kg of lifetime footprint, until you read further down that, because of the large variation, Lenovo reports the 95th percentile confidence number (safe by overestimation) rather than the average of 421 kg (+/- 108) which is written below. When trying to adjust to 2010 and 2020 values, it looks like the unit costs grew from 2010 to 2015 as laptops became more sophisticated, but that the manufacturing unit cost has stabilized since then. These are the hypotheses that I used, more generally, to extrapolate the 2015 total device footprint to 2010 and 2020.
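The implied laptop CUC, written out explicitly from the aggregates quoted above:

package main

import "fmt"

func main() {
	// Laptop aggregates quoted above (Malmodin & Lunden, 2015).
	scope3Mt := 32.0    // total manufacturing footprint of laptops, Mt CO2
	shipmentsM := 163.0 // laptops shipped that year, millions of units

	cucKg := scope3Mt * 1e9 / (shipmentsM * 1e6) // 1 Mt = 1e9 kg
	fmt.Printf("implied CUC: %.0f kg CO2 per laptop\n", cucKg) // ~196 kg, i.e. the ~200 kg quoted above
}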

To compute the carbon footprint of ICT, we need an evaluation for networks. For 2015, Malmodin reports a total footprint of 169 Mt, which has grown from 144 Mt in 2010. It is hard to find much information about the scope 3 / scope 2 structure of the network footprint (neither in the paper nor in its references, such as “The electricity consumption and operational carbon emissions of ICT network operators”), but the report states that the total scope 3 for networks and data centers is approximately 50 Mt, which I have used here (25 Mt for each). Altogether, we get a carbon footprint of 730 Mt for ICT, and a retrofit that gives 700 Mt in 2010. The 2010 number is very consistent with what was found in the NATF report, and also with what is reported in the study “The climate impact of ICT: A review of estimates, trends and regulations” by Charlotte Freitag, Mike Berners-Lee, Kelly Widdicks, Bran Knowles et al. For 2015, we have a similar value of 730 Mt proposed by Colin Cunliff in the previously quoted ITIF paper.

To get a reasonable estimate for 2020, I have used the previously mentioned CUC together with the expected resource units. The big change is the necessity to include bitcoin mining as a new category. I have applied the same scope 2 / scope 3 structure as data centers to extrapolate the CO2 footprint from the electricity consumption. There is a lot of uncertainty about the CO2 intensity of the electricity consumed by bitcoin mining, as explained on the Cambridge Bitcoin website. Thus, I evaluated the 2020 footprint of bitcoin at 75 Mt.

To get a full estimate of the “digital footprint”, we need to add the E&M sector. Malmodin & Lunden estimate the total footprint at 280 Mt in 2015, out of which 160 Mt are due to TV sets (note that STBs account for 53.3 Mt). For TV sets, scope 3 (mostly manufacturing but also shipping) accounts for 70 Mt, which yields a CUC (carbon unit cost) of approximately 300 kg (obtained from the shipment volume). The number of TV sets has grown from 1.47 billion in 2010 to 1.6 billion in 2015 according to Statista (while Malmodin quotes a higher figure of 1.9 billion), but the growth is now very slow. When we add all the numbers, we get a digital footprint of 1154 Mt in 2020, which is consistent with the values presented in the Freitag paper.



4. Prospective Analysis for 2030

This fourth section is very different from the previous two. Before, I tried to collect and sort out published numbers, with my own attempt to select sources that I believe to be credible. Here I will build my own analysis and share with you my resulting forecast for 2030. This is just food for thought, you should do your own … I believe in sharing my thoughts and exposing myself to constructive criticism, but I have no crystal ball and what follows only reflects what I think today and does not pretend to be right or accurate.

For the server part, I consider that the progress on server unit consumption will continue, as the world moves to AMD servers of newer generations that are indeed more energy-frugal. For the manufacturing part, there is also some hope coming from the lab, since the most energy-consuming step, that is lithography, is also making progress. We can expect a significant reduction of servers’ CUC in the future, but here I only assumed a small improvement by 2030. There is a more complex question of using “green energy” for data centers. Here I apply the CO2 intensity of electricity that is produced globally (which is what the studies that I have used are doing). You might think that, as large cloud providers are switching to green energy, their scope 2 footprint should become zero. On the other hand, green energy that is obtained through certificates does not really impact the planet if the regional mix does not change. The benefits of zero-carbon energy sources only materialize for the planet when their share becomes significant in the total mix, which is not the case at the worldwide scale. This is a complex topic, which I do not have the time to address here, so my forecast does not include the benefits of “greener sources of energy” for data centers.

The 2030 forecast reported in the conclusion totals 1060 Mt of CO2. The growth is mostly the growth of bitcoin mining, which is very hard to forecast. I propose 300 TWh in this table, but I really hope to be proven wrong. Any guess here is as good as mine, since 300 TWh is a huge slice of the electricity pie. For the other ICT categories, there is a fair amount of continuity since, at the first level of analysis, I have used constant CUC (carbon unit costs) and regular (linear) growth of resource units. As we learn more about the expected improvements in chip manufacturing, this table will be revised. As it stands, it is pretty conservative: it does not reflect much improvement, but no “crazy growth” either (with the exception of bitcoin mining).

The goal of the table was also to challenge some of the ratios (share of ICT or digital in the total CO2 / GHG footprint) so I have added the worldwide figures in the table to show the matching ratios. This requires a few comments:

1.       The values for 2010, 2015 and 2020 are easy to find; the values that I decided to put for 2030 are highly controversial, since they are related to the political claims of various governments. For instance, I would not feel comfortable proposing a number for France. However, at the worldwide scale, CO2 and GHG (greenhouse gas) emissions have evolved regularly enough that a conservative forecast is reasonably safe (+/- 10%).

2.       Beware of the difference between CO2 and GHG, which has been growing constantly over the past decades. Today, the non-CO2 gases account for half the warming effect of the emitted CO2. The 52Gt figure popularized by Bill Gates in his last book is no longer current. As a result, it seems fair to evaluate digital with its share of CO2 emissions (the bold line in my table), but if you divide by GHG, you get a significantly smaller number (the worked ratio below makes this explicit). In 2020, the share of digital in the GHG emissions is 2%.
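To make the two ratios explicit, assuming roughly 40 Gt of CO2 emissions in 2020 (my own round figure) and using the 58 Gt of GHG and the 1154 Mt digital footprint quoted above:

1154 Mt / ≈40 000 Mt CO2 ≈ 2.9%        versus        1154 Mt / 58 000 Mt CO2eq ≈ 2.0%

This is why the same digital footprint can be honestly quoted as “almost 3%” or “about 2%”, depending on the denominator.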

 

5. Conclusion

The following table reports all the numbers presented in this blog post. I have colored the cells to reflect my level of confidence. Green means that I was able to cross-check against a couple of references with an error level that seems below 10%. Orange cells contain figures where I still lack enough sources or where I have an open question, but which I consider to be “a practical hypothesis” with an expected uncertainty level of 30%. Pink cells are “educated guesses”, offered to give a complete prospective view, but with no confidence. Obviously, this is “Work in Progress” (WIP) and is subject to regular updates.

 


I decided to share this table, despite its WIP status, because it is hard to collect these numbers, which are badly needed to form one’s own opinion. Not only is the ICT impact often exaggerated, but above all the trends are inflated. It takes a long time to follow the “data thread” to find out which data source was used in the first place. To illustrate this with an example, I received some “digital footprint ratios” while participating in the “fresque du numérique”, which quoted ARCEP numbers, which are based on ADEME, which quotes The Shift Project and greenit.fr, which eventually quote published research papers that are mostly about forecasts. A similar story could be told about the report of the “Convention Entreprise Climat” (cf. page 118).

 

To conclude, I will propose three pieces of advice:

-          Beware of percentages (such as “the impact of digital is 4% of greenhouse gases”): make sure that you are told which ratio is used, that is, if you are given A/B as a percentage, make sure that you know which values are taken for A and B. When you see a document that has only percentages and no CO2 Mt values, keep a critical eye!

-          Once you are given A & B, make sure that the scope is clear: what is included and excluded, what the resource units are, and what the “carbon unit costs” (CUC) are.

-          Always check the references to find which sources are actual collected data analyses versus prospective studies. I have read more than 100 articles to prepare this post, while the actual number of trustworthy sources (published scientific articles) is less than 10, half of which are more of the “prospective” kind, trying to predict the future, as I did in Section 4. To illustrate this point, there are far more opinions about what the consumption of a laptop should be than scientific studies based on the collected consumption of real users.

 


Tuesday, November 1, 2022

Software Craftsmanship Through Beautiful Code

 

1.Introduction 

I wrote a post this summer, for our Michelin IT blog, entitled “Software-Driven Excellence For Michelin”, where I explain why software excellence is critical today. The pitch is written about Michelin, but it applies to most companies today since it is the logical consequence of two trends that I have commented on abundantly in this blog: “software is eating the world” and “the expected value delivery, from technology to innovation, is accelerating”. As a consequence, each company needs to manage software as flows, and to deploy the best “Accelerate practices” following the lead of the software best-in-class. This is one of the central ideas of my own book, “The Lean Approach to Digital Transformation”, inspired by many key thinkers: one must organize one’s software development process with the best possible level of mastery of CICD, DevOps, product mode, and so on. It does not mean that “one size fits all” and that we should all copy the best Silicon Valley software startups and scaleups. For instance, the great article from Gergely Orosz, “How Big Tech Runs Tech Projects and the Curious Absence of Scrum”, is very insightful for everyone, but companies have different levels of maturity and skills. The practice of Scrum is sometimes superfluous, and sometimes quite effective. When defining what software-driven means for them, companies need to be humble and realistic to fairly assess their current level of software excellence. The “talent density”, the level of margins and the ability to adapt the compensation structure to the current “talent war”, the size of the developer network and how well it is intertwined with open source communities at scale … all weigh on the capacity to adopt some of the “best of the best” practices.

This being said, software in general and artificial intelligence in particular are indeed eating the world, so software excellence is a must for every company. For most companies, software excellence is a “collective sport” that requires developing a community, inside and outside the company. As Bill Joy wrote, “there are more smart people outside than inside your company”, so leveraging the software community at large, with the multiplicity of open source – and commercial – software platforms, is a must. Software excellence is about software engineering, in the sense of this beautiful quote: “Programming is what you do on your own, software engineering is what you do through time and space” (I heard this quote from a guest on the Thoughtworks podcast, any help to attribute it properly is welcome). What you do “through time” is mostly about maintainability and agility (the ease of change), what you do “through space” is mostly about collaboration and programming in the large (the ease of sharing code and intent). Software craftsmanship is one of the ways to reach software excellence. In the seventh chapter of my book, I give a few pointers about the benefits of software craftsmanship:

-    Software craftsmanship, as its manifesto shows, is embedded in the agile movement, but goes further to recognize the importance of “the craft”, the know-how of the software developers (and there are many roles here, not only programmers). Software craftsmanship inherits the lean practice of “standards”, which belong to the teams and evolve continuously: “Developing programming standards - or refactoring standards - is both necessary to encourage "code reviews" and "peer programming", and useful to let the team organize its continuous training, alone or within a "guild" of programmers who share a technical or functional domain”.

-    The “beauty of the code” comes from sharing, which is at once a practice to grow quality (“more eyeballs find more bugs”), a way to develop craft (to quote the manifesto: “Not only individuals and interactions, but also a community of professionals”) and a way to foster reuse. This starts at the team level: “digital transformation reintroduces the need for code reviews”.

-    Beauty is also defined “in time and space”, that is the ease of maintenance and of adapting the code to further needs, and the ease of collaboration (previous point). This is a key point of chapter 7 and the topic of various blog posts such as “The business value of code elegance in the digital age”.

-    Sharing is actually a good proxy for “beauty” and yields a practical metric, quite similar to Google’s PageRank for web content.

-  In this book chapter, I define “code elegance” as the combination of minimalism, readability of intent and “virality of design” (which is the combination of genericity of reusable patterns and the cleverness that makes the code seen as a “useful, reusable trick”)

Today’s post will explore some aspects of software craftsmanship through the prism of “beautiful code”, thanks to a few classical books. Section 2 is centered around Bob Martin’s “Clean Code” book. It is a collective book with many prestigious contributors, but under the leadership of Bob Martin (with many references to his large set of contributions). I will focus on “Clean Code” although there are other books whose titles focus on craftsmanship, because I find “Clean Code” to be the “source book”. I will also mention briefly “Clean Architecture”, because code and software architecture are intimately related, but the focus in this post is mostly code. Section 3 looks at another great resource, the 2007 collective book “Beautiful Code”. This book, together with another great classic, “Programming Pearls”, is a wonderful read for anyone who likes to write code, but also a great source of wisdom to understand what “craftsmanship” is. I will then conclude with a short comment about user experience design, which is not the focus of the books selected for this post, but is nevertheless a key component of modern software craftsmanship, together with system reliability engineering.



2. Clean Code

 

« Clean Code: A Handbook of Software Craftsmanship », by the legendary “Uncle Bob”, was published in 2008. As is the case with “Beautiful Code”, these are not the most recent books that one can read about software craftsmanship, and they show their age, for instance with the lack of focus on user experience design. Still, this is a collective book with contributions from the “best software minds” and the editing/writing talents of Bob Martin. This is definitely not a book to summarize, it is a book to read deeply and “work with”. As Bob Martin says in the introduction: « Learning to write clean code is hard work. It requires more than just the knowledge of principles and patterns. You must sweat over it. You must practice it yourself, and watch yourself fail. You must watch others practice it and fail. You must see them stumble and retrace their steps. You must see them agonize over decisions and see the price they pay for making those decisions the wrong way ». Software craftsmanship is seen as a discipline, a long journey towards “mastery”, in the sense of Daniel Pink.

Software craftsmanship is about experience. It takes time to develop the “craft”, the intuitive judgment about what could work and what probably won’t stand the test of travel through space and time. It is not a set of practices that could be codified and embedded into KPIs. This does not mean that there are no software quality metrics, it just means that knowing them is not enough: “The bad news is that writing clean code is a lot like painting a picture. Most of us know when a picture is painted well or badly. But being able to recognize good art from bad does not mean that we know how to paint. So too being able to recognize clean code from dirty code does not mean that we know how to write clean code!”

 

I am not the only one to see the lean roots in software craftsmanship. In his foreword, James Coplien recognizes the relevance of lean 5S to software craftsmanship. I have borrowed the 5S idea from Mary Poppendieck in my own book; here I quote (and compress) his foreword:

The 5S philosophy comprises these concepts:

• Seiri, or organization (think “sort” in English). Knowing where things are—using approaches such as suitable naming—is crucial.

• Seiton, or tidiness (think “systematize” in English). There is an old American saying: A place for everything, and everything in its place. A piece of code should be where you expect to find it—and, if not, you should re-factor to get it there.

• Seiso, or cleaning (think “shine” in English): Keep the workplace free of hanging wires, grease, scraps, and waste.

• Seiketsu, or standardization: The group agrees about how to keep the workplace clean. Do you think this book says anything about having a consistent coding style and set of practices within the group? Where do those standards come from? Read on.

• Shutsuke, or discipline (self-discipline). This means having the discipline to follow the practices and to frequently reflect on one’s work and be willing to change.

The topic of the book is “clean code”. Many definitions are given in the book such as “Clean code can be read, and enhanced by a developer other than its original author. It has unit and acceptance tests. It has meaningful names. It provides one way rather than many ways for doing one thing. It has minimal dependencies, which are explicitly defined, and provides a clear and minimal API”. I will now share some of the thoughts extracted from reading the book, as filtered through the prism of “beautiful code”.

 

2.1 Beautiful Code as Writing Standards

Beautiful code starts with a code that is easy to read: “Clean code is simple and direct. Clean code reads like well-written prose. Clean code never obscures the designer’s intent but rather is full of crisp abstractions and straightforward lines of control”.  In particular, the writing standards in the book deal with:

-   Names: proper naming conventions are critical to readability, especially from a time and space perspective. Names must use real words but avoid excessive length, with a focus on “revealing the intention” and using “searchable” words.

-   Functions: functions must be short (approx. 20 lines long), with a clear purpose (one main goal for each function, that is precisely captured by the name). Functions should “really do one thing” and keep their number of input arguments below 3. Functions must be homogeneous as far as the abstraction level is concerned: “Mixing levels of abstraction within a function is always confusing. Readers may not be able to tell whether a particular expression is an essential concept or a detail. Worse, like broken windows, once details are mixed with essential concepts, more and more details tend to accrete within the function”. One could write a complete 10-page essay just on this topic. I find that “your mileage may vary”, based on the type of language and type of problem you are addressing, but these few pieces of advice do stand the test of time. There are a few comments about how to address exception handling which I truly enjoyed, now that I spend a fair amount of time writing Go code (exception handling is not Go’s strong point) – such as: “Returning error codes from command functions is a subtle violation of command query separation. It promotes commands being used as expressions in the predicates of if statements. … Try/catch blocks are ugly in their own right. They confuse the structure of the code and mix error processing with normal processing. So, it is better to extract the bodies of the try and catch blocks out into functions of their own”. A short Go sketch after this list illustrates some of these points.

-   Commenting: this is another topic where the range of opinions is wide. The book points out the obvious “bad (unnecessary) comments”. Comments should never be an excuse for bad code. However, there are cases where comments are useful to enrich the explanation for intent, or to warn about a non-trivial design decision. This is something that we will see in the next section. I refer the reader to Chapter 17 about the “commenting” smells. This final quote summarizes the author’s viewpoint, which I share: “Code should be literate since depending on the language, not all necessary information can be expressed clearly in code alone. Big Dave shares Grady’s desire for readability, but with an important twist. Dave asserts that clean code makes it easy for other people to enhance it. This may seem obvious, but it cannot be overemphasized. There is, after all, a difference between code that is easy to read and code that is easy to change”.

-   Formatting: how to set up the code in documents, pages and lines. This is a key topic for “time and space” code motion: “First of all, let’s be clear. Code formatting is important. It is too important to ignore, and it is too important to treat religiously. Code formatting is about communication, and communication is the professional developer’s first order of business”. For instance, the recommendation for horizontal formatting is to avoid excessive indentation (short functions) and to keep line length between 80 and 120 characters (depending on the sources, I tend to like 100). The order is also important, as one might guess: “If one function calls another, they should be vertically close, and the caller should be above the callee, if at all possible. This gives the program a natural flow”.
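To make these writing standards concrete, here is a small Go sketch (a toy example of my own, not taken from the book; the file name users.txt is hypothetical): the function names reveal intent, each function does one thing at a single level of abstraction, and error handling is kept out of the main logic (Go’s error returns playing the role of the extracted try/catch bodies).

package main

import (
	"fmt"
	"os"
	"strings"
)

// loadUserNames reads a file with one user name per line and returns the non-empty,
// trimmed names. The name states the intent, the function does one thing at one
// level of abstraction, and error handling is kept apart from the "happy path".
func loadUserNames(path string) ([]string, error) {
	content, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("loading user names from %s: %w", path, err)
	}
	return nonEmptyLines(string(content)), nil
}

// nonEmptyLines splits a text into lines and drops the blank ones.
func nonEmptyLines(text string) []string {
	var lines []string
	for _, line := range strings.Split(text, "\n") {
		if trimmed := strings.TrimSpace(line); trimmed != "" {
			lines = append(lines, trimmed)
		}
	}
	return lines
}

func main() {
	names, err := loadUserNames("users.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println(names)
}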

2.2 Beautiful code as patterns

What makes the code both easy to read and to reuse is the clear identification of abstraction patterns. Patterns may be constructs of the programming languages such as classes or data structures (more about this in the next section). Patterns may be algorithmic or software architecture “parametric recipes”, such as the concurrency patterns proposed in Chapter 13 of this book (a topic where the age of the book shows). The book includes lots of suggestions about OOP (Object-oriented programming), some of which are borrowed from Bob Martin’s other books: “No matter which module does the calling and which module is called, the software architect can point the source code dependency in either direction. That is power! That is the power that OO provides. That’s what OO is really all about—at least from the architect’s point of view”. As he has famously written: “A class should have only one reason to change”, that is, a unique stakeholder with a unique goal for the class. This is the Single Responsibility Principle (SRP). It follows that classes should be small (long classes multiply the risk of violating SRP). Since maintainability and change management go hand in hand, we read in Chapter 10 that the class hierarchy should be organized for change.
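As a toy illustration of the Single Responsibility Principle (my own example, not one from the book), here is a Go sketch where the tax rule and the invoice layout live in two small types, each with a single reason to change:

package main

import "fmt"

type Invoice struct {
	Customer  string
	AmountEUR float64
}

// TaxCalculator owns the business rule; it changes when the tax law changes.
type TaxCalculator struct{ Rate float64 }

func (t TaxCalculator) TotalWithTax(inv Invoice) float64 {
	return inv.AmountEUR * (1 + t.Rate)
}

// InvoicePrinter owns the presentation; it changes when the layout changes.
type InvoicePrinter struct{}

func (InvoicePrinter) Print(inv Invoice, total float64) {
	fmt.Printf("Invoice for %s: %.2f EUR (incl. tax)\n", inv.Customer, total)
}

func main() {
	inv := Invoice{Customer: "ACME", AmountEUR: 100}
	total := TaxCalculator{Rate: 0.20}.TotalWithTax(inv)
	InvoicePrinter{}.Print(inv, total)
}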

 
2.3 Beautiful code as practices

 

Craftsmanship is not simply about the craft object (the code) but about the practices to deliver this object. The two most obvious are testing and refactoring. Testing is an obvious topic here, and should be the topic of another post, as there is much to say. Testing is part of craftsmanship: “First Law: You may not write production code until you have written a failing unit test. Second Law: You may not write more of a unit test than is sufficient to fail, and not compiling is failing. Third Law: You may not write more production code than is sufficient to pass the currently failing test”. Just to quote one of the many suggestions, tests should be fast, independent, repeatable, self-validating and timely (FIRST). Refactoring is also an obvious practice since the consequence of agility is emergent design (and continuous architecture, for similar reasons). Chapter 12, written by Jeff Langr, is devoted to the practice of emergent design. Design does not precede coding in a rigid (waterfall) way, it co-evolves with the actual programming. This is the heart of iterative software development methods. But there is “a law of nature” (see my book on “lean digital transformation”) that applies here: any emergent (incremental) process produces waste that needs to be addressed through pruning and refactoring. This is what nature does for biological processes (life), this is what we must do as software craftsmen. Chapter 14 deals with successive refinements, which reminds me of the “fractal programming method” that we had formalized at Bouygues’s e-Lab 20 years ago: start with the big picture (a tree), get a program that runs with dummy functions, write the unit tests – starting at the edge with input data extraction – and then replace the leaves in the tree with successive refinements, starting with the most difficult problem first.
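As an illustration of the three laws and of FIRST, here is a sketch built around a hypothetical pricing package of my own (not an example from the book):

// pricing.go – the production code, written just after the failing test below
package pricing

// TotalWithTax is the minimal code needed to make the test pass.
func TotalWithTax(amount, rate float64) float64 {
	return amount * (1 + rate)
}

// pricing_test.go – written first (red), then made to pass (green), then refactored
package pricing

import (
	"math"
	"testing"
)

// The test is Fast, Independent, Repeatable, Self-validating and Timely (FIRST).
func TestTotalWithTaxAddsTwentyPercent(t *testing.T) {
	got, want := TotalWithTax(100, 0.20), 120.0
	if math.Abs(got-want) > 1e-9 {
		t.Errorf("TotalWithTax(100, 0.20) = %v, want %v", got, want)
	}
}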

  

2.4 Clean code as principles
 

The book starts with a collection of quotes from famous software giants, such as Bjarne Stroustrup, inventor of C++ and author of The C++ Programming Language, who says: “I like my code to be elegant and efficient. The logic should be straightforward to make it hard for bugs to hide, the dependencies minimal to ease maintenance, error handling complete according to an articulated strategy, and performance close to optimal so as not to tempt people to make the code messy with unprincipled optimizations. Clean code does one thing well”. To achieve these goals, a number of guiding principles have emerged over the years. A first rule says that you should avoid repetition: “This is one of the most important rules in this book, and you should take it very seriously. Virtually every author who writes about software design mentions this rule. Dave Thomas and Andy Hunt called it the DRY principle (Don’t Repeat Yourself). Kent Beck made it one of the core principles of Extreme Programming and called it: “Once, and only once.” Ron Jeffries ranks this rule second, just below getting all the tests to pass”. Bob Martin has proposed a set of five principles called SOLID:

-   The SRP (Single Responsibility Principle), which we just saw,

-   The Open-Closed Principle, which says that a class should be closed for modification and open for extension (through subclasses).

-   The Liskov Substitution Principle, which says that something that is true for instances of a class should stay true for instances of its subclasses (or subtypes). For instance, a consequence of this principle is that function return types should be covariant and argument types contravariant (a cool controversy in the “types for OOP” community, of which I was a member 30 years ago). Here also, a separate 10-page blog post would be necessary to do justice to this question.

-   The Interface Segregation Principle, which says that a client of another class/module should not depend on the interfaces that it does not use.

-   The Dependency Inversion Principle, which is another OOP principle about class structures: high-level modules should not depend on lower-level modules, and abstractions should not depend on concrete classes (a small Go sketch after this list illustrates it).
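Here is the small Go sketch of the Dependency Inversion Principle (my own toy example; the Notifier/Checkout names are invented for the illustration): the high-level policy depends on an abstraction, and the concrete detail implements it.

package main

import "fmt"

// Dependency inversion: the high-level policy (Checkout) depends on the Notifier
// abstraction, not on a concrete delivery mechanism; the concrete EmailNotifier
// depends on (implements) the abstraction, not the other way around.

type Notifier interface {
	Notify(message string)
}

type Checkout struct {
	notifier Notifier // the high-level module only sees the interface
}

func (c Checkout) Complete(order string) {
	// ... business rules would go here ...
	c.notifier.Notify("order completed: " + order)
}

// EmailNotifier is a low-level detail that can be swapped (SMS, Slack, a test fake …)
// without touching Checkout.
type EmailNotifier struct{}

func (EmailNotifier) Notify(message string) { fmt.Println("email:", message) }

func main() {
	Checkout{notifier: EmailNotifier{}}.Complete("#1234")
}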

 
2.5 Clean Architecture

In “Clean Code”, Bob Martin also mentions the “PPP set of principles”: SRP, OCP and the Common Closure Principle, which says that classes that tend to change for the same reasons should be placed together in the same module. This principle is explained in the other book that I mentioned: “Clean Architecture: A Craftsman's Guide to Software Architecture and Design”. Software architecture is a key aspect of craftsmanship that would require a different post. Although this post is about beautiful code, I decided to include a few ideas gathered from “Clean Architecture”. Obviously, this is even more partial, incomplete and biased than usual, so I urge you to read the two books. There are clear links and resonances between beautiful design at the code level and at the architectural level: modularity, readability of intent, minimality … to name a few. This quote from Grady Booch illustrates the proximity with the ambition mentioned in the introduction: “Architecture represents the significant design decisions that shape a system, where significant is measured by cost of change”. “Clean Architecture” was published in 2017, so it represents a viewpoint that is closer to the present state of software development.

  1. This book is heavily concerned with “maintainability”, including agility, that is the possibility to evolve a software system “at the speed of business”: “The goal of software architecture is to minimize the human resources required to build and maintain the required system”. As mentioned in the introduction, “elegant code” that is easier to change and to fix brings business value (in this book, “maintenance” is seen as the complete corrective and evolutive scope): “Of all the aspects of a software system, maintenance is the most costly. The never-ending parade of new features and the inevitable trail of defects and corrections consume vast amounts of human resources”. Further along in the book, the author emphasizes that architecture is about building sustainable (agile, maintainable, evolutive) systems: “However, the architecture of a system has very little bearing on whether that system works. There are many systems out there, with terrible architectures, that work just fine. Their troubles do not lie in their operation; rather, they occur in their deployment, maintenance, and ongoing development”. Without surprise, clean architecture heavily relies on the SOLID principles: “How would we have solved this problem in a component-based architecture? Careful consideration of the SOLID design principles would have prompted us to create a set of classes that could be polymorphically extended to handle new features”. I would like to point out the obvious and dissent with the exclusive focus on maintainability: architecture is also critical for operations, to improve performance, operability, high availability, reliability and robustness to load.
  2. With agile maintenance in mind, the core of design and architecture is modularity, which is the art of drawing boundaries to regroup things that change together and to separate things that can be (up to a point) designed independently: “Software architecture is the art of drawing lines that I call boundaries. Those boundaries separate software elements from one another, and restrict those on one side from knowing about those on the other” ... “Gather into components those classes that change for the same reasons and at the same times. Separate into different components those classes that change at different times and for different reasons”. This is precisely what I used to teach at Ecole Polytechnique 10 years ago, although applying this simple piece of advice is much more an art than a science. Understanding the “landscape of change” is hard and usually comes with experience. This is not something that you can grasp when you start with a blank page, hence you need to practice continuous architecture (cf. Section 2.3 on emergent design and successive refinement): “The issues we have discussed so far lead to an inescapable conclusion: The component structure cannot be designed from the top down. It is not one of the first things about the system that is designed, but rather evolves as the system grows and changes”. The modular architecture is the foundation for designing a system: “We use polymorphism as the mechanism to cross architectural boundaries; we use functional programming to impose discipline on the location of and access to data; and we use structured programming as the algorithmic foundation of our modules”.
  3. Understanding the “landscape” (structure) of change means being able to distinguish “quanta of changes” and “bounded contexts” (back to drawing lines). The practical goal of software craftsmanship is to reduce the effort necessary to process the quanta of change, while the goal of architecture is to “grind the quanta” to improve maintainability.  The importance of designing for modularity and properly identifying the frequency and root causes of expected change cannot be overstated. This is the core of multi-modal architectures as described in my own book: “Boundaries are drawn where there is an axis of change. The components on one side of the boundary change at different rates, and for different reasons, than the components on the other side of the boundary”. This also applies to proper design of system tests. Systems must be designed in a way that makes testing easier, and that makes maintaining the system tests easier: “The solution is to design for testability. The first rule of software design—whether for testability or for any other reason—is always the same: Don’t depend on volatile things. GUIs are volatile. Test suites that operate the system through the GUI must be fragile. Therefore, design the system, and the tests, so that business rules can be tested without using the GUI”.
  4. Because the book is more recent, it offers a much better (more actionable) look at concurrency issues. It starts with the fundamental distinction between stateless and stateful components, and the associated concept of mutable state. As we all know, concurrency issues start with the distribution of mutable states (cf. the “CAP theorem”): “Since mutating state exposes those components to all the problems of concurrency, it is common practice to use some kind of transactional memory to protect the mutable variables from concurrent updates and race conditions”. Trying to summarize principles about concurrent programming in one paragraph does not make sense, but I pick the following quote as a teaser to read the book: “Architects would be wise to push as much processing as possible into the immutable components, and to drive as much code as possible out of those components that must allow mutation”. From the obvious difficulty of managing mutable states at large comes the next evolution towards event-driven architecture: “This is the idea behind event sourcing. Event sourcing is a strategy wherein we store the transactions, but not the state. When state is required, we simply apply all the transactions from the beginning of time”. A minimal Go sketch of this idea follows this list.
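Here is the minimal event-sourcing sketch announced above (my own toy example in Go, not code from the book): the event log is the source of truth, and the current state is recomputed by replaying it.

package main

import "fmt"

// A minimal event-sourcing sketch: we store the transactions (events), not the state;
// the current balance is recomputed by replaying all the events from the beginning of time.

type Event struct {
	Kind   string // "deposit" or "withdrawal"
	Amount int
}

// balance folds the immutable event log into the current state.
func balance(log []Event) int {
	total := 0
	for _, e := range log {
		switch e.Kind {
		case "deposit":
			total += e.Amount
		case "withdrawal":
			total -= e.Amount
		}
	}
	return total
}

func main() {
	log := []Event{
		{Kind: "deposit", Amount: 100},
		{Kind: "withdrawal", Amount: 30},
		{Kind: "deposit", Amount: 5},
	}
	fmt.Println("current balance:", balance(log)) // 75
}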

 

3. Beautiful Code


The book for this section, “Beautiful Code – Leading Programmers Explain How They Think”, is a collection of essays edited by Andy Oram and Greg Wilson. Each essay is written by one « leading programmer » who has selected a favorite piece of software to explain what “beautiful code” means to him or her. This is not a book to summarize, since the main interest is to use it as a source of inspiration for your own programming. However, with each commented fragment of program come a few detailed explanations about what makes this style of design, programming, or algorithm noteworthy. Here I have extracted a few thoughts about what makes code beautiful. If I step back, I find that the main interest of the book is its set of great articles about programming domains such as bioinformatics, the CERN library, STM (Software Transactional Memory) in Haskell, or the enterprise system for NASA’s Mars Rover Mission, to name a few. So, I invite you to read the book (it takes time, but it comes in a lot of smaller bites).

Beautiful code is useful, generic, elegant and efficient. Without surprise, we find that the main goals for “beauty” in code are similar to what we saw in the previous section. The emphasis on “useful” is different because the kind of programming addressed in “Beautiful Code” is deeper (and lower level) than what is addressed in “Clean Code” (the first is more about elementary software libraries and algorithmic components, the second is more about “regular IT” software development). If I had to summarize the whole book with one sentence, I would say that “software craftsmanship is sustainable performance: how to achieve speed and effectiveness without compromising readability and maintainability”. There is a wide consensus on the fact that being useful (that is, being used by a large community and reused in multiple contexts) is the pragmatic meaning of “beauty”. Genericity speaks of abstractions and patterns (see later), of the fact that a fragment of code, such as the implementation of Python dictionaries, can be used in a very diversified set of contexts. The need for elegance is captured in the title of one of the articles: treat “code as an essay”. This article quotes the “seven pillars of pretty code”:

-  Blend in (keep a consistent programming style, throughout the life of the software system)

-   Bookish: formatting tips, including short lines,

-   Disentangle code blocks,

-   Comment code blocks,

-  Declutter (cf. the previous section, clean-up and refactor, as told by Jon Bentley: Strive to add function by deleting code)

-    Make alike look alike (facilitate reading by making intent more visible).

Keeping things simple (as simple as possible but not simpler), concise, short and without clutter (hooks for future needs that will not materialize) is a piece of advice that is found everywhere in the book. Minimalism is a clear virtue in the great article (16) about the Linux driver model. It also emphasizes the importance of beautiful code to foster collaboration: “The Linux driver model shows how the C language can be used to create a heavily object-oriented model of code by creating numerous small objects, all doing one thing well … The development of this model also shows two very interesting and powerful aspects of the way Linux kernel development works. First, the process is very iterative … Second, the history of device handling shows that the process is extremely collaborative”. Concision is a critical property for most authors, especially since there is a tension between concision, performance (performance tuning often results in code expansion that improves speed at the expense of readability) and readability itself (concision is not always synonymous with easy-to-read). The “aha moments” of the book come from examples that combine the three.

Beautiful code often relies on beautiful data structures. They play a key role for beautiful programming in general and for the beautiful code in this book. They provide “easy representation” (easy to understand and easy to manipulate) and are “tuned to a purpose” (i.e., deliver the expected efficiency for the algorithm’s purpose). The book ranges from low-level data structures that increase the speed of an elementary algorithm (the next book is full of such examples) to higher-level data models. The introductory article from Brian Kernighan about regular expression matching shows the beauty of regular expressions as patterns, one of the most reused patterns of all time: “I don’t know of another piece of code that does so much in so few lines while providing such a rich source of insight and further ideas” (a great definition of beauty!). The article about the ERP5 data model (ERP from Nexedi) is a great testimony to the power of a well-crafted generic data model. With a few abstractions (the key concepts that are handled by the ERP), we get the two benefits of powerful combinations (the genericity of the model) and the elegance of the associated code. The process/data model acts as the framework which yields both readability and repeatability (speed of learning) when writing ERP code. A lot could be said here about code and data equivalence. A few articles show the power of treating code as data, which will come as no surprise to anyone with a LISP background. If I had more time, I would write about the CLAIRE programming language, my pet topic for the past 30 years.
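To give a feel for the piece of code Kernighan discusses (originally written in C by Rob Pike), here is a transliteration into Go, offered as a sketch rather than a faithful copy; it supports literal characters, ‘.’, ‘*’, ‘^’ and ‘$’.

package main

import "fmt"

// match reports whether the pattern matches anywhere in text.
func match(pattern, text string) bool {
	if len(pattern) > 0 && pattern[0] == '^' {
		return matchHere(pattern[1:], text)
	}
	for {
		if matchHere(pattern, text) {
			return true
		}
		if len(text) == 0 {
			return false
		}
		text = text[1:]
	}
}

// matchHere reports whether the pattern matches at the beginning of text.
func matchHere(pattern, text string) bool {
	switch {
	case len(pattern) == 0:
		return true
	case len(pattern) >= 2 && pattern[1] == '*':
		return matchStar(pattern[0], pattern[2:], text)
	case pattern == "$":
		return len(text) == 0
	case len(text) > 0 && (pattern[0] == '.' || pattern[0] == text[0]):
		return matchHere(pattern[1:], text[1:])
	}
	return false
}

// matchStar matches c* at the beginning of text, trying the shortest match first.
func matchStar(c byte, pattern, text string) bool {
	for {
		if matchHere(pattern, text) {
			return true
		}
		if len(text) == 0 || (text[0] != c && c != '.') {
			return false
		}
		text = text[1:]
	}
}

func main() {
	fmt.Println(match("^go*d$", "good"), match("o*d", "good"), match("^x", "good")) // true true false
}

The recursion between matchHere and matchStar is what makes the original “do so much in so few lines”; most of the Go version is just the translation of that structure.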

Reusability is fueled by beautiful patterns, which make the bridge with beautiful architecture. One such pattern is the use of recursion, which we find many times in the book, as is underlined by Brian Kernighan in his first article. The fourth article shows examples from search algorithms, with classical patterns such as binary search. Two other articles show the beauty of iterators as code abstractions. The “Distributed Programming with MapReduce” article is now a historical piece, because of the prevalence of MapReduce since then: “MapReduce has proven to be a valuable tool at Google. As of early 2007, we have more than 6000 distinct programs written using the MapReduce programming model”. Design matters. As told in one article that is deep into performance tuning: “If there is a moral to this story, it is this: do not let performance considerations stop you from doing what is right. You can always make the code faster with a little cleverness. You can rarely recover so easily from bad design”. The next article, from Michael Feathers about a framework for integrated tests, follows with: “I have some ideas about what good design is. Every programmer does. We all develop these ideas through practice, and we draw on them as we work”. This article offers a lesson in designing a generic and reusable framework through the proper choice of Java classes. Throughout the book, we visit a number of familiar patterns, such as functional programming, the use of callbacks, polymorphism and object-oriented programming, that are explained in the context of solving one problem with a “beautiful piece of code”. The correspondence between software and system architecture and the similarities between “clean code” and “clean architecture” are illustrated by a few papers about enterprise systems, such as NASA’s collaborative information portal: “code beauty for an enterprise system is derived partly from architecture, the way the code is put together. Architecture is more than aesthetics. In a large application, architecture determines how the software components interoperate and contribute to overall system reliability”. Some of the design principles advocated here are: standards-based, loose coupling, language independence, modularity, scalability and reliability. Architecture is about communication to other stakeholders, as illustrated by the following quote: “While the structure of a program of no more than a few hundred lines can be dictated by algorithmic and machine considerations, the structure of a larger program must be dictated by human considerations, at least if we expect humans to work productively to maintain and extend them in the long term”.
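As a reminder of what the MapReduce programming model looks like, here is a toy, single-machine word-count sketch in Go (my own illustration, with none of the distribution, fault tolerance or scale of Google’s implementation):

package main

import (
	"fmt"
	"strings"
)

// pair is an intermediate (key, value) record, as in the MapReduce programming model.
type pair struct {
	key   string
	value int
}

// mapPhase emits a (word, 1) pair for each word of each document.
func mapPhase(docs []string) []pair {
	var out []pair
	for _, doc := range docs {
		for _, w := range strings.Fields(doc) {
			out = append(out, pair{key: strings.ToLower(w), value: 1})
		}
	}
	return out
}

// reducePhase groups the intermediate pairs by key and sums their values.
func reducePhase(pairs []pair) map[string]int {
	counts := make(map[string]int)
	for _, p := range pairs {
		counts[p.key] += p.value
	}
	return counts
}

func main() {
	docs := []string{"beautiful code", "clean code", "beautiful tests"}
	fmt.Println(reducePhase(mapPhase(docs))) // map[beautiful:2 clean:1 code:2 tests:1]
}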

Tests can be beautiful too, and testing is a beautiful practice. I strongly recommend the 7th article, “Beautiful Tests”, by Alberto Savoia. He proposes a classification of tests which are all necessary: unit tests should be beautiful for their simplicity and efficiency; other tests will be beautiful because they help you improve both the code and your understanding of it; yet other tests are beautiful for their breadth and thoroughness, because they help you gain confidence that the functionality and performance of the code match requirements and expectations. I give you an abbreviated version of his conclusion: “Developers who want to write beautiful code can learn something from artists. Painters regularly put down their brushes, step away from the canvas, circle it, cock their heads, squint, and look at it from different angles and under different light. … think of testing as your way of stepping away from the canvas to look at your work with critical eyes and different perspectives – it will make you a better programmer and help you create more beautiful code”. Another article emphasizes the need for stress tests: “As software engineers, you are responsible for your own stress tests” (this comes with three pieces of advice: implement early, pound on it, and focus on the edge conditions).
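In that same spirit, here is a small sketch of what such tests could look like for the binary search above, mixing a few simple unit tests with a randomized, oracle-based test for thoroughness (the module name search is hypothetical; it is simply where I assume the earlier sketch lives):

import random
import unittest

from search import binary_search  # hypothetical module holding the sketch above

class TestBinarySearch(unittest.TestCase):
    def test_empty_sequence(self):
        self.assertEqual(binary_search([], 42), -1)

    def test_single_element(self):
        self.assertEqual(binary_search([7], 7), 0)
        self.assertEqual(binary_search([7], 3), -1)

    def test_agrees_with_linear_search(self):
        # Breadth and thoroughness: compare against an obviously correct oracle.
        for _ in range(1000):
            items = sorted(random.sample(range(10_000), random.randint(0, 50)))
            target = random.randrange(10_000)
            expected = items.index(target) if target in items else -1
            self.assertEqual(binary_search(items, target), expected)

if __name__ == "__main__":
    unittest.main()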

However, craftsmanship should not be reduced to “beautiful”. Some examples are harder to read, and some performance optimizations, which may prove necessary in some contexts, do make the code harder to understand and to change later on. In some cases we have this “aha moment” of something that is faster yet simpler; at other times, the “clever trick” that improves the speed of the algorithm is complex. The key point here is that “easy to read” is very subjective and depends on your own experience and expertise level. This is why expertise matters … because some of the problems do require some of the tricks that are explained in the two books presented in this section. This is also why comments may be useful, especially when they explain why a simpler and more obvious design actually failed. There are some examples of such comments (including one that is dubbed by its author as one of the most useful comments in the world). The overall goal of “intent readability” is hard in itself, and even harder when you factor in the heterogeneity of the readers. This being said, as someone who writes code over long periods of time with large breaks (the disadvantage of being a slasher), I find that comments about design options, and about why a simpler strategy was not used, are huge time savers. I find that this quote expresses the challenge of serving multiple experience levels: “Designing software to be used by other developers is a challenge. It has to be easy and straightforward to use because developers are just as impatient as everyone else, but it can’t be so dumbed-down that it loses functionality. Ideally, a code library must be immediately usable by naïve developers, easily customized by more sophisticated developers and readily extensible by experts”.
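Here is a contrived sketch (mine, not from either book) of the kind of “design option” comment I have in mind: the comment records why the obvious, simpler approach was rejected, which is exactly what a reader, or the author months later, cannot reconstruct from the code alone:

from collections import OrderedDict

class BoundedCache:
    """A small LRU cache.

    Design note: a plain dict plus a timestamp per key would be simpler, but it
    turns eviction into a linear scan on every insert; with many small, hot keys
    that simpler design showed up in our (hypothetical) profiles, so we keep an
    access-ordered structure instead.
    """

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._data: "OrderedDict[str, object]" = OrderedDict()

    def get(self, key: str):
        if key in self._data:
            self._data.move_to_end(key)        # mark as most recently used
            return self._data[key]
        return None

    def put(self, key: str, value) -> None:
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)     # evict the least recently used entry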

I will finish this section with a brief mention of Jon Bentley’s book, “Programming Pearls”, an edited collection of 25 “columns” (short essays). This post is already too long, so I will not attempt a synthesis but rather pick a few quotes that underline some of the ideas we just extracted from “Beautiful Code”. Because “Programming Pearls” is the work of a single author, it carries a crisp and coherent meaning for “clean code”. The overall principles for writing such code are quite similar to what we saw earlier: “Coding skill is just one small part of writing correct programs. The majority of the task is the subject of the three previous columns: problem definition, algorithm design, and data structure selection. If you perform those tasks well, writing correct code is usually easy”. Jon Bentley is also a strong advocate for concision, which requires more thinking ahead: “Why do programmers write big programs when small ones will do? One reason is that they lack the important laziness … they rush ahead to code their first idea”. He also advocates minimalism: “The cheapest, fastest and most reliable components of a computer system are those that aren’t there. Those missing components are also the most accurate (they never make mistakes), the most secure (they can’t be broken into), and the easiest to design, document, test and maintain. The importance of a simple design can’t be overemphasized”.

Although these two books are heavily focused on efficiency (half of the essays mention efficiency as a key quality of beautiful code), there is a tension between writing readable code and writing efficient code. Sometimes there is an “aha” clever trick that makes the algorithm concise AND efficient; this is indeed a moment of beauty, and Jon Bentley proposes a number of such examples, where you think: “this is so much better than what I would have done as my first impulse”. At other times, as I pointed out earlier, the clever trick that makes the code faster is tricky and far from elegant. So, Jon Bentley warns about balance: “Many other properties of software are as important as efficiency, if not more so. Don Knuth has observed that premature optimization is the root of much programming evil; it can compromise the correctness, functionality and maintainability of programs. Save concern for efficiency for when it matters”. These two books speak a lot about algorithm performance, trying to save processor cycles. Today, with powerful CPUs (and GPUs), performance is more often a system performance issue (disk, concurrent access, network usage, …), but some of the key principles for performance analysis and improvement apply at all scales, from programming to system design. As with any capacity planning effort, performance analysis starts with data extraction (and today we have much better tools than the profilers we used 20 years ago) and the creation of performance models to identify the sizing factors. Performance models must include the state of the system (“cold” start versus continuous “hot” operation) and load models (to perform the previously mentioned “stress tests”). In a VUCA world, performance tuning is not simply about meeting the latency/load/availability requirements; it is about delivering robustness to unexpected loads, to growth (hence the need for a prospective analysis of the sizing factors) and to distributed resource failures.
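One of the best-known “aha” examples, which I believe comes from one of these columns, is the maximum-subarray problem: the obvious solutions are quadratic or cubic, while a short linear scan does the job. The sketch below is my own Python rendering of that linear-time idea, not the book’s code:

from typing import Sequence

def max_subarray_sum(values: Sequence[float]) -> float:
    """Largest sum over all contiguous (possibly empty) subarrays of values."""
    best_so_far = 0.0       # best sum seen over any prefix processed so far
    best_ending_here = 0.0  # best sum of a subarray ending at the current position
    for x in values:
        best_ending_here = max(0.0, best_ending_here + x)
        best_so_far = max(best_so_far, best_ending_here)
    return best_so_far

# For instance, max_subarray_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]) returns 187.0.

The whole point, of course, is that the one-pass version is not only faster but also shorter and easier to check than the brute-force loops it replaces.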

 

4. Conclusion

 

The software craftsmanship manifesto talks about delivering value: “Not only responding to change, but also steadily adding value”. However, value is not created by delivering software, but by software being used to produce value. For some of the low-level components described in “Beautiful Code”, this distinction is irrelevant. But for most newer software systems, the user plays a key role and her user experience is critical to actually produce the intended value. This brings us back to the introduction: software craftsmanship is indeed about beautiful code, but it must also focus on reliable system design and excellence in user experience (UX) design. This is a topic for another day, but I leave you with the “mobile user rights” that we cooked up a few years ago at AXA’s digital agency, as a tool to balance a post that definitely weighs on the “geek” side of the scale:

·   I may use my mobile application wherever I am, whatever I am doing; the app is always readable irrespective of the light or the distance from the screen.

·   I can start on one mobile device and continue on another.

·   I will not need to type the same information twice; I know why I am asked to input information, and the act of input is made as easy as possible.

·   I will never experience the app stopping without a reason.

·   I can easily and quickly find what I am looking for; I can easily reuse and share what I find in the app.

·   I identify myself easily, and only when it is necessary.

·   I enjoy a pleasant, simple and playful experience with my mobile app.

·   I do not wait for the app: it reacts fast when I interact with it, and it lets me know that it has taken my input into account.

·   I do not need to adapt my behavior to use the app; the app adapts to me: I expect personalized and unique messages and notifications that are sensitive to context and that I can control.

 

To summarize this post, looking at craftsmanship from the “beautiful code” angle has allowed us to articulate four key aspects of software craftsmanship:

-   The importance of “code elegance”, which is about readability of intent, because it helps code to be shared across time and space.

-   The importance of communities, at multiple scales, from the development team to software craftsmen guilds (including open source communities).

-   The importance of “standards” that codify and transport in “time and space” the “beau geste” (the professionally appropriate way of working; the “good gesture” is a key concept of compagnonnage, the French companionship tradition for learning a craft).

-   The importance of experience to develop Kaizen (in the lean sense; think of the introductory quote from Bob Martin about learning from your own and others’ mistakes) towards the relentless quest for mastery.

 


 