Saturday, December 1, 2012

Lean, Scrum, Agile and Extreme Programming

Two thousand years ago, Epictetus told us “it is impossible for a man to learn what he thinks he already knows”. This principle is still one of the main reasons why it is so difficult to deploy change where new (development) methods are concerned. I have been advocating lean and agile methods for many years, and the most common pattern I hear is: “I know, I do it already” :)
The goal of this post is to focus on the difference between agile in general, SCRUM in particular, and lean software development. I have been speaking about lean software development for three years and I have found that the (legitimate) confusion between lean and agile does not help. I keep hearing “I already know about agile, scrum … or lean”. I can relate to this. I started as a research scientist in the field of programming languages and software engineering twenty-five years ago. I was there (in the US, in the academic community) when agile was born, and I personally knew some of the founders of the Agile Manifesto. I started practicing agile and extreme programming when I returned to France to create the e-Lab. I brewed my own agile philosophy for a small team of “PhD programmers” under the designation of Fractal Programming. Since then, I have joined larger organizations and tried to promote agile methods. At first, I was promoting what I once knew and tried, but I quickly discovered that the agile movement is constantly growing, and that we know much more today than we knew yesterday. Hence the Epictetus quotation.

The topic of mapping the similarities and differences between lean development and agile is quite interesting because the influences and relationships make for an intricate pattern. Agile methods and their various subsequent developments have been implementing “lean principles” for many years. Lean Software Development, on the other hand, inherits from the knowledge built by agile communities. There are already a number of interesting posts on this topic. Matthias Marschall’s contribution is a great synthesis which is very similar to what I’ll write in a few minutes. He shows the strong relationships and the differences, and points out the development-process dimension: lean helps to get a larger perspective. His slideshare is also interesting, but is not structured enough from my “conceptual” perspective, which is why I found it useful to write my own analysis down. Another great contribution, by Cecil Dijoux, points out the influences but stresses the benefits of lean as providing a larger perspective, especially the ability to scale up agile methods to enterprise management and transformation. Cecil talks about Kanban, to emphasize that it goes further than the traditional visual management practices of SCRUM. There are many other examples. For instance, talks about agile methods, such as Laurent Sarrazin’s, make the influence of lean principles obvious. Laurent’s is a great example of a modern talk about agile, which goes much deeper than what I was hearing 15 years ago. Conversely, great pieces about lean development, such as Welovelean’s, are still very close to the founding principles of the Agile Manifesto.

The claim of this post is: there is more to lean software development than a great synthesis of the best agile methods. To make my point, I will briefly summarize the key ideas of the successive steps of the evolution path from Agile, XP and SCRUM to Lean Software Development. I need to stress that the following description is both too brief and incomplete. It is quite succinct because (a) I have already talked about this in previous posts and (b) it is easy to find better references for Agile (the manifesto), XP, SCRUM or Lean Software. It is incomplete by construction, since, to promote clarity, I have only selected three additional ideas/principles when moving from Agile to XP, then to SCRUM, then to Lean Software Development.
Let me then characterize the essence of agile (when it started) as the combination of:
  1. Small teams: a strong emphasis on rich, direct interactions.
  2. Small batches: break a larger project into manageable chunks (sprints), do them one at a time, and constantly re-evaluate what the next chunk will be.
  3. Time boxing: there is nothing like a time constraint to help you focus on what really matters. Deliver less, but deliver on time.
  4. Coevolution of code/design/architecture: the end of the waterfall model; the code and its requirements are built continuously.
  5. Role of face-to-face communication: a key insight of agile is that to break the waterfall model, you need to reintroduce face-to-face interaction and banish the email push, which breaks the responsibility and engagement chains.
  6. User stories: the only way to capture a need is to give it some flesh and make it into a story. Read Daniel Pink’s “A Whole New Mind” to understand why.

There is a lot of lean in these principles and practices. Obviously one cannot reduce agility to this short list of principles; I should also describe a lot of associated practices (supported by tools) regarding the development process. Let me now turn to Extreme Programming, which I will characterize with three additional ideas (or areas of increased emphasis):
  1. Test-driven development: the best way to develop code is to write test cases first.
  2. Sustainable pace: the only way to satisfy a customer in the software world is to last, because software is a “living thing” that evolves constantly. Hence, the rhythm must be set to reach long-term goals and respect everyone in the development process.
  3. Code is valuable: XP was the first method to significantly raise the status of code, through practices such as code reviews, refactoring or pair programming and principles such as “elegant code”, “do it right the first time”, etc.
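The test-driven idea lends itself to a short illustration. The sketch below is hypothetical Python written for this post: `parse_version` and its tests are invented names, not taken from any project discussed here. The point is the ordering: the test exists before the production function does.

```python
import unittest

# Test-first: this test case is written *before* the code it exercises.
# `parse_version` is a hypothetical example function, invented for illustration.

class TestParseVersion(unittest.TestCase):
    def test_nominal(self):
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_malformed(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

# Only once the failing test exists do we write the simplest code that passes it.
def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.split(".")
    return (int(major), int(minor), int(patch))

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestParseVersion))
```

Writing the test first forces the developer to state the expected behavior, including error cases, before any implementation choices are made.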
Here, as well as in the following section on SCRUM, lean is everywhere :) That is, we can see these principles as the application to software development of lean principles derived from the TPS (Toyota Production System). I would then characterize SCRUM with the following additional contributions:
  1. Visual Management: give the team a project room and use the walls to share release notes, the backlog, planning information, etc.
  2. Practices and Rites: SCRUM is a practical method that makes agile easier to learn through its rites, such as standup meetings.
  3. Reflection: SCRUM proposes sprint retrospectives, a great tool to promote continuous improvement. One could say that standing back is part of agile, but SCRUM brings a true emphasis on “doing it for real”.
These twelve sets of practices make a maturity model for SCRUM. To qualify as lean software development, I would add another three steps:
  1. Kanban: the application of visual management to the visualization of process flow and work in process (WIP). Kanban is a critical tool to reduce WIP and to implement pull (just in time), two key lean principles.
  2. Kaizen: the heart of the lean approach towards problem solving. Lean makes problem solving a team activity and a learning tool. There are many associated tools and practices such as the “Five whys” or Toyota’s A3.
  3. 5S and waste removal: sort, clean up, organize, focus on simplicity and writing less code. Some of it is part of XP, but lean goes further, with more tools and deeper practices (e.g. value-stream mapping).
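To make the kanban point concrete, here is a minimal sketch (hypothetical Python; the column names, limits and stories are made up) of how a WIP limit turns a push flow into a pull flow:

```python
from collections import deque

class KanbanColumn:
    """A column on a kanban board with a work-in-process (WIP) limit.

    Illustrative sketch only; the limits are not drawn from any real board."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.items = deque()

    def can_pull(self):
        # Pulling is allowed only while WIP stays under the limit.
        return len(self.items) < self.wip_limit

    def pull_from(self, upstream):
        # Downstream pulls work just in time, instead of upstream pushing it.
        if self.can_pull() and upstream.items:
            self.items.append(upstream.items.popleft())
            return True
        return False

backlog = KanbanColumn("backlog", wip_limit=100)
doing = KanbanColumn("doing", wip_limit=2)
backlog.items.extend(["story-A", "story-B", "story-C"])

while doing.pull_from(backlog):
    pass
print(len(doing.items))  # the WIP limit caps 'doing' at 2 items
```

Because “doing” refuses to pull beyond its limit, the extra work queues upstream where the bottleneck is visible, instead of accumulating as hidden WIP.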

This is a simplified view of lean software (although the combined practices of the 15 previous bullet points represent a fair amount of commitment); see the seven principles of Mary and Tom Poppendieck, or my own description of the Lean Software Factory, for more details. A great practical introduction to lean software development is “Lean from the Trenches” by Henrik Kniberg.

Another key evolution of the past 20 years is the scope of agile/lean development methods, which has grown from software development to product innovation. In the spirit of keeping this post simple, I will consider the scope as four successively embedded steps (from the smallest to the largest):
  • Software development is the original scope of agile methods. It obviously includes testing, as well as the proper code management & configuration tools to make coding as efficient as possible. See my reference to Curt Hibbs in my previous post on software factories.
  • The next step is to move to continuous integration. The ability to generate a “fresh product” every night (for instance) that can be fully tested, with as many automated tests as possible, yields a completely different culture and level of innovation. This requires a significant amount of tools and training. This is a case where you need to change the tools in order to change the culture.
  • The third step is continuous deployment, which is best illustrated by DevOps. I have mentioned DevOps already; the big step is to include operations in the lean/agile process. Moving from CI to CD requires more tools, more automation … and a big culture change, since it requires moving from “agile in the lab” to “agile in the field”.
  • The last step is continuous product improvement and innovation, which is beautifully explained in The Lean Startup from Eric Ries. In addition to continuous deployment, we need to gather continuous feedback (e.g., with A/B testing) and re-inject this knowledge into the development cycle. A great source to better understand lean startup principles is Ash Maurya’s blog, and these principles do not apply to startups only!

Lean Software Development is the combination of both dimensions: the depth of lean practices and the scope. I could easily derive a maturity index from this: the maturity level would be the product of (a) how many of the 15 principles are really in action, times (b) the scope of the lean/agile process. I won’t go in this direction because lean software is much more than a development method; it is a work philosophy, in the “Toyota Way” sense. It is mostly a work philosophy that is built on the motivation and satisfaction of the software producers. A key insight from “Implementing Lean Software Development” is that one must move away from a project-oriented culture based on contracts towards a product-oriented culture based on collaboration. A wonderful illustration of the scope of the lean transformation is the great book “Les Géants du Web”, just released by Octo. Its subtitle is “culture / practice / architecture”; most of what I have just described is beautifully explained and illustrated there. There is lean and agile everywhere in that book, but this is a book for everyone, not reserved for managers or software developers. The book is about what makes the best “Web companies” successful, from a culture, practice and architecture point of view. However, the similarity with the lean philosophy is striking.
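For illustration only (the index is sketched and then deliberately set aside in the text), the product of depth and scope could be written as:

```python
def maturity_index(practices_in_action, scope_level):
    """Illustrative maturity index: depth of lean practice times breadth of scope.

    practices_in_action: how many of the 15 principles are really in action (0-15)
    scope_level: 1 = software development, 2 = continuous integration,
                 3 = continuous deployment, 4 = continuous product innovation
    The scale is invented for this sketch, not a calibrated metric.
    """
    assert 0 <= practices_in_action <= 15
    assert scope_level in (1, 2, 3, 4)
    return practices_in_action * scope_level

print(maturity_index(12, 3))  # → 36
```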

Sunday, September 16, 2012

Systemic Simulation of Smart Grids : First Results

The month of August gave me the opportunity to get back to my computer and to resume my programming projects. I have been able to complete the first step of S3G (Systemic Simulation of Smart Grids), which I started last year. I also had the opportunity to attend the EU-US Frontiers of Engineering Symposium, where I presented my project and received encouraging and interesting feedback.
The objective of S3G is to simulate the production and consumption of electricity over a long period of time (15 years). The S3G model, which is illustrated by the following figure, may be summarized in five parts:
  • Energy demand: for each city, energy demand is generated from an hour-by-hour and day-by-day template, adding some random variation (the extent of which is a model parameter) together with a city-specific variation. This number is then reduced by the amount of “negaWatts”, computed from the total amount invested by the city. The model uses a ratio obtained from a concave-increasing function of the investment.
  • Dynamic Pricing: both suppliers and operators use a simple affine pricing model, with a constant price when the demand is less than a “base power”, and a linear formula when the demand is higher.
  • Production: suppliers use nuclear power according to a planned schedule and adjust to the resulting demand with fossil plants. Operators always use their green power (storing it in the “buffer” or reselling it when there is too much of it). They adjust to the city demand with their own fossil plants and wholesale electricity from suppliers, at the lowest marginal cost.
  • Consumption: the actual electricity consumption for each city is the demand, minus “shaving”, which is obtained by applying an S-curve to the sale price.
  • Market-share: for each city, the market balance between the national supplier and the local operator is determined yearly using another S-curve.
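The pricing and shaving parts of the list above can be sketched in a few lines of Python. All numeric values below (base power, slope, pivot price, sensitivity, the 30% shaving cap) are illustrative stand-ins for the model’s unknown parameters, not values taken from S3G:

```python
import math

def affine_price(demand_mw, base_power_mw, base_price, slope):
    """Affine pricing as described in the model: constant price up to the
    'base power', then linear in the excess demand."""
    if demand_mw <= base_power_mw:
        return base_price
    return base_price + slope * (demand_mw - base_power_mw)

def shaving_ratio(price, pivot_price, sensitivity):
    """S-curve (logistic) mapping the sale price to the fraction of shavable
    demand that is actually shaved; pivot and sensitivity stand in for the
    model's unknown demand-response parameters."""
    return 1.0 / (1.0 + math.exp(-sensitivity * (price - pivot_price)))

demand = 1200.0  # MW, one city, one 3-hour period
price = affine_price(demand, base_power_mw=1000.0, base_price=30.0, slope=0.05)
# Assume at most 30% of demand is shavable via demand-response.
consumption = demand * (1.0 - 0.3 * shaving_ratio(price, pivot_price=35.0,
                                                  sensitivity=0.2))
print(round(price, 1), round(consumption, 1))  # → 40.0 936.8
```

With these assumed parameters, a 1200 MW demand prices at 40 €/MWh and the S-curve shaves consumption down to roughly 937 MW, showing how higher prices feed back into lower consumption.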

This S3G model is both simple and complex. It is simple because it is based on a handful of equations, resulting in a simulation code that is 500 lines long. It is a complex model for two reasons. First, there are multiple feedback and interaction loops that make it difficult to analyze how the system will react to perturbations. Second, there are many unknown parameters in the model (such as market sensitivity, demand-response behavior, negaWatt capabilities, etc.).

The following figure shows, on the right, a rough summary of the simulation loop that is run for each time period (3 hours, hence 8 times per day). On the left, it shows the three main GTES generic procedures.
I have already spent some time on computational experiments that will be presented at CSDM in December. A GTES simulation run returns the average and standard deviation of a few key business parameters, as well as some indication of the Nash convergence. Giving averages and a few deviations is a poor summary of the rich data gathered during the computational experiments, but the goal is simply to “get a feeling for what is happening”, as opposed to producing a forecast. The current set of experiments is designed to understand the main issues that were exposed in the previous post. I have defined a fictional country, somewhat similar to France, decomposed into 10 regions/cities. I have run 8 experiments that may be defined as follows:
  1. The “default” is a reference point, from which “what-if” sensitivity analysis is made. The economic parameters are set in such a way that alternate operators start with a 20% market-share and should be able to increase it if they demonstrate a better management of variability.
  2. The second experiment raises the variability of energy consumption (globally), while the third experiment raises the local variability (the cities differ more from one another).
  3. The fourth experiment doubles the fossil energy price (gas and coal). In the default scenario, it is randomly drawn between 20€ and 40€/MWh.
  4. The fifth experiment imposes a 5% reduction of the nuclear assets for the supplier during the first 5 years.
  5. The sixth experiment sets a carbon tax at 100€/t, the proceeds of which is used by the “regulator” to subsidize green energy investment.
  6. The seventh experiment explores the impact of overall demand variation in the next 15 years. The model assumes a constant growth/decline of electricity demand which is expressed as a percentage. This experiment plays with different values to see if demand impacts the profitability of smart grids.
  7. The last experiment is a small variation of the first one, where wholesale prices are more rigidly constrained. During this set of experiments, I considered that the constraints regulating the supplier’s wholesale prices are fixed. An interesting next step will be to make them a strategic parameter for the regulator (which is a better representation of reality and makes for a more interesting game).

I still need to do some additional tuning before presenting my results at CSDM. I will then make my slides available on this blog. For one, I need to run longer simulations (with larger samples) to ensure that the Monte-Carlo sampling is stable. I also want to introduce better local optimization meta-heuristics, such as Tabu search. One of the key issues for smart grids is the pricing model, both for the operator and the supplier. The more complex the pricing model, the better the actor may play with the demand-response concept. However, a more complex pricing model requires better optimization techniques to ensure that the automatic computation from GTES (where each actor “learns” its “best response”) is relevant.

This being said, here are a few findings that may be drawn from the hundreds of runs made with the S3G model, and that are worth sharing:
  • There is a systemic benefit of distribution and autonomy to cope with variation. This is shown by the second and third experiments. It is a “subtle” variation (small effects), which means that the economics of the local operator are dominated by its capacity to operate at much lower customer-management costs than the supplier.
  • CO2 tax increases play a very small role, and one that is difficult to anticipate, since such a tax both favors the local operator (it funds green subsidies) and the supplier (it raises the gap between fossil and nuclear).
  • “De-nuclearization” is a favorable scenario for smart grid operators, as are most regulations that are adverse to the supplier. The obvious limitation is the resulting price increase that reduces the total economy output (and the country’s competitiveness).
  • The “community advantage” (that is, the ability for a local operator to better manage the demand-response loop because it is “closer” to its end customer) is marginal, and it is quite unclear if the payback from demand-response management is enough to sustain the operator’s business model.
  • Investing in local storage is never an interesting option (at current prices). We needed to slash the price by over an order of magnitude to see a viable payback in less than 10 years.
  • There is a clear competition between local operators and suppliers. The learning component of GTES makes for “agile” players who react closely to each other’s signals. The pricing structure plays an important role (we have only explored a simple variable pricing scheme). A logical consequence is the importance of regulation.
  • The results are sensitive to the strategies of the players. A next step for S3G is to build a “strategy matrix”, which is a tabular “what-if” sensitivity analysis where we see what happens if the goals of the players are changed.

Thursday, July 5, 2012

Lean Software Factory

This post is a follow-up to an invited talk that I delivered at CESAMES. That presentation, which is available on the widget on the left, talked about complexity, architecture and lean software development. It started from the question – its title – “How to design information systems?” and ended with an introduction to lean software development. Today I’ll go further and explore the relationship between the (software) product and the (software) factory. The key idea is derived from the Toyota Way: you need a great factory to manufacture a great product. Not surprisingly, this will lead to the concept of a “Lean Software Factory”. This post is organized as follows. The first section talks about what makes a good software product. I will focus mostly on software development here, but most of what follows is equally relevant to information systems. The second section deals with the concept of a “software factory”. Looking at the “software factory” means taking the software manufacturing process into account, and not only the final product. It is also a metaphor that carries the need for automation, engineering methods and industrial discipline. The last section is about “lean software factories”, which add agile development practices and wisdom to the strengths of software factories, together with a culture of continuous improvement geared towards excellence that is inherited from lean management.

1. What makes a good SW product?

There are many expected qualities, and assessing a piece of software is a multi-dimensional task. Here I will only focus on three of them, because they are both very popular (all books and papers related to software engineering discuss these three) and very important to the value that is produced through a software product:

  • Modularity – “the degree to which a system’s components may be separated and recombined” (Wikipedia) … or the capacity of the architecture to maximize in its decomposition the independence of subcomponents. Modular systems are easier to maintain and to evolve. In the world of software, modularity translates into the minimization of “impacts” when something must be changed.
  • Versatility / evolvability – “the property of having many abilities”, that is, software that can serve many purposes, together with “the capacity of adaptive evolution” – software that may evolve rapidly to support new functions. This combination is often called “agility” in the software world.
  • Openness: for a piece of software, being open may mean many things. It may mean that it exposes APIs (application programming interfaces) so that other software components may interact with it. It may also mean that the code is open to all for scrutiny and collaborative improvement, as in “open source”. It may mean that the piece of software is built as a platform, which is designed to be enriched by subcomponents written and proposed by other developers. Writing software as a platform is at the heart of successful development strategies, such as Google’s or Facebook’s.

These three qualities form a deeply interrelated network (each influences the other). They are also notoriously difficult to achieve, especially in a rapidly changing environment. These qualities are resulting properties of both:

  • Software architecture, that is, design properties which explain how the software product is organized. This part is well understood, albeit difficult: this is the art of software architecture. If I follow François Jullien’s distinction between Greek and Chinese strategy, architecture is clearly the Greek, top-down, left-brain, goal-oriented part of the strategy for producing modular, agile and open software.
  • The way the product is made (the software development culture). This translates into “genes” which are embedded into the software product. According to the same distinction, software development culture is the Chinese, bottom-up, right brain, emergent strategy for producing modular, agile and open software. This is less understood and even less formalized, it is a form of wisdom transmitted in an oral tradition. It is also controversial since many still believe that once the specification is completed, it doesn’t matter who writes the code …

I am certainly not one of them :) On the contrary, I have found over the years that great software products most often come from companies with great software cultures. It is not a new thought either, even for me. In my first book “Urbanisation, SOA & BPM”, I noticed that modularity and agility are properties that describe the whole lifecycle and not only the development part. Agility in code writing does not do much for you if you cannot run your tests in an agile mode (Chapter 6). Similarly, modularity must translate into modular testing (running fewer tests), modular deployment (deploying fewer components) and modular maintenance (fixing problems faster because the investigation is limited to a smaller area).

The remainder of this post is devoted to “the way software products are built”, partly for lack of time but mostly because my previous writings and books have been about software and information systems architecture. You may read the CESAMES slides to get an overview of achieving modularity, agility and openness through architecture. For those of you who don’t read French, here is a “101” summary:

  1. Model-driven Architecture: architecture should be based on models, and part of the development should be derived with semi-automated methods. Models are another form of “genes” for a software product: careful design at the model stage goes a long way to embed “future-proofness” (using scenario/what-if methods).
  2. Data models are critical: from semantics, conceptual analysis and ontology to object lifecycle management. Data architecture is the most useful tool to prevent the data-distribution difficulties which plague most large-scale, heterogeneous information systems.
  3. Layered Architecture: defines abstraction levels and reduces management complexity, a time-proven “good practice” of software architecture.
  4. Service-Oriented Architecture: defines the service structure so that reuse and sharing are made easier. SOA is also the proper tool to align the different stakeholders so that software fits the enterprise’s strategy. A service architecture is a reification of functional architecture (a service may be seen as the combination of a function – what it does – an interface – how to invoke it – and a contract – its so-called “non-functional” properties), which embodies abstraction and encapsulation.
  5. Process Architecture: defines the “composition grammar” through a recursive/fractal analysis of interactions. Process architecture is to “Business Process Engineering” what SOA is to “writing down a list of exposed services”: there is much more to it than listing and formalizing; it is about identifying patterns and sub-patterns, and extracting those patterns which will prove stable through time. It is more an art than a science.
  6. Event-Oriented Architecture: identifies “events” as pivots for interaction (which is a subpart of process architecture and a foundation for SOA). Publish/Subscribe is another time-proven pattern for building modular designs, which is built on top of a shared/structured event catalog.
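The publish/subscribe pattern mentioned in the last bullet can be reduced to a few lines. The sketch below is hypothetical Python; the event name and the “modules” are invented for illustration:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe sketch of event-oriented architecture:
    producers and consumers only share a catalog of event names,
    never direct references to each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def publish(self, event_name, payload):
        # The publisher does not know who listens: modularity by decoupling.
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)  # e.g. a billing module
bus.subscribe("order.created", lambda p: None)   # e.g. a shipping module (stub)
bus.publish("order.created", {"order_id": 42})
print(received)  # → [{'order_id': 42}]
```

Adding a new consumer never requires touching the producer, which is exactly the kind of impact minimization that modularity asks for.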

2. Software Factories

Focusing on the way software products are built leads logically to the concept of a “software factory”. The following illustration shows three ways of looking at a software system (or an information system):

  • The first step (left-most) is to look at the technical system (leading to what we call IT: Information Technology). This view is concerned with practical components such as databases, software code and computing resources. Processes are defined in the “computer science sense”.
  • Then we step back and add the human into the picture, mostly the user but also other stakeholders (management, administration, strategy,…), leading to Information Systems. IT/IS distinction is well established in information systems literature. This view adds – rightly so – usage as a key dimension (following the very profound statement that a software application that is not used is of little value), and shared enterprise artifacts such as models and business processes.
  • The topic of this post is to step back one more time and look at the “production process” (the “factory”) that continuously builds and updates the information system. On the right-most part of the illustration, we see both the information system and the organization (usually the ISD, Information Systems Division, in a large company) that builds and runs it. I have separated the part that builds (dev: software development) from the part that runs (ops: operations) as a forward reference to DevOps.

Because a software factory handles ideas and digital information, it could be argued that it is an autopoietic system (this is not a car factory; there is an endless supply of ideas and of disk space to write new code). Autopoiesis is the capacity of a living system to continuously develop itself and to transform the process that creates the system, a process which is itself part of the system. Autopoiesis generates emergence (of properties) and what may be seen as self-organization. This thread of thought is quite interesting, because it helps in understanding the emergence of our three desired properties; however, I will leave it for another day, to avoid adding confusion to conceptual complexity.
To return to more practical grounds, the goal of a software factory is to deliver “good software products”, rapidly and efficiently. Because we now have the advantage of having read “The Lean Startup” (if you have not, I would urge you to do so; it’s the best way to understand what is truly expected of a great software factory), we know that there is another dimension to SW factory performance: the ability to listen carefully to the product’s users and react rapidly to their expectations. This is why Eric Ries advocates for MVPs (Minimum Viable Products): the SF (software factory) must be able to crank out simple products very rapidly, which are exposed to customers and modified according to their reactions (this loop is shown on the illustration).
A lot of things are known about how to build a “good” SF. Sharing a reference with the previous post, I picked the following recommendations from Curt Hibbs’s book “The Art of Lean Software Development”:

  • “Practice 0”: Source Code Management and Scripted Build. This is where the word “factory” becomes self-evident: one must use tools (such as source code management) and industrial discipline to manufacture software. There is a “Maslow pyramid” of skills necessary to reach continuous deployment; a scripted build is the first step.
  • “Practice 1”: Automated Testing, because testing is crucial for quality, as in any industrial process (automated testing means more testing), but also because we want to run tests as early as possible, following the principles of extreme programming.
  • “Practice 2”: Continuous Integration, which means that the product is built continuously in an iterative manner, instead of following the traditional waterfall model. Continuous integration makes it possible to perform integration testing as early as possible; it is also a way to “see the product” and ensure that the factory is united behind a common goal.
  • “Practice 5”: Customer Participation, which is what makes the continuous improvement loop pictured in the previous figure possible. Customer participation is at the heart of agile methods.

There is much more in Hibbs’s book, as we shall see later, but this is a set of sound, time-proven practices. Another great source of inspiration for designing a software factory is the previously mentioned DevOps “method”. Many excellent software companies, such as Facebook, are making references to DevOps, which is not a coincidence. Following Octo’s lead, I would summarize DevOps’ contributions with two principles: “Infrastructure as code” and “Continuous deployment”. Infrastructure as code is the ability to address all computing resources (either cloud computing or a regular data center) through APIs and software. It follows the principles of “Practice 0” (tools & automation) and gives true flexibility and agility to the software factory. It also changes the “ops culture” and prepares for the next step. Continuous deployment is the ultimate step of continuous development/testing/integration/deployment. It helps shorten the continuous improvement loop from Eric Ries and is necessary to move towards a “lean software factory”, as we shall see later. Taking a look at Facebook’s development and operations culture is worth the effort. Facebook has turned large-scale continuous deployment into an art. I recommend watching Chuck Rossi’s tech talk on “Pushing millions of lines of code five days a week”.
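“Practice 0” is easy to illustrate: the build is a script, not a sequence of manual steps. The sketch below is hypothetical Python written for this post; the two steps are placeholders standing in for a real project’s compile and test commands, and the fail-fast structure is the point:

```python
import subprocess
import sys

# Minimal "scripted build" sketch: the whole build lives in one script, so any
# developer (or a CI server) can reproduce it with a single command.

def run_build(steps):
    """Run build steps in order, failing fast on the first red step."""
    for step in steps:
        result = subprocess.run(step)
        if result.returncode != 0:
            return False  # stop the line: do not proceed past a failing step
    return True

# Example: a two-step "build" using the current Python interpreter as a
# stand-in for real compile and test commands.
ok = run_build([
    [sys.executable, "-c", "print('compiling...')"],  # placeholder compile step
    [sys.executable, "-c", "print('testing...')"],    # placeholder test step
])
print(ok)  # → True
```

A CI server then only has to invoke this one script on every commit, which is the bridge from “Practice 0” to continuous integration.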

3. Lean Software Factory

The idea of a “lean software factory” comes from mixing “lean software development” (cf. the previous post) with “software factory”. That is, adding to what we just saw a few other principles which are taken from Agile methods and from the Toyota Production System (cf. Hibbs’s book). This does not mean that all projects in a lean software factory should follow a SCRUM method (taken here as an illustrative example of an agile method). Although this is debatable, I believe that some projects are still better run with a classical V-cycle model (I am not the only one). There is no magical formula to define the “SCRUM/agile boundary”, but the V-cycle still makes a lot of sense for “back-office projects” with stable and complex specifications (such as a billing system). On the other hand, agile methods are better suited to front-office projects (where the human in the loop plays a bigger role), where detailed requirements are difficult to produce because the need is ill-defined or changing rapidly.

To keep this post short, I will summarize the concept of a “Lean Software Factory” with four “principles”. Each of them would require a full post, which I may write later on. Each “principle” is in fact a set of practices, since “lean is something you do, not something you talk about”. These principles add to the previously developed concept of a “Software Factory”. The first principle captures the essence of “agile methods” and, therefore, only applies to agile projects, whereas the rest apply to both forms of software development (that is, including V-cycle development).

  1. The first principle is to apply an agile method such as SCRUM (backlog + small batches + testing as early as possible + continuous build) whenever the end-user has an important stake in the software product. Agile methods fit nicely with the principles of a software factory as described in the previous section. They emphasize customer participation and continuous integration. The lean “touch” adds the concept of small batches (called sprints in SCRUM) and the principle of testing as early as possible, with the goal of producing the right code the first time. Building a product one small step at a time, with regular opportunities to adapt to environment feedback, is a key idea which is well developed in “The Lean Startup”. Another key inspiration from lean is to reduce “lead time”, which translates into short sprints but also into time-constrained steps, including design. This last technique also works within the V-cycle: using time-constrained workshops to jointly achieve the requirements and design phases. Those familiar with Google’s software development values will recognize the “lean influence” (develop quickly, only what is necessary, remove waste).
  2. The second principle is to emphasize autonomous teamwork. Teamwork is critical for all kinds of projects and for most activities. A key teamwork moment from the lean culture is kaizen, which is used both to solve problems and to search for continuous improvement (two sides of the same coin). Following “the Toyota Way”, all ideas are welcome and everyone’s contribution is useful in a LSF. There is a lot to be said about kaizen (which I won’t do today): it goes further than the continuous improvement culture that one may find everywhere (such as the feedback loop in the SCRUM method). “Lean culture” brings a systemic approach to problem solving, including the search for root causes with the “5 Whys” and the construction and follow-up of action plans, such as Toyota’s famous “A3”. Autonomy is a systemic condition for kaizen in particular; more generally, empowerment is necessary to achieve performance. One of the best books for understanding kaizen and autonomous teams is “The Lean Manager” by F. & M. Ballé.
  3. The third principle is to leverage visual management, which often takes the form of a project/product room. Visual management has multiple benefits, the first of which is synchronicity, that is, working on the same clock. In the world of an industrial factory, the lean concept that embodies synchronicity is “takt time”. Although takt time also plays a role in a software factory, synchronicity is mostly about working at the same pace, without waiting. Dynamic wall-planning is the best tool to ensure synchronicity, together with frequent stand-up meetings in front of the said planning, where the whole team keeps track of progress and difficulties. Visual management is also a tool for systemic training. Toyota encourages its workers to use as many schematics and illustrations as needed to share the detailed functioning of technical processes or pieces of equipment. This translates easily into the world of software: using the walls to share architectural insights with the team is an efficient way to contribute to continuous learning, as well as to increase efficiency and prevent errors. Visual management is also used to visually control “work in process” (WIP). Reducing WIP is another key tenet of lean management that applies very nicely to software development. The practice of visualizing WIP with small cards (post-its) that are posted on the wall according to their status in the development process is called Kanban in the lean IT world, in reference to the Toyota pull system. Kanban / JIT (just-in-time) in the world of software is not as strictly enforced as in industrial manufacturing. Still, the “pull” chain of control, where every step in a process “serves” the following step (that is, provides what is needed at the right time, according to the availability of the receiving step), is a great tool to increase efficiency and avoid an “overload” of tasks that are waiting to be completed.
The “kanban” practice has two benefits: making each member of the team aware of the availability of those who will use their work later on, and reducing overloads and piles of unfinished work through an implicit capacity limit on what is accepted into the development process. The emphasis is on “no waiting”, to improve both agility and quality. Asynchrony is the best way to lose “context” when control is handed from one player to another, and to produce misunderstandings; this is explained in my last book, and it is especially true for software development, which is a complex task where one quickly loses focus once one’s work is completed.
  4. The last principle states that a Lean Software Factory is a place with “love for the product”. This supposes two things: knowing, understanding and “loving” the customer on the one hand, and loving software code on the other. The love of code is itself a multi-dimensional mix of culture and practice. The practice part draws on extreme programming (code reviews, pair programming) and the software factory (cf. previous section) with a “lean touch”. Mary Poppendieck explains how the 5S of lean may be applied to the practice of source code management. Lean practice also translates into writing less code (see the previously mentioned book by Curt Hibbs). The culture part is a mix of aesthetics (writing elegant, minimal code that one is pleased to show to the other members of the team) and discipline (following coding styles and guidelines, appropriate density of comments, etc.). This is very close to the values of open source software; one could say that a lean software factory should write its code as if all of it were going to be open source. We are getting back to the opening argument of the introduction: designing open software is a state of mind, hence it should be supported by the culture. There are two reasons why “the love of code” is emphasized as a key value. The first is an application of what we learn from Toyota: to build a great company, the employees need to love the product they are making. For Toyota it is cars; in a software factory, it is code. Lean management is all about professional pride, technical skills and craft. A lean software factory is not about indistinguishable “man-days” that may be outsourced to a lower-cost country. The second reason is that code is a “live object”, with a renewal rate which is getting higher as the world becomes more complex (Petra Cross from Google says that 50% of their code changes every month).
Code that grows iteratively needs to be re-factored regularly (iteration yields accumulation); like a garden, it needs constant care and attention.
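The “pull” mechanics described in the third principle can be sketched in a few lines of Python. The column names and WIP limits below are illustrative, not a prescription; the point is simply that a card only moves downstream when the receiving step has capacity, which is what caps the pile of unfinished work.

```python
from collections import deque

class KanbanColumn:
    """One step of the development process, with a WIP limit."""
    def __init__(self, name, wip_limit):
        self.name = name
        self.wip_limit = wip_limit
        self.cards = deque()  # post-its currently in this column

    def has_capacity(self):
        return len(self.cards) < self.wip_limit

def pull(source, target):
    """Move one card downstream, but only if the target step has
    capacity. Returns the card, or None when the target is full:
    the upstream step must wait instead of piling up work."""
    if source.cards and target.has_capacity():
        card = source.cards.popleft()
        target.cards.append(card)
        return card
    return None
```

For example, with a “dev” column limited to two cards, a third pull from the backlog returns `None` until one of the two cards in progress has itself been pulled further downstream. The waiting is visible on the wall, which is exactly the signal kaizen feeds on.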
