Sunday, April 27, 2014

Software Ecosystems and Application Sustainability


Today’s post is a set of simple, yet somewhat deep, thoughts about the systemic nature of different ecosystems related to software. I was trained in the 80s to think about software costs in a “traditional software engineering” manner, using KPIs, metrics and a spreadsheet (with cost models that were popularized by Peter Keen in the 90s). Somehow, this shows in my second book, where one may find references to the classics (Barry Boehm, Capers Jones, etc.) in the bibliography section. What characterizes this way of thinking is that it is a static approach (even if the system changes, one thinks about a “snapshot” taken at a given time), controlled (one assumes that all stakeholders cooperate under a common control) and global (the cost model operates on what is thought to be a complete picture).
Life in this century is different when it comes to software. Software is a “live thing” (in the sense that it constantly evolves to adapt to its environment), mostly distributed, with a large number of stakeholders whose strategies escape the control of the software developer. Hence static should become dynamic, controlled should become collaborative and global should become distributed. The dynamic and complex relationships between the stakeholders, and between the various distributed players who contribute to building a piece of software, yield the “ecosystem” label. This word is borrowed from biology and is a signature of complexity.

This observation is actually one of the reasons for the title of this blog, “Biology of Distributed Information Systems”. It helps to think about software and information systems by borrowing concepts from biology and ecology, and it is definitely necessary to switch from a static to a dynamic analysis.

This is a first post on this topic, so I will keep things simple (hence somewhat incomplete and arguable), and focus on three ecosystems:
  • The OS, platform and application ecosystem
  • The open-source ecosystem
  • The application developer ecosystem
I must apologize in advance to real software experts :). First, this is a post intended for readers with no specific software skills or knowledge. Second, I will reason in an “abstract category” way that will not dive into interesting but difficult distinctions. For instance, in this post, an “app” is a piece of interactive content, whether it is a “true application” written for a smartphone, a simple HTML page, an HTML5 page decorated with JavaScript or a hybrid mobile app. For this first post, I am aiming at a “big picture from 10,000 feet up”.


1. The Global Software Ecosystem

The starting point of the argument is the need for software that evolves constantly. You may accept this at face value because it is a commonly heard argument. If you need convincing, the need for constant evolution comes both from the technology (the “what is possible today?” perimeter changes constantly) and from the users. The complexity (i.e., richness) of software usage today means that the “user is in the driver’s seat”. That is, software needs to be co-designed with users, which is one of the principles of Lean Startup (to name one reference; hundreds would apply here). This leads to an incremental model, which in turn requires a (much) faster code production rate. I will assume that you buy this argument, since this is not the topic of this post, and it is a fairly common assumption.


From this we derive two key consequences for software in the 21st century (as opposed to the century when I was trained as a software developer):
  • Much more innovation is needed, which requires the help of an open innovation model. This leads to the concepts of platforms, APIs and apps.
  • Much cheaper software production is needed, which itself requires a new level of sharing/reuse, based on a common/universal software architecture.
Software productivity, as defined by the cost to produce a function point, is improving slowly. This is a topic which I have addressed in depth in the previously mentioned book. I have a vested interest in this question since I started my career as a computer scientist trying to build tools (languages) that would increase this productivity significantly. It turns out that the world needed a much more efficient way to reduce software development costs, as we just saw, and has found it through massive reuse, thanks to a common software architecture:
  1. Open OS: open operating systems (Linux, Android, etc.) have become, thanks to open innovation in the form of open source, massive repositories of reusable value. For reasons that will become clear in Section 3, the world needs as few of those as possible.
  2. Platforms: on top of the OS, platforms have emerged. One may think of the most common open source tools such as Apache or MySQL, the GAFA platforms, or the web browsers. The rise of platforms over the last 20 years is coupled with the rise of APIs (Application Programming Interfaces) and the associated technologies (XML, Web Services, REST, JSON and the like).
  3. Apps (interactive content): this is what the end user sees and interacts with. The combination of SDKs and platform APIs, together with open-source libraries, has made the production of apps orders of magnitude more efficient than when I started writing code 30 years ago (a trivial code sketch follows this list).
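
To make the reuse argument tangible, here is a deliberately tiny sketch of an “app” endpoint, written with nothing but the Python standard library. The /hello route and its payload are made up for the example; the point is how much of the work (TCP/IP, HTTP parsing, JSON encoding) is inherited from the open OS and the open-source layers underneath, leaving only a few lines for the developer.

```python
# A minimal "app": a JSON API endpoint built only on the standard library.
# The /hello route and its payload are invented for illustration.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hello":
            body = json.dumps({"message": "hello, ecosystem"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Try it with: curl http://localhost:8080/hello
    HTTPServer(("localhost", 8080), HelloAPI).serve_forever()
```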

Please note that the word “platform” is usually ambiguous: it may mean a cloud/service platform (a back-end platform that serves a front-end app) or an open back-end that collaborates with many apps (or other service platforms) through APIs. Here I use platform in the second sense; the first is always included in the “app” perimeter because of “device agnosticism”. That is, to let the end user pick whichever device is best suited to her current context, each mobile app must have a dual cloud service platform. So, in this post, each “app” comes with an associated set of back-office/cloud services, and I reserve “platform” for the implicit open innovation approach (cf. “L’âge de la multitude”).


2. Critical Mass and Software Usage 


Even if software development is incremental, launching a successful product requires building a “critical mass” of value before one may start the “lean startup” positive feedback loop of co-creation. This is the “V” in MVP (Minimum Viable Product): there is a threshold of value brought to the user that one must pass before the percolation model of viral adoption (helped by proper marketing) may kick in. The analogy with living organisms (“SW as a living thing”) is relevant here: software requires growth, constant change and a sustainable equation of user growth. The MVP aims to reach the tipping point at which viral adoption becomes sustainable (what Eric Ries calls “getting traction”).

The “critical mass of value” that is required for this tipping point varies considerably according to the pain point that the piece of software is trying to alleviate and according to the current state of the art. It may be measured in terms of function points (how rich an experience is necessary) and social weight. The complexity of modern experiences comes from their social nature (if you think about it, most apps on your smartphone nowadays have a social component). Hence installing a new habit requires fighting against Metcalfe’s Law, and displacing a previous social usage requires even more effort. The value critical mass may be large, which explains why some legacy Microsoft products such as Word, which I am still using to write technical papers, have not been easily replaced by open source alternatives.

To reach this “value critical mass”, someone must invest an initial, significant software development effort. In a complex world, where the risk of failure is high, one must reduce this initial software development cost, as we explained earlier. This is also a signature of the complex environment we live in: we must switch from ROI (return on investment) to the affordable loss principle.
The percolation model shows why the ROI principle is no longer relevant: it is very hard to predict how well a successful MVP will percolate. The world is full of software startups with amazing valuations because their app found its way to massive deployment and usage. But it would be very difficult to predict such success a few years earlier, when the MVP was still a prototype.
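
To make the tipping-point argument concrete, here is a minimal toy model (my own sketch, with made-up numbers, not a model from the lean startup literature): each adopter invites a few people per period, the conversion probability grows with the product’s intrinsic value and with the installed base (a crude Metcalfe-like effect), and a fixed fraction of users churns. Below some value threshold the MVP fizzles out; above it, adoption percolates.

```python
# Toy percolation model of MVP adoption (illustrative parameters only).
def simulate_adoption(value, population=10_000, seed_users=50,
                      periods=52, invites_per_user=3, churn_rate=0.05):
    """Number of adopters after `periods` for a product of given value in [0, 1]."""
    adopters = float(seed_users)
    for _ in range(periods):
        network_effect = adopters / population          # Metcalfe-like boost
        p_convert = min(1.0, value * (0.2 + network_effect))
        reachable = min(population - adopters, adopters * invites_per_user)
        adopters += reachable * p_convert - churn_rate * adopters
        adopters = max(0.0, min(float(population), adopters))
    return round(adopters)

for v in (0.05, 0.08, 0.20):  # below, near and above the (toy) tipping point
    print(f"value={v:.2f} -> adopters after a year: {simulate_adoption(v)}")
```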

The affordable loss approach means reducing the development cost to something that “you may afford to lose”. This means leveraging the previously described layered architecture and, most of all, leveraging the strength of the open source community. Open source software is a machine that constantly churns out software platforms that start their own journey towards fame and critical mass. The quality of open source software is directly related to its adoption (because good software is built incrementally through feedback, a key axiom). Adoption is inversely proportional to genericity, hence using open source software is a “connected art”: one must understand the communities’ sizes and dynamics to select “the pieces of the puzzle”. Open source software yields by construction a nested/layered structure of software libraries, with a combination of very high quality, stable platforms for the common needs and more experimental gems with specific capabilities. This is why the word “ecosystem” is so relevant to open source software. One should not think of a catalog of free software, but of a nested hypergraph of communities, where a collaborative price must be paid. This price is measured in (participation) time and (code) sharing. This represents a culture shift for most software organizations, but the efficiency of those who “play the game right” is such that it becomes the only game worth playing.
 
This is just a hint of what a proper open source strategy should be, since there are so many aspects that I am not touching on here. Open source is not only about software libraries; it is also about development methods, tools and processes, cloud computing and hardware, to name a few. As a follow-up, I would suggest reading Octo’s great book “Les Géants du Web”, taking a closer look at DevOps, or leveraging the value that is found in the Open Compute project.


3. The App Developer's Equation



This last part looks at app sustainability from the perspective of the developer. I have used the following (abstract) equation in my presentations for the past 10 years:

Attractiveness = Market x Generosity x Value / Effort   

In this equation,
  • Market is the size of the potential market that a given platform offers. This equation was developed to understand the fate of mobile OSes, but it applies to all kinds of platforms, from cloud service platforms that propose APIs (another dream of phone operators for the past 10 years) to connected objects.
  • Generosity is the share of the revenue (app price or advertising revenue) that is sent back to the developer. For instance, Apple keeps a hefty 30%, whereas Android is more generous. Set-up costs should be factored in for those platforms where some form of license or tools investment is still necessary.
  • Value is what the platform brings to the developer, as far as the end user experience is concerned. When I said “abstract” earlier, I meant that I don’t have a formula to measure “Value”. Most often, it is a judgment call from the developer, who evaluates what innovative and relevant services may be developed with the platform. There is subjectivity involved, such as the infamous “cool factor”, which favors sets of APIs & features from which “cool stuff” may be built.
  • Effort is the amount of time it takes to build “one unit of value”. This is where the difference lies between great players, who provide the right SDK, community support, testing services and an efficient delivery (store) platform, and other less qualified players. Many API exposure programs have failed during the past 10 years because the effort expected from the developer was much too high. This is also why one should enroll help from qualified actors such as Apigee to develop an API strategy. (A back-of-the-envelope numerical sketch of the equation follows this list.)
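
Here is that sketch: a few lines that simply evaluate Attractiveness = Market x Generosity x Value / Effort for three hypothetical platforms. All figures are invented for illustration (the post gives no numbers), and “Value” in particular remains a judgment call.

```python
# Back-of-the-envelope evaluation of the attractiveness equation.
#  - market:     addressable users of the platform
#  - generosity: share of revenue returned to the developer (0.70 if the store keeps 30%)
#  - value:      perceived value of the platform's APIs/SDK, on an arbitrary 0-10 scale
#  - effort:     person-days needed to build "one unit of value"
def attractiveness(market, generosity, value, effort):
    return market * generosity * value / effort

hypothetical_platforms = {
    "Platform A": dict(market=1_000_000_000, generosity=0.70, value=8, effort=10),
    "Platform B": dict(market=1_500_000_000, generosity=0.70, value=7, effort=12),
    "Platform C": dict(market=50_000_000, generosity=0.80, value=6, effort=25),
}

for name, p in hypothetical_platforms.items():
    print(f"{name}: attractiveness = {attractiveness(**p):,.0f}")
```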

Roughly speaking, the equation is an abstract form of “Expected Income / Expected Effort”. I have used this equation in the past years to explain why there would be two or at most three open mobile OSes in the future, but it actually tells us a lot more. You may understand why Microsoft announced (at last) that Windows Phone would be free in the future. The “Market” factor yields a “winner-take-all” dynamic that we have observed for many platforms (the Matthew effect: the platform with the most users attracts more developers, benefits from more open innovation, and hence attracts more new customers; a toy simulation of this loop follows the list below). It also gives a few insights for a successful platform strategy:
  1. Growth: get a critical mass of customers as quickly as possible. To jumpstart the virtuous cycle (that is, to enroll app developers while the market is not there yet), one must use rewards and gamification, such as hackathons.
  2. Expose as much value from your APIs as possible, with a focus on differentiation, that is, exposing things that are both useful and not readily available elsewhere. This is probably the most strategic factor for predicting the future success of connected objects, or of larger domains such as the connected (smart) home.
  3. Reduce the effort for the developer by embracing open source and Web standards (languages, development tools, API styles, libraries, etc.). The adjacent “Fun vs. Effort” illustration is taken from a humorous site, but thinking in terms of value/effort is critical to system analysis.
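
And here is the toy simulation of the winner-take-all loop mentioned above; it is my own illustrative sketch with made-up parameters, not a model from this post. Two platforms start almost even, developers allocate their effort disproportionately to the larger user base, and users follow the apps, so a small initial edge compounds into dominance.

```python
# Toy rendering of the Matthew effect: users attract developers, developers
# build the apps that attract users, so a small head start compounds.
def user_share_trajectory(share_a=0.52, rounds=20, sensitivity=3.0, inertia=0.8):
    """Platform A's user share over time under a simple reinforcement loop."""
    history = [share_a]
    for _ in range(rounds):
        # Developers allocate effort disproportionately to the larger user base...
        dev_share_a = share_a ** sensitivity / (
            share_a ** sensitivity + (1 - share_a) ** sensitivity)
        # ...and users gradually follow where the apps are.
        share_a = inertia * share_a + (1 - inertia) * dev_share_a
        history.append(share_a)
    return history

print([round(s, 2) for s in user_share_trajectory()])  # a 52% start drifts towards dominance
```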

This equation also gives a way to evaluate the intrinsic value of a platform. It follows from what was said earlier that this value is the capacity to generate revenue streams from apps. This, in turn, is mostly related to accumulated user data. This leads to the idea, reported by Henri Verdier, that data is the new code. The algorithms change constantly, and the best ones are produced by external developers (hence the open innovation paradigm). What changes more slowly is the API structure, and what accumulates over time is the amount of user data. This is important enough to be worth a future post when I report on the work of our group at the NATF on Big Data.















 