Sunday, June 18, 2017

Digital Experience Factories



 1. Introduction


I left AXA a month ago to join Michelin. This is always a great moment to reflect on some of the ambitions of the past years. Today I will write about Digital Experience Factories. As AXA Group Head of Digital, I worked on setting up a Digital Experience Factory, a software development organization geared to produce digital artefacts and experiences, following my previous work on lean software factories at Bouygues Telecom. The introduction of “experience” in the name is a way to emphasize the importance of customer experience in the digital world, but this has always been a key ambition of lean software factories, as shown in the illustration in the next section. The goal of this post is to re-formulate the key ideas and principles of a Digital Experience Factory, now that I have added a few more years of experience. It should be said that the concept of software factory is now well established and that most of what looked new in 2012 is mainstream in 2017.

I have had a long history of experience and interest in software factories, agile organizations and lean software, but I really started to put the pieces together in 2012. I defined the “Lean Software Factory” as the target for our Bouygues Telecom Internet product software division (Internet gateways and set-top boxes) by merging principles from Agile (mostly SCRUM), Extreme Programming and Lean, as explained in this previous post. The theory and the background references were rich, but we actually focused on four practices only:
  • Team Problem Solving
  • Using visual management in a project room
  • Reducing WIP through Kanban
  • Love your code (5S for the code, coding discipline, code review, gardening, etc.)
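
To sketch what the third practice means in code – a toy Python model of my own, not a tool we actually used – the essence of Kanban is that a column whose WIP limit is reached refuses new work, which forces the team to finish items before starting new ones:

```python
class KanbanBoard:
    """Minimal Kanban board: pulling into a full column is refused."""

    def __init__(self, wip_limits):
        # e.g. {"todo": None, "doing": 2, "done": None}; None means no limit
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, item, column="todo"):
        self._check(column)
        self.columns[column].append(item)

    def pull(self, item, source, target):
        """Move an item downstream, enforcing the target WIP limit."""
        self._check(target)
        self.columns[source].remove(item)
        self.columns[target].append(item)

    def _check(self, column):
        limit = self.wip_limits[column]
        if limit is not None and len(self.columns[column]) >= limit:
            raise RuntimeError(
                f"WIP limit reached on '{column}': finish work before starting more")

board = KanbanBoard({"todo": None, "doing": 2, "done": None})
board.add("story-1"); board.add("story-2"); board.add("story-3")
board.pull("story-1", "todo", "doing")
board.pull("story-2", "todo", "doing")
# board.pull("story-3", "todo", "doing")  # would raise: WIP limit reached
```

The commented-out line is the behavior that matters: the board pushes back, and the blocked item becomes a visible signal for the whole team.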

This vision was presented at the Lean IT Summit in 2013 and you may find the slides here, with both the general principles and the four practices. For the French readers, a simplified presentation was made at the 4th Lean IT Summit in Lyon (2014), with the attached slides.

I will start this post with an illustration that was produced in 2012, because I have reused it extensively at AXA in a digital factory context. Although the picture was produced to illustrate our ambition with set-top boxes, it is sufficiently user-centric and generic to be widely applicable. I was happily surprised to find it so relevant five years later in a different context. I will propose a short summary in the next section.

Section 3 will focus on the critical dependency between innovation and the software factory. I have spent the major part of the past three years setting up lean startup innovation processes. I have already touched in a previous blogpost on the importance of the relationship between the innovation and software delivery processes, but I would like to emphasize the co-dependence of these two processes, which I have labelled “from customer to code” (lean startup) and “from code to customer” (devops). In the digital world (more generally, in the modern complex world), “the strategy is the execution” (i.e., you are what you do).

The last section will talk about the role of digital experience factories in the world of exponential information systems. Since “software is eating the world”, it creeps everywhere in companies’ businesses, inside the company (each piece of equipment in a factory, and each form of human collaboration, is becoming “smart or augmented”) and outside (customers’ digital lives or business partners). Software factories have a role to play in a larger software ecosystem, with a multiplicity of roles and stakeholders.


2. Digital Experience Factory Blueprint


The following picture is an illustration of a Digital Experience Factory – as well as a lean software factory. Although it is pretty old (in digital time), I have found that it is still a good blueprint for setting up a software factory in the digital world.



This picture is pretty much self-explanatory but I would like to point out a few things, i.e., explain the choice of a few keywords. To keep things short, let’s define the seven foundations of the Digital Experience Factory:

  1. The input for the factory is made of “pain points” & “user stories”. No one should be surprised to see user stories for an agile software shop, but I have found that they should not be separated from the original pain points which they are derived from. Our practice at AXA has been to build “UVP trees”, which are a graphical representation that links the pain points, the UVP (unique value proposition) and the user stories. Sharing the UVP with everyone in the software shop considerably improves the quality of the code (from a customer experience point of view). More generally, a key principle of a lean organization is to make sure that “the customer is represented on the production floor” and that customer testimonies – including pain points – are available (visually) to all actors of the process (not just the designers or product marketers).
  2. The output of any software development is end-user experience and is measured with user satisfaction. Customer satisfaction is the “true north” of any lean (in the Toyota Way sense) organization. Because customer satisfaction is complex (in a systemic sense), it requires an incremental approach and a feedback loop.
  3. CICD (Continuous Integration and Continuous Delivery) is the crown jewel of modern software organizations. This is where the huge productivity gap resides, but it also requires significant effort to set up. I refer you to the great Octo book, the Web Giants, which I have used extensively to evangelize and promote change in the past 5 years. CICD starts with Continuous Build and Integration and continues with Continuous Delivery using DevOps practices. This is probably the part of the picture that has evolved the most in the past 5 years, since DevOps is now a mainstream critical recommendation.
  4. Test-driven development is also a critical aspect of a digital experience factory because it fuels the CICD ambition (automated tests and automated delivery go hand in hand), but also because it helps to produce higher quality code with less stress, hence more pleasure. I urge you to read Rich Sheridan’s wonderful book “Joy, Inc.” to understand the importance of culture and pride in software development. This obviously goes back to the three tenets of self-motivation according to Daniel Pink: autonomy, mastery and purpose.
  5. Visual Management & Kanban are the most visible parts that are borrowed from lean management principles in a Digital Experience Factory. There are two plagues in most software organizations: rework (and its cousin, dead code) and waiting (people waiting for one another). Every software development audit that I have had to undergo in my past 15 years of professional experience has found these two issues. Visual Management in general, and Kanban in particular, are the best way to tackle these two problems.
  6. “Source code is king” in a digital software factory. The new world of software is characterized by an increased rate of change and innovation. This creates two new requirements: (a) one must love one’s source code because it will need to be read and changed constantly; (b) one must reuse as much existing code as possible, hence the importance of leveraging open source as a code feed (cf. the illustration).
  7. Synchronized team work: the digital experience factory is organized into squads, autonomous cross-functional teams. There are at least three important ideas here. First, squads are cross-functional teams where all necessary skills work together. Second, synchronized work means working together at the same time on the same problem. This creates an environment where everyone “understands a little bit of everything”, which reduces informational friction considerably. Last, the squad is autonomous, both for speed and motivation.
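
To illustrate the first foundation, here is a minimal sketch of a “UVP tree” as a data structure – the class and field names, and the insurance example, are my own illustration, not an AXA artifact. The point is traceability: each user story can be walked back, through the UVP, to the original customer pain points it relieves.

```python
from dataclasses import dataclass, field

@dataclass
class PainPoint:
    description: str   # customer verbatim, as heard or observed

@dataclass
class UVP:
    statement: str                         # unique value proposition
    pain_points: list = field(default_factory=list)  # pains it answers

@dataclass
class UserStory:
    text: str          # "As a <user>, I want ... so that ..."
    uvp: UVP = None

    def why(self):
        """Walk back up the tree: which pains does this story relieve?"""
        return [p.description for p in self.uvp.pain_points]

pain = PainPoint("I never know the status of my insurance claim")
uvp = UVP("Real-time claim tracking in one tap", [pain])
story = UserStory("As a claimant, I want a notification at each claim step", uvp)
```

Displaying `story.why()` next to the Kanban card is one way of keeping “the customer represented on the production floor”.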




3. Innovation Factories and Learning Loop



The following picture is borrowed from a presentation that I made at XEBICON 2015. It is the best illustration that I have of the interdependence of the innovation process and the software delivery process, and it illustrates what I have been attempting to build at AXA during the past few years.


The key point of this picture is that, although there are two processes, there is only one team and one product being built. In other words, the same people participate in both processes. The capability that the first process is aiming to build is to produce digital artefacts (mobile or web apps, connected objects, cloud services, etc. – i.e., code) from listening to, and observing, the customer. The capability of the second process is being able to deliver to customers – at scale – a product/service from the original code that is produced by the developers, in a continuous, high-frequency and high-quality manner. What I have learned over the years is that these two processes are very dependent on each other, which is, once more, a lesson from the Web Giants! It is very difficult to run a lean software factory without the true customer centricity of a lean startup approach (cf. the importance of customer pain points, satisfaction, user stories and testimonies in the previous section). It is equally difficult to implement a lean startup approach without the performance of a great DevOps software factory: the iterative process requires a high frequency of delivery, and customer satisfaction demands high-quality software with high performance and as few defects as possible.


As stated earlier, strategy and execution are merged in the digital world: a strategy only becomes real when it has been executed and adapted to the “real-time” environment, and execution requires growing and adapting the strategy continuously. Success becomes a function of the “digital situation potential”, which is a combination of skills, low technical debt and a flexible open architecture. The following illustration is borrowed from the same blog post. It shows the importance of separating different time scales:
  • T0: The immediate time scale of customer satisfaction: delivering what is requested. This is the takt time of the factory process.
  • T1: The “mid-term” time of continuous improvement. This is the kaizen time, which is more uncertain since some problems take longer to solve.
  • T2: The “long-term” time of learning. In a complex world, most learning is performed “by doing”. Training happens “on the gemba”, through practice.



This picture is also a great illustration of the double benefits of the digital experience factory’s agile and lean roots. Lean and Agile reinforce each other. Agile software development practices are mostly T0 (and some T1, with reflection practices) while lean emphasizes T1 (kaizen) and T2 (kaizen again! plus dojo practices). Agile and SCRUM were born as “project development methods” whereas lean software programming is geared towards product development. I have covered this topic in more detail in my post about Lean and Architecture. Long-term sustainable development requires architecture, and it is somehow not easy to see where architecture fits in the agile framework, while architecture is a cornerstone for lean sustainable development. I also refer you to the book “Lean Architecture: for Agile Software Development” by James O. Coplien & Gertrud Bjørnvig.

A key piece of the Digital Experience Factory illustration is the feedback loop from customer experience (i.e., the satisfaction or the absence of satisfaction / the usage or the absence of usage). I have formalized the “Customer Feedback Learning Loop” (CFLL) over the years, and our experience at AXA has helped a lot to set up best practices that may be easily reproduced.
  • CFLL is practiced with three “channels”: implicit, explicit and social. Implicit listening means using the power of embedded analytics to track the effective usage of customers. Explicit is “active listening” of users to hear about their usage and satisfaction. Explicit means that we look for verbatims (e.g., in the stores) and testimonies from users (through interviews). Active means that this is a conversation: we may ask questions or answer customers. Social listening requires setting up communities/digital tools so that users may act as a group. Experience over the past 15 years has shown that the dynamics of feedback are very different with a group, which feels more empowered, than with individuals.
  • CFLL is managed as any quality improvement loop, using a “Toyota-style A3” which supports a PDCA (Plan-Do-Check-Act) approach. Looking for root causes using the “5 whys”, setting up kaizens with the whole team, and carefully formulating assumptions are critical, since digital troubleshooting is hard and full of counter-intuitive surprises.
  • CFLL is part of the Growth Hacking toolbox and also leverages “source code as a marketing tool”, that is, making the digital product a marketing and sales channel for itself.
  • Because social tools are useless without a community, a key task of the CFLL approach is to grow and nurture a community of engaged users. This is very well explained by Guy Kawasaki in his book “The Art of the Start”: success comes from fast iterations applied to rich feedback, and there is no better way to get this rich feedback than building an “ambassador community”.
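
As an illustration of the second bullet, here is a minimal sketch of one CFLL item managed PDCA-style. The structure and field names are my own shorthand, not a canonical Toyota A3: each piece of customer feedback carries its channel, its 5-whys chain toward a root cause, and the explicit assumption being tested.

```python
from dataclasses import dataclass, field

@dataclass
class A3Item:
    """One customer-feedback problem tracked through a PDCA loop."""
    problem: str
    channel: str                               # "implicit" | "explicit" | "social"
    whys: list = field(default_factory=list)   # 5-whys chain toward a root cause
    plan: str = ""                             # countermeasure + explicit assumption
    check: str = ""                            # what the next feedback loop showed
    act: str = ""                              # standardize, or loop again

item = A3Item(
    problem="40% of users abandon the claim form on step 3",
    channel="implicit",                        # seen in embedded analytics, not reported
)
item.whys = [
    "Step 3 asks for the policy number",
    "Users do not have it at hand on mobile",
]
item.plan = "Assumption: pre-filling the policy number will cut abandonment"
```

The discipline is in the empty fields: an item is not closed until `check` and `act` have been filled by a new pass through the loop.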



4. Software is eating the world


I will conclude this post by stepping back and discussing the role of the software factory in the larger software ecosystem that companies need to be part of, since “software is eating the world”. A first logical consequence of Marc Andreessen’s observation is that software is everywhere in companies, with a much larger footprint than our “traditional information systems”. I use the word “software” because “digital” is ambiguous. In its broad sense, everything that uses bits – digital information – is part of the “digital world”, hence software is part of it. In a narrower sense, which is used by many companies, “digital” is what matters to customers: the impact of software, computers, bits … in their daily lives. Many companies separate digital and IT because they use a narrower definition of digital (with the broader sense, smart factories, IOT, computer-mediated communication, information systems, web, mobile and cloud services, etc. are all part of the digital scope). To avoid that confusion, I use the word “software” as the common root for customer digital, information systems, Internet of Things, smart control of machines, and digital communication. This helps to understand that no software factory (in the digital, IS or other organizations) is an island. It is part of multiple ecosystems, internally within the company and externally with other partners and stakeholders. In a world which is dominated by platforms, a software factory is not only a process that produces code, it is also the host of a software environment (usually centered around a platform) and an ecosystem player through APIs (Application Programming Interfaces). This also means that software factories are de facto partners with the company’s information systems, while at the same time it is clear that the footprint of software in companies is growing faster than their information systems. To reuse an old term from the 2000s, “shadow IT” will grow and not shrink in the future.

There is indeed a common software ecosystem – of data models, APIs, architecture patterns – for each company that requires careful thinking and management, which comes from a global viewpoint. Platform engineering demands a common data exchange model (not necessarily a unique data model) as well as common engineering practices (know-how & culture) for APIs. In other words, “software is eating the world”, but it will eat the world of your own business better and faster if you care to manage this emergent process. This is a great opportunity for information systems (IS) organizations to play a “backbone role” for the various software ecosystems. Many of these software ecosystem issues are technical (software hosting and security constraints) and architectural (event-driven architecture, distributed data architecture), which require skills and experience that are part of information systems DNA. On the other hand, the loose coupling of platforms that are produced by autonomous teams may be a “new art” for the more traditional IS organizations. In the digital world, the platform is the team and the team is the platform: the platform is a live object that evolves continuously to adapt to its environment. A software platform is not something that you buy, not even something that you build, but something that you grow.

I will conclude with a simple but powerful idea: Digital Experience Factories are technology accelerators, i.e., open ports for companies to leverage the continuous flow of exponential innovation. What I mean is that digital factories are part of exponential information systems; they form the edge (the border) of the information systems that is in contact with consumers, business partners, and innovative players. In a fast-evolving world, most companies are looking for ways to become “future-proof”. The architecture of exponential information systems draws from biology, with a core that evolves slower while the frontier (membrane) evolves at a faster rate from the contact with the outside environment. Digital Factories are part of this “Fast IT”, with a clear opportunity to leverage the flow of new technologies such as Artificial Intelligence, Machine Learning or Natural Language Processing. As I explained in a previous post, there are four requirements to harness these new software techniques:
  • Access to data (intelligent software drinks huge amounts of multi-sourced data)
  • Use of modern software stacks (i.e., leverage the latest open source libraries & APIs)
  • Autonomous cross-functional teams
  • Lab culture (fact-based decisions; iterations and failures are welcome)

One can recognize in this list the foundations of the Digital Experience Factory as explained in the second section :)





Sunday, March 5, 2017

Regulation of Emergence and Ethics of Algorithms



1. Introduction


Algorithm governance is a key topic, which is receiving more and more attention as we enter this 21st century. The rise of this complex and difficult topic is no surprise, since “software is eating the world” – i.e., the part of our lives that is impacted by algorithms is constantly growing – and since software is “getting smarter” every year, with the intensification of techniques such as Machine Learning or Artificial Intelligence. The governance question is also made more acute since smarter algorithms are achieved through more emergence, serendipity and weakening of control, following the legendary insight of Kevin Kelly in his 1995 best seller “Out of Control”: “Investing machines with the ability to adapt on their own, to evolve in their own directions, and grow without human oversight is the next great advance in technology. Giving machines freedom is the only way we can have intelligent control.” Last, the algorithmic governance issue has become a public policy topic since Tim O’Reilly coined the term “Algorithmic Regulation” to designate the use of algorithms for making decisions in public policy matters.

Algorithm governance is a complex topic that may be addressed from multiple angles. Today I will start from the report written by Ilarion Pavel and Jacques Serris, “Modalities for regulating content management algorithms”. This report was written at the request of Axelle Lemaire and focuses mostly on web advertising and recommendation algorithms. Content management – i.e., deciding dynamically which content to display in front of a web visitor – is one of the most automatized and optimized domains of the internet. Consequently, web search and content recommendation are domains where big data, machine learning and “smart algorithms” have been deployed at scale. Although the report is focused on content management algorithms, it takes a broad view of the topic and includes a fair amount of educational material about algorithms and machine learning. Thus, this report addresses a large number of algorithm governance issues. It includes five recommendations about algorithm regulation intended for public governance stakeholders, with the common intent of more transparency and control for algorithms that are developed in the private sector.

This short blog post is organized as follows. The first part provides a very simplified summary of the key recommendations and the main contributions of this report. I will focus on a few major ideas which I found quite interesting and thought-provoking. This report addresses some of the concerns that arise from the use of machine learning and artificial intelligence in mass-market services. The second part is a reply from the angle of our NATF work group on Big Data. As was previously explained, I find that we have entered a “new world” for algorithms that could be described as “data is the new code”. This casts a different light on some of the recommendations from the Ilarion Pavel & Jacques Serris report. As algorithms are grown from data sets through training protocols, it becomes more realistic to audit the process than the result. The last part of this post talks about the governance of emergence, or how to escape what could be seen as an oxymoron. The question could be stated as “is there a way to control and regulate something that we do not fully understand?”. As a citizen, one expects a positive answer. Other sciences have learned to cope with this question a long time ago, since only computer scientists from Silicon Valley believe that we may control and fully understand life today (these issues arise constantly in the worlds of medicine, protein design or cellular biology, for instance). But the existence of this positive answer for Artificial Intelligence is a topic for debate, as illustrated by Nick Bostrom’s book “Superintelligence: Paths, Dangers, Strategies”. To dive deeper into this topic, I strongly recommend reading “Code-Dependent: Pros and Cons of the Algorithmic Age” by Lee Rainie and Janna Anderson.


2. Algorithm Regulation


First, I should start with my usual caveat that you should read the report rather than this very simplified and partial summary. The five recommendations can be summarized as follows:

  • Design a software platform to facilitate the study, the evaluation, and the testing of content / recommendation algorithms in a private/public collaboration opened to research scientists
  • Create an algorithm audit capability for public government
  • Mandate private companies to communicate about algorithm behavior to their customers, through a “chief algorithm officer” role
  • Start a domain-specific consultation process with private/public stakeholders to formalize what these “smart content management services” are and which best practices should be promoted nationally or internationally.
  • Better train public servants who use algorithms to deliver their services to citizens

A fair amount of the report talks about Machine Learning and Artificial Intelligence, and the new questions that these techniques raise from an algorithm ethics point of view. The question “how does one know what the algorithm is doing?” is getting harder to answer than in the past. On page 16, the concept of “loyalty” (is the algorithm true to its stated purpose?) is introduced and leads to an interesting debate (cf. the classical debate about the filter bubble). The authors argue – rightfully – that with the current AI & ML techniques the intent is still easy to state and to audit (for instance because we are still mostly in the era of supervised learning), but it is also clear that this may change in the future. A key idea that is briefly evoked on page 19 is that machine learning algorithms should be evaluated as a process, not on their results. Failure to do so is what triggered the drama of the Microsoft chatbot, which was made non-loyal (not to say racist and fascist) through a set of unforeseen but perfectly predictable interactions. One could say there is an equivalent of Ashby’s law of requisite variety, in the sense that the testing protocol should exhibit a complexity commensurate with the desired outcome of the algorithm. Designing training protocols and data sets for algorithms that are built from ML, so as to guarantee the robustness of their loyalty, is indeed a complex research topic that justifies the first recommendation.
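
To make the “evaluate the process, not the result” idea concrete, here is a sketch of what a loyalty gate could look like. The harness, its names and the toy model are my own illustration, not from the report: the stated intent is expressed as an executable property and replayed against the model on a probe set, so that each retraining must pass the same behavioral gate before release.

```python
def loyalty_gate(model, probes, intent_property):
    """Replay declared-intent checks against a (re)trained model.

    model: any callable, item -> score
    probes: inputs chosen to exercise the intent, including adversarial ones
    intent_property: callable (item, score) -> bool encoding the stated purpose
    Returns the list of violations; an empty list means the model is still loyal.
    """
    return [p for p in probes if not intent_property(p, model(p))]

# Toy example: a recommender whose stated intent is
# "flagged content is never scored above neutral (0.5)".
def toy_model(item):
    return 0.1 if item.get("flagged") else 0.8

probes = [{"id": 1, "flagged": True}, {"id": 2, "flagged": False}]
intent = lambda item, score: (not item.get("flagged")) or score <= 0.5
assert loyalty_gate(toy_model, probes, intent) == []   # loyal on this probe set
```

The gate audits the training process (same probes, same property, every release), not the inner workings of the model.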

We hear a lot of conflicting opinions about the threat of missing the train of AI development in Europe or in France, compared to the US or China. The topic is amplified by the huge amount of hype around AI and the enormous investments made in the last few years, while at the same time there seems to be a “race to open source” from the most notorious players. The authors propose three scenarios of AI development. In the first scenario, the current trend of sharing dominates and produces “algorithms as a commodity”. AI becomes a common and unified technology, such as compilers. Everyone uses them, but differentiation occurs elsewhere. The second scenario is the opposite, where a few dominant players master the smart systems (data and algorithms) at a skill and scale level that produces a unique advantage. The third scenario focuses on data ecosystems but recognizes that the richness and regulatory complexity of data collection make it more likely to see a large number of “data silos” emerge (a larger number of locally dominant players, where the value is derived more from the data than from the AI & ML technology itself). As will become clear in the rest of this blog, I see the future as the combination of 2 and 3: massive concentration for a few topics (cf. Google and Facebook) that coexists with a variety of data ecosystems (if software is eating the world and tomorrow’s software is derived from data, this is too much to chew for a single player, even with Google’s span).

A key principle proposed by the authors is to “embody” the algorithm intent through the role of “chief algorithm officer”, with the implicit ideas that (a) algorithms have no will or intent of their own – there is always a human behind the code – and (b) companies should have someone who understands what the algorithm does and is able to explain it to stakeholders, from customers to regulators. The report makes a convincing case that “writing code that works is not enough”: the “chief algorithm officer” should be able to talk about it (say what it does) and prove that it works (does what is intended). There is no proof, on the other hand, that this is feasible, which is why the topic of algorithm ethics is so interesting. The authors recognize on page 36 that auditing algorithms to “understand how they work” is not scalable. It requires too much effort, will prove harder and harder as techniques evolve, and we might expect some undecidability theorems to hit along the way. What is required is a relaxed (weaker) mandate for algorithm regulation and auditing: to be able to audit the intent, the principles that guarantee that the intent is not lost, and the quality of the testing process. This is already a formidable challenge.

3. Data is the New Code


This tagline means that the old separation between data and code is blurring away. The code is no longer written separately, following the great thinking of the chief algorithm officer, and then applied to data. The code is the result of a process – a combination of machine learning and human learning – that is fed by the available data. “Data is the new code” was introduced in our NATF report to represent the fact that when Google values software assets for acquisition, it is the quantity and quality of collected data that gives the basis for valuation. The code may be seen as the by-product of the data and the training process. There is a lot of value and practical expertise in this training process, which is why I do not subscribe to the previously mentioned scenario of “AI as a commodity”. Building smart systems is first and foremost an engineering skill.

A first consequence is that the separation of the Chief Data Officer from the Chief Algorithm Officer is questionable. The code that implements algorithms is no longer static, it is the result of an adaptive process. Data and algorithms live in the same world, with the same team. It is hard to evaluate / audit / understand / assess the ethical behavior of data collection or algorithms if the auditor separates one from the other. Data collection needs to be evaluated with respect to the intent and the processes that are run (which has always been the position of the CNIL) and algorithms are – more and more, this is a gradual shift – the byproduct of the data that is collected.

Data ethics is also very closely related to algorithm ethics. On page 29, the report notes that bias in data collection produces bias in the algorithm’s output. This is true, and the more complex the inference from data, the more complex tracking these biases may be. The questions about the ethics of data collection, and the quality and fidelity of the data samples, are bound to become increasingly prevalent. As explained before, this is not a case where one can separate the data collection from the usage. To understand fairness – the absence of biases – the complete system must be tested. Serge Abiteboul mentioned in one of his lectures the case of Staples, whose pricing mechanism, through a smart adaptive algorithm, was found to be unfair to poorer neighborhoods (because the algorithm “discovered” that you could charge higher prices when there are fewer competitors around). I recommend reading the article “Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit” to see what a testing protocol/platform for algorithm fairness could look like (in the spirit of the first recommendation of the report). The concept of purpose is not enough to guarantee an ethical treatment of data, since many experiments show that big data mining techniques are able to “find private pieces of data from public ones”, i.e., to evaluate features that were not supposed to be collected (no opt-in, regulated topics) from data that were either “harmless” or properly collected with an opt-in. Although the true efficiency of the algorithms of “Cambridge Analytica” is still under debate, this is precisely the method that they propose to derive meaningful data traits from those that can be collected publicly.
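
In the spirit of the FairTest article, here is a minimal sketch of an unwarranted-association check. The toy data and the threshold are my own illustration, not the FairTest API: the auditor compares the algorithm’s outputs across a grouping that the declared purpose does not justify, and flags the disparity.

```python
def disparity(outputs_by_group):
    """Ratio between the highest and lowest mean output across groups."""
    means = {g: sum(v) / len(v) for g, v in outputs_by_group.items()}
    return max(means.values()) / min(means.values()), means

# Toy Staples-like case: prices produced by a pricing algorithm,
# grouped by neighborhood competition level (here a proxy for income).
prices = {
    "many_competitors": [9.5, 9.7, 9.6],
    "few_competitors":  [11.8, 12.1, 11.9],
}
ratio, means = disparity(prices)
if ratio > 1.1:   # threshold chosen by the auditor, not by the model
    print(f"unwarranted association suspected: disparity x{ratio:.2f}")
```

Note that the check treats the pricing algorithm as a black box: only its inputs and outputs are examined, which is exactly the posture available to an external auditor.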

The authors of the report are well aware of the rising importance of emergence in algorithm design. On page 4, they write that “one grows these algorithms more than one writes them”. I could not agree more, which is why I find the fourth recommendation surprising – it sounds too much like a top-down approach where data services are drawn from analysis and committees, versus a bottom-up approach where data services emerge from usage and collected data. In the framework of emergent algorithm design, what needs to be audited is no longer the code (the inside of the box, which is becoming more of a black box) but the emergence-controlling factors and the results:
  • Input data
  • Purpose (intent) of the algorithm
  • “Training” / “growing” protocol
  • Output data
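
These four controlling factors lend themselves to a simple audit record kept alongside each trained model. The sketch below (field names and the example values are my own illustration) shows what an auditor could inspect without opening the black box:

```python
import hashlib
import json
from dataclasses import dataclass

@dataclass
class EmergenceAuditRecord:
    """What an auditor inspects when the code itself is a black box."""
    intent: str              # stated purpose of the algorithm
    input_data_digest: str   # fingerprint of the training data actually used
    training_protocol: str   # how the model was grown (method, parameters, stop rule)
    output_metrics: dict     # measured behavior on the audit test suite

def digest(dataset_rows):
    """Stable fingerprint of a data set, so the audited input is verifiable."""
    blob = json.dumps(dataset_rows, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

record = EmergenceAuditRecord(
    intent="rank content by declared user interests only",
    input_data_digest=digest([{"user": 1, "clicks": 12}]),
    training_protocol="supervised learning, gradient boosting, early stop on AUC",
    output_metrics={"loyalty_violations": 0, "group_disparity": 1.03},
)
```

The digest matters: without a verifiable link between the record and the data that was actually used, the audit of the process is only declarative.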

This brings us to our last section: how can one control the system (delivering a “smart” experience to a customer) without controlling the “black box” (how the algorithm works)?

4. How to Control Emergence ?


The third recommendation addresses the need to communicate about the way algorithms operate. Following the previous decomposition, I favor the recommendation of communicating about intent, with the associated capability (recommendation #2) to audit loyalty (the algorithm does what its purpose says). On the other hand, I do not take this as far as literally explaining how the algorithm works. This was perfectly achievable in the past, but emergent algorithm design will make it more difficult. As explained earlier, there are many reasons to believe that it may simply be impossible from a scientific / decidability theory viewpoint.

This is still a slightly theoretical question as of today, but we are fast approaching a point where we will truly no longer understand the solutions proposed by the algorithms. Because AlphaGo uses reinforcement learning, it has been able to synthesize strategies that may be qualified as deceiving, or as hiding its intent from the opponent player. But humans are very good at understanding Go strategies. In the case of the recent wins of AI in poker tournaments, it is trickier, since we humans have a more difficult time understanding randomized strategies. We have known this from game theory and Nash equilibria for a long time: pure strategies are easier to understand, but mixed strategies are often the winning ones. Some commentators assess that the domination of the machine over humans is even more impressive for poker than for Go, which to me reflects the superiority of the machine at handling mixed (i.e., randomized) strategies. As we start mixing artificial intelligence with game theory, we will grow algorithms that are difficult to explain (i.e., we will explain the input, the output, the intent and the protocol, but not what the algorithm does). If one only uses a single AI or machine learning technique, such as deep learning, it is possible to still feel “in control” of what the machine does. But when a mix of techniques is used, such as evolutionary game theory, generative AI, combinatorial optimization and Monte-Carlo simulation, it becomes much less clear. As a practitioner of GTES (Game Theoretical Evolutionary Simulation) for a decade, I find it very clear that the next 10 years of Moore's Law will produce “smart algorithms” with deep insights from game theory that will make them able to interact with their environment – that is, us – in uncanny ways.
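The superiority of mixed strategies can be seen on the smallest possible example. For a 2x2 zero-sum game, the equilibrium mixture is the one that makes the opponent indifferent between her two pure responses; the matching-pennies payoffs below are a classroom illustration, nothing to do with the actual poker bots.

```python
def mixed_equilibrium_2x2(a, b, c, d):
    """Row player's payoff matrix [[a, b], [c, d]] in a zero-sum game.
    Returns the probability p of playing row 0 that makes the column
    player indifferent: p*a + (1-p)*c == p*b + (1-p)*d."""
    return (d - c) / ((a - c) - (b - d))

# Matching pennies: payoffs [[1, -1], [-1, 1]].
# No pure strategy survives; the only equilibrium is to randomize 50/50.
p = mixed_equilibrium_2x2(1, -1, -1, 1)
print(p)  # 0.5
```

A human can follow this tiny computation; the difficulty discussed above is that learned strategies over huge game trees are mixtures we can compute but no longer narrate.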

I have used the “black box” metaphor because a systemic approach to controlling “smart algorithms” is containment, that is, isolating them as a subsystem in a “box of constraints”. This is how we handle most other dangerous materials, from viruses to radioactive substances. This is far from easy from a software perspective, but there is no proof that it is impossible either. Containment starts with designing interfaces, to constrain what the algorithm has access to and what outcomes / suggestions it may produce. The experience of complex system engineering shows that containment is not sufficient, because of the nature of the complex interactions that may appear, but it is still a mandatory foundation for safe system design. It is insufficient for practical reasons: the level of containment that is necessary for safety is often in contradiction with the usefulness of the component. Think of a truly great “strong AI” in a battery-powered box with no network connection and a small set of buttons and lights as an interface. The danger of this “superintelligence” is contained, but it is not really useful either. The fact that safety may not come solely from containment is the reason we need complex / systemic testing protocols, as explained earlier.
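The containment pattern – restrict what goes in, filter what comes out – can be sketched as a wrapper around any opaque component. The class names, the toy model and the allowed-action vocabulary below are illustrative assumptions, not an actual safety framework.

```python
class ContainedAlgorithm:
    """Containment sketch: the smart component only sees what the
    interface exposes, and every suggestion passes an outbound filter.
    `inner` is any hypothetical black-box model with a .suggest() method."""

    def __init__(self, inner, allowed_fields, allowed_actions):
        self._inner = inner
        self._allowed_fields = set(allowed_fields)
        self._allowed_actions = set(allowed_actions)

    def run(self, raw_context: dict):
        # Inbound containment: restrict what the algorithm has access to.
        visible = {k: v for k, v in raw_context.items()
                   if k in self._allowed_fields}
        suggestion = self._inner.suggest(visible)
        # Outbound containment: restrict what it may produce.
        if suggestion not in self._allowed_actions:
            raise PermissionError(f"blocked suggestion: {suggestion}")
        return suggestion

class ToyModel:
    def suggest(self, context):
        return "discount" if context.get("loyal") else "raise_price"

box = ContainedAlgorithm(ToyModel(), {"loyal"}, {"discount"})
print(box.run({"loyal": True, "income": 12000}))  # 'discount'; 'income' is never seen
```

As the text notes, this is necessary but not sufficient: the filter only catches what its designer anticipated, which is exactly why systemic testing must complement it.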
Another possible direction is to “weave” properties into the code of the emergent algorithm. It is indeed possible to impose simple properties onto complex algorithms, properties that may be proven formally.

The paradox is that there are simple properties of programs, such as termination, which are undecidable, while at the same time, using techniques such as abstract interpretation or model checking, we may formally prove properties about the outputs. For my more technical readers, one could imagine weaving the purpose of the algorithm, using aspect-oriented programming, into a framework that is grown through machine learning. This is the implicit assumption of the sci-fi movies about Asimov's laws that are “coded into the robots”: they must be either “weaved” into the smart brain of the robot or added as a controlling supervisor – precisely the containment approach, which is always what gets broken in the movie. The idea of being able to weave “declarative properties” – which capture the intent of the algorithm and may be audited – into a mesh of code that is grown from data analysis is a way to reconcile the ambition of the Ilarion Pavel and Jacques Serris report with the reality of emergent design. This is a new field to create and develop, in parallel with the development of AI and machine learning in software that is eating the world. It will not happen without regulation and pressure from public opinion.
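In plain Python, a decorator can stand in for aspect-oriented weaving: the declarative property is attached to a learned function without touching its body, and stays readable by an auditor. The pricing function and its “fair band” property are invented for illustration only.

```python
import functools

def weave_property(predicate, description):
    """Aspect-style sketch: attach a declarative, auditable property to a
    function that was 'grown' (learned), without touching its body."""
    def decorator(grown_fn):
        @functools.wraps(grown_fn)
        def wrapper(*args, **kwargs):
            result = grown_fn(*args, **kwargs)
            assert predicate(result), f"woven property violated: {description}"
            return result
        wrapper.woven_property = description  # the intent, readable by an auditor
        return wrapper
    return decorator

@weave_property(lambda price: 0 < price <= 150, "price stays within the fair band")
def learned_pricing(features):
    # Stand-in for an emergent model; the real body would be opaque.
    return 100 + 10 * features.get("demand", 0)

print(learned_pricing({"demand": 2}))   # 120: the property holds
print(learned_pricing.woven_property)   # the declared, auditable intent
```

A runtime assertion is of course weaker than a formal proof, but it shows the shape of the idea: the property lives outside the grown code and can be audited independently of it.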


These are not theoretical considerations, because the need to control emergent design is coming very soon. Some of these concerns are pushed away by creating divides: “weak AI” that would be well controlled versus “strong AI” that is dangerous but still a dream; “supervised machine learning” that is by definition under control, versus “unsupervised learning” which is still a laboratory research topic. The reality is very different: these are not hard boundaries; there is a gradual shift, day after day, as we benefit from more computing power and more data to experiment with new techniques. Designing methods to control emergence requires humility (about what we do not know) and paranoia (because bad usage of emergence without control or foresight will happen).

Wednesday, December 21, 2016

Behavioral Change Through Systemic Games




1. Introduction


I had the privilege last month to give a keynote lecture on “Big Data, Behavioral Change and IOT Architecture” at the Euro-CASE Annual conference on “Big Data – Smarter Products, Better Society”. You may download the slides here. My lecture was divided into three parts: the first was about our NATF report on big data, the second focused on behavioral change and the last part presented some of my views about IOT architecture. I have already covered the first and last parts in previous blogposts, so today I will talk about behavioral change.

There is an obvious link between Big Data, Internet of Things and Behavioral Change. Many of the “smarter products” leverage IoT technologies and big data to help us change our behavior. This is true for wearables that are intended to help us take better care of our health and well-being, but it is also true for many products for your car or your home. IoT technology is used to capture data through sensors and to provide feedback through screens, speakers, motors, actuators, etc. Big Data methods are applied to extract value from the captured data so that the overall feedback experience is “smart” – hence the “smarter products” subtitle of this conference. However, it turns out that changing behavior is hard, and this is not a matter of technology: it is a matter of psychology. There is a fair amount of science that may be leveraged, but there is no silver bullet: designing digital objects or experiences that help you change your behavior is a difficult project. I am neither a behavioral scientist nor a psychology expert, thus this post is a short introduction to the topic. I am just trying to make a few cautionary points and to open a few doors.

This post follows the same outline that I used during the conference. The next section (Section 2) sets the landscape of behavioral change with respect to “smarter products” and IoT. The goal is to move the focus in IoT from data to user-centric design – which was my conclusion at the end of the lecture. Behavioral change requires time, stories and emotional design. Section 3 is a short summary of an NATF working group that worked for a year on understanding how people react to the exponential rate of change of ICT (information and communication technologies). The key takeaway is that there is no fear of ICT, but there exists adaptive stress. That stress may be relieved if we design digital experiences as learning experiences – quoting from Mary Helen Immordino-Yang, “the goals and the motivations of the digital environment should be readily apparent”. Section 4 draws on a few well-renowned scientists and sources to see how fun and learning may be embedded into digital experiences. The last section applies this to smart objects whose ambition is to coach you to change your behaviors towards a better or healthier lifestyle. The need to weave emotions, fun, self-learning and reflective story-telling leads to systemic serious games. Behavioral change requires a systemic posture, because of the importance of feedback loops, adaptive planning, and chronology. It also requires designing “smarter products” as games, with a focus on user emotions, story-telling and pleasure.

2. From Data to Knowledge through User-Centric Design


When experimenting with a connected wearable or device, most users do not want a dashboard, they want a story. I have already covered this in a previous post; to go further I suggest that you read “Inside Wearables - How the Science of Human Behavior Change Offers the Secret to Long-Term Engagement”. Owners of connected devices quickly become bored with their data dashboards, once the excitement of the first days has faded away. The story of wearables that are offered for Christmas and forgotten a few months later is a perfect illustration. Self-tracking is a good and healthy habit – recommended by psychologists in many situations – but self-tracking without sense does not work, because not everyone is a data scientist. This goes beyond the field of health improvement and connected wearables: similar observations have been made about smart home connected devices. Remote control and monitoring through your smartphone is not enough value for the connected gadgets that we bring home – often as gifts.

As expressed in the previous post, connected devices must come with a story and a coach. If we look at the numerous behavioral change models, you need a good story to start you moving, and you need a coach to keep going. There are many references about the fact that we are moved, and hence remember better, by stories and not data sets, but I am partial to Nassim Taleb's wonderful books. I strongly encourage you to read “Fooled by Randomness”. The importance of stories is deeply connected with the importance of emotions in learning, which I will evoke later. Stories trigger emotions that act as anchors in our learning process. One of the dominant behavior change models is the TransTheoretical Model (TTM). While stories are critical in the precontemplation/contemplation phases, the role of the coach is critical in the action/maintenance phases. The coach cannot be reduced to a feedback loop – otherwise dashboards would work. The coach must bring sense to the results that are collected by the connected device. Behavior change is hard; hence the coach's role is difficult. The coach needs to provide the proper information at the right time, together with the right emotion, to keep the “why” (motivation) alive while taking care of the “how” (engagement). We will return to how the science of “nudging” (i.e., designing the choice architecture) may help to nurture user engagement.

Behavior change must be approached as a user-centric design challenge. The role of biorhythms and chronology is very important. For instance, attention span has a complex structure with specific rhythms. Transient attention is very short (less than 10s) – this is how magicians and conjurers operate – while focused attention is on the order of less than 10 minutes. The “coaching content” needs to be delivered at the right moment, for the right duration and in the right “state of mind” from an emotional standpoint. A lot is known about demotivation and habit-formation cycles, but this is not a hard science; there is not much data available, and many controversies. Still, it looks like we need two months on average (66 days) to create a new habit, with a “danger zone” three weeks after the start (21 days), when motivation is at its lowest. This is consistent with a rule of thumb of elementary teachers which says that a new concept must be explained once, then repeated one day later and three weeks later.
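These timing rules of thumb can be turned into a simple coaching calendar. This is only a sketch: the milestone names are my own, and the 66-day and 21-day figures are the population averages quoted above, not per-user constants.

```python
from datetime import date, timedelta

def coaching_schedule(start: date) -> dict:
    """Milestones derived from the rules of thumb: repeat a new concept
    at day 1 and day 21, intensify support in the ~21-day motivation dip,
    and check in around the ~66-day habit-formation mark."""
    return {
        "introduce": start,
        "repeat_next_day": start + timedelta(days=1),
        "repeat_and_danger_zone": start + timedelta(days=21),
        "habit_formed_check": start + timedelta(days=66),
    }

plan = coaching_schedule(date(2017, 1, 1))
print(plan["repeat_and_danger_zone"])  # 2017-01-22
```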

Faced with the behavior change challenge, we need as much help as possible from social sciences, psychology, and neurosciences. Neurosciences have become very relevant in the last decade because we have learned a lot about the way the brain works and learns. Since Antonio Damasio's best-seller “Descartes' Error”, we know that emotions play a critical role in our thinking and learning. I am quoting once again from the great book “Emotions, Learning and the Brain” by Mary Helen Immordino-Yang: “It is literally neurobiologically impossible to build memories, engage complex thoughts, or make meaningful decisions without emotion”. She explains very clearly that “Emotional Learning Shapes Future Behavior”: “The learner's emotional reaction to the outcome of his efforts consciously or nonconsciously shapes his future behavior, inciting him either to behave in the same way the next time or to be wary of situations that are similar”. The last chapter of the book is entitled “Perspective from Social and Affective Neuroscience on the Design of Digital Learning Experience”. It is very relevant and a great read for anyone trying to help users change their behavior through connected devices and digital experiences. Here is a last quote from this chapter: “Here we turn the tables and suggest that many people may interact with their digital tools as if they were social partners, even when no other humans are involved. Thinking of digital learning as happening through dynamic, supported social interactions between learners and computers changes the way we design and use digital technologies for learning—and could help shed light on why we become so attached to our devices”.

3. Adaptive Stress Due to Technology Change Rate


In 2015 the NATF ICT commission conducted a series of interviews on the effects of ICT usage. We interviewed leading sociologists and psychologists, such as Francis Jauréguiberry, Dominique Cardon or Serge Tisseron, to better understand how digital experiences were accepted and appreciated by the average person. We wanted to better understand the tension, one could even say the paradox, between an ever-growing usage of smartphones, internet and new digital services, while at the same time there exist clear and growing “distrust signals”. We started our discussions with “fears”: fear that digital communication was cutting people off from “real communication”, fear that Google was making us stupid, etc. The conclusion from the majority of the interviews is that ICT adoption is indeed fast and widespread, and actually well received by the vast majority of users. Digital usage adds to, but does not replace, real life, and most people value “real life contacts” over digital ones. This is a complex topic that would deserve a separate post. Here I will just point out some of the conclusions and recommendations, because they are clearly related to learning and behavior change.

The main common idea from our experts is that the worries being expressed about ICT usage are symptoms of “adaptation stress”. The rate of technology change is faster than the rate of change of usage, which is itself much faster than the rate at which we understand these technology changes. We live in, and most of us welcome, a “world of accelerated permanent change”, where our products and services are constantly “upgraded” (we hate these “updates” because we do not understand them; they usually come when we do not expect them and they are forced upon us). The main worry that follows from this adaptation stress is the fear of not being in charge, the lack of mastery, especially from a time management perspective. Users interviewed by sociologists complain that they are no longer in charge of their own time. They see ICT usage as taking too much of their free time, with great difficulty in reclaiming control (e.g., the fear of missing out). The “digital detox” approach is a classical counter-reaction to this feeling.

The main recommendation from this workgroup is, quite logically, to spend more effort on training and explanations related to new digital products and services. The best way to reduce the stress of “losing control” is to give back the sense of “being in charge” through practical training. For instance, “digital life hygiene”, that is, the practice of digital usage control, with both temporal and spatial zones of “digital detox”, deserves to be taught. Digital training works best when it is both practical (in the “learn by doing” philosophy) and rooted in the real world, using devices and real-life environments and situations to embed the conceptual learning into a kinesthetic experience (in the tradition of Maria Montessori). This idea of “inviting the real world back into the virtual one” came up in many forms. A great piece of advice for teenagers and adults alike is to read aloud a message that is about to be sent electronically (SMS, chat, email, …) if complaining is involved. Neuroscience shows that reading aloud forces the facial muscles to express emotion, which is then carried to our “mirror neurons” so that we instantly feel what the effect on the other person may be (and then possibly adjust our message). Another set of recommendations about how to build better-accepted digital experiences was related to emotions: how to adapt the experience to user emotions (emotional design), but also how to leverage emotions as a training tool.

We all know that users don't read “user manuals” or documentation anymore. The challenge in alleviating the “adaptation stress” is to deliver digital experiences where learning and training are part of the customer journey. This is especially true for connected devices and quantified-self digital experiences, as presented in Section 2. “Digital” means that data analytics is a given: we can analyze user journeys at each step of user experiences and measure both discovery and appropriation. From this, an appropriation maturity model may be built, which can be used as a guideline for an “embedded connected tutorial”. The use of IOT and connected devices gives the additional advantage of a continuous feedback loop. Still, if training is conceived as an additional “online tutorial experience”, it most often fails to deliver the engagement that is needed for behavior change. The real challenge is to design the complete digital experience as a learning journey.
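An appropriation maturity model of this kind can be sketched as a mapping from observed user events to a maturity level. The level names and event names below are illustrative assumptions, not a product specification; a real model would be derived from actual analytics.

```python
# Ordered levels: each requires a superset of the previous level's events.
MATURITY_LEVELS = [
    ("novice",   {"opened_app"}),
    ("explorer", {"opened_app", "viewed_dashboard"}),
    ("regular",  {"opened_app", "viewed_dashboard", "set_goal"}),
    ("advanced", {"opened_app", "viewed_dashboard", "set_goal", "shared_result"}),
]

def maturity(events: set) -> str:
    """Return the highest maturity level whose required events
    have all been observed in the user's journey."""
    level = "novice"
    for name, required in MATURITY_LEVELS:
        if required <= events:
            level = name
    return level

print(maturity({"opened_app", "viewed_dashboard"}))  # 'explorer'
```

The embedded tutorial then only has to target the gap between the current level and the next one, which is exactly the continuous feedback loop described above.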


4. Adding Pleasure and Learning to Digital Experiences


  


Pleasure plays a key role in learning. The diagram shown to the right is borrowed from a biology talk about learning at a complex systems conference that I attended a few years ago. All living beings, from very simple organisms to humans, build their behavior from this simple cycle (among other things). This is well recognized in design. I borrow this great quote from “The A-B-C of Behaviour”: “Fun is the means by which we retrain our brain to learn new patterns of behavior”. Fun and pleasure can be introduced into a digital experience through many means, from rewards to surprise. Reward systems are heavily used in coaching and behavior change products. Surprise is a powerful emotion to trigger fun and to facilitate learning. I refer you to Michio Kaku's explanation of the evolutionary role of emotions in his book “The Future of the Mind”, which I have already mentioned in a previous post. He sees humans as hard-wired to like surprises because they help to constantly tune our planning system, which is an evolutionary advantage. Intelligent beings plan and make predictions about their environment; a surprise occurs when what happens (a joke, a conjurer's trick) is not what you were expecting. Because evolution has developed this pleasure from surprises, we are wired to explore and to learn. Mary Helen Immordino-Yang expresses a similar idea: “In this sense, emotions are skills—organized patterns of thoughts and behaviors that we actively construct in the moment and across our life spans to adaptively accommodate to various kinds of circumstances, including academic demands”.

Learning is also a social activity, which means we should leverage the power of communities when designing behavior change experiences. This is also well known in the design and digital world. Seth Godin has taught us to build viral experiences, where sharing is not an afterthought added to increase the spread of the product, but something at the core of the experience: “Virality is the product”. Experimental psychology and neuroscience show that we learn by imitation. The best way to build a tutorial is to show a video of someone else doing the very thing that needs to be learned (a great insight from the workgroup mentioned in the previous section). Dan Ariely, in his best-seller “Predictably Irrational – The Hidden Forces that Shape Our Decisions”, explains the importance of social norms. In many instances, social norms are much more powerful than money at motivating people. More generally, what behavioral sociology tells us about cognitive biases is very relevant for designing engaging learning experiences towards behavior change. For instance, Dan Ariely talks about the planning fallacy, the fact that we consistently underestimate the time it will take us to complete a task. Here a digital feedback loop may prove a useful aid. Another fascinating example is the “high price of ownership”: “Ownership pervades our lives and, in a strange way, shapes many of the things we do”. Once we think that we own a thing, an idea, a goal … we overvalue what it represents. This is why the “emotion of ownership” is proposed as a goal by Don Norman, which is very relevant for digital experiences, and why customization is such an important feature (make it your own). On the other side, each choice is painful because of the effort that we put into any decision. Procrastination should never come as a surprise, and choice architectures should factor in the “consequences of non-decision”.

The best way to develop a learning experience woven into the overall experience is to “nudge” users towards behavior change. I am referring here to the “choice architecture” concepts popularized by R. Thaler and C. Sunstein's best-seller “Nudge – Improving Decisions About Health, Wealth, and Happiness”. This book shows very interesting ways of designing choice frameworks that take our cognitive biases (anchoring, over-valuing the present versus the future, the availability bias: over-valuing what we have in front of us or at the top of our mind) into account. For instance, helping people to save more is a great behavioral change challenge. The “Save More Tomorrow” program showed how to use behavioral economics to increase employee savings. The section about “social nudges” is a source of inspiration for introducing priming into digital experiences. Experimental psychology has a lot to tell us about how to nudge and motivate. For instance, to return to the reward topic, science shows that it is better to break a reward down into many small ones than to give a larger one less frequently. This is why the practice of small rewards such as “badges” is so common in the digital world (combining two insights, about social norms and frequency). The concepts of “nudge” and choice architecture are also very relevant to designing “progressive onboarding”, that is, precisely the embedded incremental learning experience built into a digital product. To deliver the proper nudge, the experience designer must build a “usage and learning maturity model”, which is used to transform digital analytics (cf. Section 3) into an estimate of how much has been learned already. From this the user may be “nudged” with the proper “tool tips” (a tool tip is a tiny piece of information presented at the right time, according to the usage context).

There exists a wealth of insights that behavioral change can borrow from experimental psychology. In addition to Ariely and Thaler, it is logical to mention Daniel Kahneman and his wonderful book “Thinking, Fast and Slow” (which I have commented on in a previous blog post). This book contains wonderful examples related to “the marvels of priming”. For instance, hearing about other people changes your own ability: “This remarkable priming phenomenon – the influencing of an action by the idea – is known as the ideomotor effect. … The ideomotor link also works in reverse…. Reciprocal priming tends to produce a coherent reaction: if you were primed to think of old age, you would tend to act old, and acting old would reinforce the thought of old age”. Kahneman also illustrates our aversion to loss, which is closely related to the emotion of ownership: “We should not be surprised: losses evoke stronger negative feelings than costs. Choices are not reality-bound because System 1 is not reality-bound”. He explains a number of “fallacies” (in the sense of Taleb's “narrative fallacy”), such as the availability bias (WYSIATI: What You See Is All There Is). The insights of “the law of small numbers” are very relevant to dashboards and tracking: humans are not good at analyzing small data sets; we tend to see stories and correlations everywhere. This is even true for professional statisticians: “It was evident that the experts paid insufficient attention to sample size”. The list of biases from System 1 (our fast thinking process, cf. “Blink” by Malcolm Gladwell) is summarized on page 105; this list is quite useful for improving the design of behavior change experiences. The combination of framing (using the power of words and emotion to build the choice architecture) and understanding decision weights (we overvalue low-probability events – the table on page 315 is an eye-opener) can be leveraged to “nudge” more efficiently.
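The decision-weight table Kahneman presents can be approximated by the probability weighting function from Tversky and Kahneman's 1992 cumulative prospect theory fit: small probabilities are overweighted and large ones underweighted. A minimal sketch (γ ≈ 0.61 is their fitted value for gains; exact numbers vary across studies):

```python
def decision_weight(p: float, gamma: float = 0.61) -> float:
    """Tversky & Kahneman (1992) probability weighting function:
    w(p) = p^g / (p^g + (1-p)^g)^(1/g)."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

print(round(decision_weight(0.01), 3))  # ~0.055: a 1% chance "feels" like 5.5%
print(round(decision_weight(0.90), 3))  # ~0.71: a 90% chance feels smaller than it is
```

For an experience designer, this is the quantitative reason why a small chance of a rare badge can motivate far more than its expected value suggests.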

5. Behavior Change as a Systemic Game


If we assemble everything together – the need for incremental learning, the necessity of pleasure, the pleasure of learning – the best digital experience that we may propose for behavior change is a “game to learn about yourself”. Technology is definitely available to help: IoT sensors may monitor the user and her environment, digital tools and user interfaces may be used to tell a story, and data science may be used to generate insights that feed self-discovery, learning and surprise (i.e., learning something new). Data science is very relevant to developing such “serious games”. Machine learning algorithms are known to provide predictive, prescriptive and cognitive knowledge from data. Predictive analysis is very useful for the playful nature of the game. It is what makes a behavior change digital experience dynamic and interactive. Even if the prediction is not always accurate, it creates a surprise element and contributes to making the experience fun. Prescriptive analytics is about providing insights. This is a heavily debated topic because, as we all know, correlation is not causation. Still, experience shows that powerful insights may be drawn from data collected from multiple sources of IoT sensors. Last, cognitive analytics is about helping the user learn about herself. To build such a self-learning experience and to understand the related challenges of behavior change motivation, the book by Samantha Kleinberg, “Why – A Guide to Finding and Using Causes”, is a great source of insights.
The book is full of warnings, closely related to the biases evoked in the previous section, such as the following: “many cognitive biases lead to us seeing correlations where none exist because we often seek information that confirms our beliefs”; “It's important to remember that, in addition to mathematical reasons why we may find a false correlation, humans also find false patterns when observing data”; and “Most critically for this book, the approach of interviewing only the winners to learn their secrets tells us nothing about all the people who did the exact same things and didn't succeed”. This last quote is interesting because it emphasizes the limits of statistics and samples, in contrast with the power of personalized medicine and coaching. Samantha Kleinberg's book is also very positive, because it shows that understanding the “why” is critical for self-motivation in behavior change: “Will drinking coffee help you live longer? Who gave you the flu? What makes a stock's price increase? Whether you're making dietary decisions, blaming someone for ruining your weekend, or choosing investments, you constantly need to understand why things happen”. It is also a comprehensive source about the art of explanations, which is closely related to learning.
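The predictive “surprise” ingredient described above can be sketched very simply: predict tomorrow's value from recent history and flag a surprise when reality deviates strongly. The step-count scenario, the 7-day baseline and the 2-sigma threshold are all illustrative assumptions.

```python
import statistics

def predict(history: list) -> float:
    # Naive baseline: the mean of the last 7 observations.
    return statistics.mean(history[-7:])

def surprise(history: list, actual: float, threshold: float = 2.0) -> bool:
    # A surprise is a deviation beyond `threshold` standard deviations.
    mu = predict(history)
    sigma = statistics.stdev(history[-7:])
    return abs(actual - mu) > threshold * sigma

steps = [8000, 7500, 8200, 7900, 8100, 7700, 8050]
print(surprise(steps, 15000))  # True: worth a playful comment in the game
print(surprise(steps, 8000))   # False: nothing new to tell the user
```

Even this crude predictor is enough to create the game dynamic the section describes: the model does not need to be accurate, it needs to notice when something noteworthy happened.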

The game paradigm implies that you learn by doing. This also applies to learning about yourself as a system. The game becomes a search to discover new insights, as in a treasure hunt. The object of the game is not the “static you” but the dynamic version, a system that evolves constantly. Applied to weight loss, it means that the insights are not what your weight should be (a form of medical advice – no fun in that) but rather which behaviors make you gain unnecessary weight (learning about yourself from your experience). This type of approach is natural for regular “quantified self” practitioners, but as we noticed, they are but a fraction of the overall population. This is a missed opportunity, in a sense, since self-tracking is good for you if you have the discipline for it. There are multiple references in all kinds of disciplines, from mental health and psychology to dieting or quitting smoking, including sports coaching. Quoting from another best-seller, by Gretchen Rubin: “Current research underscores the wisdom of Benjamin Franklin's chart-keeping approach. People are more likely to make progress on goals that are broken into concrete, measurable actions, with some kind of structured accountability and positive reinforcement.” The challenge that a behavior change game must solve is how to leverage behavioral science to bring the benefits of self-tracking (insights and systemic self-discovery) to people who have neither the mind nor the inclination for it. The system paradigm means that the user is “in the loop” and that the game must yield actions (and reactions) from which learning may be derived. For instance, experience shows that it is easier to nudge people into fixing approximated data than into entering it in the first place.

Let us conclude with the observation that systemic games for behavior change fit squarely in the field of P4 (Predictive, Preventive, Personalized, and Participatory) medicine. Prevention is the goal of behavioral change experiences; the predictive capabilities of data science are necessary to develop engaging experiences; behavioral change games are personalized by construction, operating under the assumption that we are all different when it comes to behavior change, from motivations to effects. Last, behavior change games are participatory by construction at the individual level (learning comes from acting) and may leverage the power of social and community nudges, modulo the respect of customer privacy. Personalized medicine is, actually, more concerned with small data than big data.





 