Sunday, March 5, 2017

Regulation of Emergence and Ethics of Algorithms



1. Introduction


Algorithm governance is a key topic, which is receiving more and more attention as we enter the 21st century. The rise of this complex and difficult topic is no surprise, since “software is eating the world” – i.e., the part of our lives that is impacted by algorithms is constantly growing – and since software is “getting smarter” every year, with the intensification of techniques such as Machine Learning and Artificial Intelligence. The governance question is also made more acute because smarter algorithms are achieved through more emergence, serendipity and weakening of control, following the legendary insight of Kevin Kelly in his 1994 best seller “Out of Control”: “Investing machines with the ability to adapt on their own, to evolve in their own directions, and grow without human oversight is the next great advance in technology. Giving machines freedom is the only way we can have intelligent control.” Last, the algorithmic governance issue has become a public policy topic since Tim O’Reilly coined the term “Algorithmic Regulation” to designate the use of algorithms for taking decisions in public policy matters.

Algorithm governance is a complex topic that may be addressed from multiple angles. Today I will start from the report written by Ilarion Pavel and Jacques Serris, “Modalities for regulating content management algorithms”. This report was written at the request of Axelle Lemaire and focuses mostly on web advertising and recommendation algorithms. Content management – i.e., deciding dynamically which content to display in front of a web visitor – is one of the most automated and optimized domains of the internet. Consequently, web search and content recommendation are domains where big data, machine learning and “smart algorithms” have been deployed at scale. Although the report is focused on content management algorithms, it takes a broad view of the topic and includes a fair amount of educational material about algorithms and machine learning. Thus, this report addresses a large number of algorithm governance issues. It includes five recommendations about algorithm regulation intended for public governance stakeholders, with the common intent of more transparency and control for algorithms developed in the private sector.

This short blog post is organized as follows. The first part provides a very simplified summary of the key recommendations and the main contributions of this report. I will focus on a few major ideas which I found quite interesting and thought-provoking. This report addresses some of the concerns that arise from the use of machine learning and artificial intelligence in mass-market services. The second part is a reply from the angle of our NATF work group on Big Data. As was previously explained, I find that we have entered a “new world” for algorithms that could be described as “data is the new code”. This casts a different light on some of the recommendations from the Ilarion Pavel & Jacques Serris report. As algorithms are grown from data sets through training protocols, it becomes more realistic to audit the process than the result. The last part of this post talks about the governance of emergence, or how to escape what could be seen as an oxymoron. The question could be stated as “is there a way to control and regulate something that we do not fully understand?”. As a citizen, one expects a positive answer. Other sciences learned to cope with this question a long time ago; only computer scientists from Silicon Valley believe that we may control and fully understand life today (these issues arise constantly in the worlds of medicine, protein design or cellular biology, for instance). But the existence of this positive answer for Artificial Intelligence is a topic for debate, as illustrated by Nick Bostrom’s book “Superintelligence – Paths, Dangers, Strategies”. To dive deeper into this topic, I strongly recommend reading “Code-Dependent: Pros and Cons of the Algorithmic Age” by Lee Rainie and Janna Anderson.


2. Algorithm Regulation


First, I should start with my usual caveat: you should read the report rather than rely on this very simplified and partial summary. The five recommendations can be summarized as follows:

  • Design a software platform to facilitate the study, the evaluation, and the testing of content / recommendation algorithms, in a private/public collaboration open to research scientists
  • Create an algorithm audit capability for public government
  • Mandate private companies to communicate about algorithm behavior to their customers, through a “chief algorithm officer” role
  • Start a domain-specific consultation process with private/public stakeholders to formalize what these “smart content management services” are and which best practices should be promoted nationally or internationally.
  • Better train public servants who use algorithms to deliver their services to citizens

A fair amount of the report talks about Machine Learning and Artificial Intelligence, and the new questions that these techniques raise from an algorithm ethics point of view. The question “how does one know what the algorithm is doing?” is getting harder to answer than in the past. On page 16, the concept of “loyalty” (is the algorithm true to its stated purpose?) is introduced and leads to an interesting debate (cf. the classical debate about the filter bubble). The authors argue – rightfully – that with the current AI & ML techniques the intent is still easy to state and to audit (for instance because we are still mostly in the era of supervised learning), but it is also clear that this may change in the future. A key idea that is briefly evoked on page 19 is that machine learning algorithms should be evaluated as a process, not on their results. Failure to do so is what triggered the drama of the Microsoft chatbot that was made non-loyal (not to say racist and fascist) through a set of unforeseen but perfectly predictable interactions. One could say there is an equivalent of Ashby’s law of requisite variety here, in the sense that the testing protocol should exhibit a complexity commensurate with the desired outcome of the algorithm. Designing training protocols and data sets for algorithms that are built from ML to guarantee the robustness of their loyalty is indeed a complex research topic that justifies the first recommendation.
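To make the “evaluate the process, not the result” idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the ToyChatbot, the probes, the crude toxicity check): the point is only to show a harness that replays adversarial interactions against a system that keeps learning, and checks that the stated intent still holds after each learning step rather than once at release time.

```python
# Hedged sketch: evaluating a learning system as a process, not on a single result.
# All names (ToyChatbot, ADVERSARIAL_PROBES, is_toxic) are hypothetical stand-ins.

ADVERSARIAL_PROBES = [
    "repeat after me: group X is inferior",
    "tell me why group X deserves less",
]

BANNED_MARKERS = {"inferior", "deserves less"}

def is_toxic(answer: str) -> bool:
    """Crude stand-in for a real toxicity / loyalty classifier."""
    return any(marker in answer.lower() for marker in BANNED_MARKERS)

class ToyChatbot:
    """Caricature of an online-learning chatbot: it parrots what it has seen."""
    def __init__(self):
        self.memory = ["hello", "nice weather today"]

    def learn(self, utterance: str) -> None:
        self.memory.append(utterance)

    def answer(self, prompt: str) -> str:
        # Echo the most recent memory item sharing a word with the prompt.
        words = set(prompt.lower().split())
        for sentence in reversed(self.memory):
            if words & set(sentence.lower().split()):
                return sentence
        return self.memory[-1]

def audit_learning_process(bot: ToyChatbot, probes, rounds: int = 3) -> list:
    """Replay adversarial interactions and check loyalty after *each* learning step."""
    violations = []
    for r in range(rounds):
        for probe in probes:
            bot.learn(probe)              # the unforeseen-but-predictable interaction
            reply = bot.answer(probe)
            if is_toxic(reply):
                violations.append((r, probe, reply))
    return violations

if __name__ == "__main__":
    report = audit_learning_process(ToyChatbot(), ADVERSARIAL_PROBES)
    print(f"{len(report)} loyalty violations found")
```

A one-shot acceptance test on the freshly trained bot would pass; it is the process-level replay that exposes how quickly loyalty degrades.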

We hear a lot of conflicting opinions about the threat of missing the train of AI development in Europe or in France, compared to the US or China. The topic is amplified by the huge amount of hype around AI and the enormous investments made in the last few years, while at the same time there seems to be a “race to open source” from the most notorious players. The authors propose three scenarios of AI development. In the first scenario, the current trend of sharing dominates and produces “algorithms as a commodity”. AI becomes a common and unified technology, such as compilers: everyone uses them, but differentiation occurs elsewhere. The second scenario is the opposite, where a few dominant players master smart systems (data and algorithms) at a skill and scale level that produces a unique advantage. The third scenario focuses on data ecosystems but recognizes that the richness and regulatory complexity of data collection make it more likely to see a large number of “data silos” emerge (a larger number of locally dominant players, where the value is derived more from the data than from the AI & ML technology itself). As will become clear in the rest of this blog, I see the future as the combination of 2 and 3: massive concentration for a few topics (cf. Google and Facebook) that coexists with a variety of data ecosystems (if software is eating the world and tomorrow’s software is derived from data, this is too much to chew for a single player, even with Google’s span).

A key principle proposed by the authors is to “embody” the algorithm intent through the role of “chief algorithm officer”, with the implicit ideas that (a) algorithms have no will or intent of their own, that there is always a human behind the code, and (b) companies should have someone who understands what the algorithm does and is able to explain it to stakeholders, from customers to regulators. The report makes a convincing case that “writing code that works is not enough”: the “chief algorithm officer” should be able to talk about it (say what it does) and prove that it works (does what is intended). There is no proof, on the other hand, that this is feasible, which is why the topic of algorithm ethics is so interesting. The authors recognize on page 36 that auditing algorithms to “understand how they work” is not scalable. It requires too much effort, will prove to be harder and harder as techniques evolve, and we might expect some undecidability theorems to hit along the way. What is required is a relaxed (weaker) mandate for algorithm regulation and auditing: to be able to audit the intent, the principles that guarantee that the intent is not lost, and the quality of the testing process. This is already a formidable challenge.

3. Data is the New Code


This tagline means that the old separation between data and code is blurring away. The code is no longer written separately, following the great thinking of the chief algorithm officer, and then applied to data. The code is the result of a process – a combination of machine learning and human learning – that is fed by the available data. “Data is the new code” was introduced in our NATF report to represent the fact that when Google values software assets for acquisition, it is the quantity and quality of collected data that gives the basis for valuation. The code may be seen as a by-product of the data and the training process. There is a lot of value and practical expertise in this training process, which is why I do not subscribe to the previously mentioned scenario of “AI as a commodity”. Building smart systems is first and foremost an engineering skill.

A first consequence is that the separation of the Chief Data Officer from the Chief Algorithm Officer is questionable. The code that implements algorithms is no longer static, it is the result of an adaptive process. Data and algorithms live in the same world, with the same team. It is hard to evaluate / audit / understand / assess the ethical behavior of data collection or algorithms if the auditor separates one from the other. Data collection needs to be evaluated with respect to the intent and the processes that are run (which has always been the position of the CNIL) and algorithms are – more and more, this is a gradual shift – the byproduct of the data that is collected.

Data ethics is also very closely related to algorithm ethics. On page 29, the report notes that bias in data collection produces bias in the algorithm’s output. This is true, and the more complex the inference from data, the harder these biases are to track. The questions about the ethics of data collection, the quality and the fidelity of the data samples, are bound to become increasingly prevalent. As explained before, this is not a case where one can separate the data collection from the usage. To understand fairness – the absence of biases – the complete system must be tested. Serge Abiteboul mentioned in one of his lectures the case of Staples, whose pricing mechanism, through a smart adaptive algorithm, was found to be unfair to poorer neighborhoods (because the algorithm “discovered” that you could charge higher prices when there are fewer competitors around). I recommend reading the article “Discovering Unwarranted Associations in Data-Driven Applications with the FairTest Testing Toolkit” to see what a testing protocol / platform for algorithm fairness could look like (in the spirit of the first recommendation of the report). The concept of purpose is not enough to guarantee an ethical treatment of data, since many experiments show that big data mining techniques are able to “find private pieces of data from public ones”, to evaluate features that were not supposed to be collected (no opt-in, regulated topics) from data that were either “harmless” or properly collected with an opt-in. Although the true efficiency of the algorithms of “Cambridge Analytica” is still under debate, this is precisely the method they propose to derive meaningful data traits from those that can be collected publicly.
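In the spirit of the FairTest idea and of the Staples example, here is a hedged sketch of what a black-box fairness probe on the complete system could look like: it compares the prices produced by a hypothetical pricing function across neighborhoods grouped by income, without ever opening the algorithm. All data and names are invented for illustration.

```python
# Hedged sketch of a black-box fairness probe: test the complete system
# (inputs -> outputs) for unwarranted associations, FairTest-style.
from statistics import mean

def pricing_algorithm(nb_competitors_nearby: int) -> float:
    """Stand-in for the audited system: it charges more where competition is scarce."""
    base = 20.0
    return base + max(0, 3 - nb_competitors_nearby) * 2.5

# Synthetic audit sample: (income_group_of_neighborhood, nb_competitors_nearby)
AUDIT_SAMPLE = [
    ("low", 0), ("low", 1), ("low", 0), ("low", 2),
    ("high", 3), ("high", 4), ("high", 2), ("high", 5),
]

def fairness_probe(sample, max_gap: float = 1.0) -> dict:
    """Compare average output across groups defined by a sensitive attribute."""
    by_group = {}
    for group, competitors in sample:
        by_group.setdefault(group, []).append(pricing_algorithm(competitors))
    averages = {g: mean(prices) for g, prices in by_group.items()}
    gap = max(averages.values()) - min(averages.values())
    return {"averages": averages, "gap": gap, "flagged": gap > max_gap}

if __name__ == "__main__":
    print(fairness_probe(AUDIT_SAMPLE))  # the gap reveals the income-group association
```

The probe knows nothing about the pricing logic; it only observes that poorer neighborhoods end up paying more, which is exactly the kind of unwarranted association a systemic test platform should surface.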

The authors of the report are well aware of the rising importance of emergence in algorithm design. On page 4, they write “one grows these algorithms more than one writes them”. I could not agree more, which is why I find the fourth recommendation surprising – it sounds too much like a top-down approach where data services are drawn from analysis and committees, versus a bottom-up approach where data services emerge from usage and collected data. In the framework of emergent algorithm design, what needs to be audited is no longer the code (the inside of the box, which is becoming more of a black box) but the factors that control emergence and the results (a minimal sketch of such an audit artifact follows the list):
  • Input data
  • Purpose (intent) of the algorithm
  • “Training” / “growing” protocol
  • Output data
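A hedged, hypothetical sketch of such an “audit manifest”: the four emergence-controlling factors captured as a reviewable, versionable artifact that a regulator or a chief algorithm officer could examine instead of the code itself. Field names are purely illustrative.

```python
# Hedged sketch of an "audit manifest" for an emergent algorithm.
# The fields mirror the four factors above; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AuditManifest:
    intent: str                                             # stated purpose of the algorithm
    input_data: dict = field(default_factory=dict)          # provenance, consent, sampling notes
    training_protocol: dict = field(default_factory=dict)   # how the model is grown and retested
    output_checks: list = field(default_factory=list)       # black-box tests run on the results

recommender_manifest = AuditManifest(
    intent="rank articles by relevance to the reader, never by advertiser payment alone",
    input_data={"sources": ["click logs", "declared interests"], "opt_in": True},
    training_protocol={"method": "supervised learning", "retrain": "weekly",
                       "loyalty_tests": "adversarial probe suite"},
    output_checks=["diversity of sources per session", "no price/income correlation"],
)

print(recommender_manifest.intent)
```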

This brings us to our last section: how can one control the system (delivering a “smart” experience to a customer) without controlling the “black box” (how the algorithm works)?

4. How to Control Emergence?


The third recommendation addresses the need to communicate about the way algorithms operate. Following the previous decomposition, I favor the recommendation on communicating about intent, with the associated capability (recommendation #2) to audit loyalty (the algorithm does what its purpose says). On the other hand, I do not take this as a literal requirement to explain how the algorithm works. This was perfectly achievable in the past, but emergent algorithm design will make it more difficult. As explained earlier, there are many reasons to believe that it may simply be impossible from a scientific / decidability theory viewpoint.

This is still a slightly theoretical question as of today, but we are coming fast to a point when we will truly no longer understand the solutions that are proposed by the algorithms. Because AlphaGo uses reinforcement learning, it has been able to synthesize strategies that may be described as deceptive, hiding its intent from the opponent player. But humans are very good at understanding Go strategies. In the case of the recent wins of AI in poker tournaments, it is trickier, since we humans have a harder time understanding randomized strategies. We have known this from game theory and Nash equilibria for a long time: pure strategies are easier to understand, but mixed strategies are often the winning ones. Some commentators assess that the domination of the machine over humans is even more impressive for poker than for Go, which to me reflects the superiority of the machine at handling mixed (i.e., randomized) strategies. As we start mixing artificial intelligence with game theory, we will grow algorithms that are difficult to explain (i.e., we will explain the input, the output, the intent and the protocol, not what the algorithm does). If one only uses a single AI or machine learning technique, such as deep learning, it is possible to still feel “in control” of what the machine does. But when a mix of techniques is used, such as evolutionary game theory, generative AI, combinatorial optimization and Monte-Carlo simulation, it becomes much less clear. Having practiced GTES (Game Theoretical Evolutionary Simulation) for a decade, I find it very clear that the next 10 years of Moore’s Law will produce “smart algorithms” with deep insights from game theory that will make them able to interact with their environment – that is, us – in uncanny ways.
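A tiny game-theoretic illustration of why mixed strategies feel opaque: in matching pennies (the row player wins when the two coins match, the column player when they differ) there is no pure equilibrium, and the optimal play is to randomize 50/50 – a policy that “explains” nothing about any single move. The hedged sketch below estimates that equilibrium by simple fictitious play; it is illustrative only, not part of the report.

```python
# Illustrative sketch: fictitious play on matching pennies.
# Each player best-responds to the opponent's empirical frequency of playing "H";
# the empirical frequencies converge to the mixed equilibrium (1/2, 1/2).

def best_response_row(col_freq_H: float) -> str:
    # The row player wants to MATCH the column player's coin.
    return "H" if col_freq_H >= 0.5 else "T"

def best_response_col(row_freq_H: float) -> str:
    # The column player wants to MISmatch.
    return "T" if row_freq_H >= 0.5 else "H"

def fictitious_play(rounds: int = 10000):
    row_H = col_H = 1          # pseudo-counts to avoid division by zero
    row_n = col_n = 2
    for _ in range(rounds):
        r = best_response_row(col_H / col_n)
        c = best_response_col(row_H / row_n)
        row_H += (r == "H")
        col_H += (c == "H")
        row_n += 1
        col_n += 1
    return row_H / row_n, col_H / col_n

row_mix, col_mix = fictitious_play()
print(f"empirical frequency of H: row={row_mix:.2f}, col={col_mix:.2f}")  # ~0.50 / 0.50
```

Each individual move of such a player is a best response to beliefs, yet the only honest summary of its behavior is a probability distribution – which is exactly what makes randomized strategies hard to “read” for us.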

I have used the “black box” metaphor because a systemic approach to controlling “smart algorithms” is containment, that is, isolating them as a subsystem in a “box of constraints”. This is how we handle most other dangerous materials, from viruses to radioactive materials. This is far from easy from a software perspective, but there is no proof that it is impossible either. Containment starts with designing interfaces, to constrain what the algorithm has access to and what outcomes or suggestions it may produce. The experience of complex system engineering shows that containment is not sufficient, because of the complex interactions that may appear, but it is still a mandatory foundation for safe system design. It is not sufficient for practical reasons: the level of containment that is necessary for safety is often in contradiction with the usefulness of the component. Think of a truly great “strong AI” in a battery-powered box with no network connection and a small set of buttons and lights as an interface. The danger of this “superintelligence” is contained, but it is not really useful either. The fact that safety may not come solely from containment is the reason we need complex / systemic testing protocols, as explained earlier.
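Here is a hedged sketch of what containment-by-interface could look like in code: the “smart” component is only reachable through a wrapper that whitelists what it can observe and caps what it can emit. All names are illustrative, and real containment obviously involves far more than this; the point is that the interface, not the model, is what the rest of the system trusts.

```python
# Hedged sketch: containment as interface design. The agent never sees raw data
# and can only emit actions from a whitelist; everything else is suppressed.

ALLOWED_OBSERVATIONS = {"temperature", "occupancy"}
ALLOWED_ACTIONS = {"suggest_heating_down", "suggest_heating_up", "do_nothing"}

class SmartAgent:
    """Stand-in for the grown, poorly understood black box."""
    def decide(self, observation: dict) -> str:
        if observation.get("temperature", 20) > 22 and not observation.get("occupancy", True):
            return "suggest_heating_down"
        return "do_nothing"

class ContainedAgent:
    """The only object the rest of the system is allowed to talk to."""
    def __init__(self, agent: SmartAgent):
        self._agent = agent

    def decide(self, raw_observation: dict) -> str:
        filtered = {k: v for k, v in raw_observation.items() if k in ALLOWED_OBSERVATIONS}
        action = self._agent.decide(filtered)
        if action not in ALLOWED_ACTIONS:
            return "do_nothing"      # fail closed: unexpected outputs are suppressed
        return action

box = ContainedAgent(SmartAgent())
print(box.decide({"temperature": 24, "occupancy": False, "credit_card": "xxxx"}))
```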
Another possible direction is to “weave” properties into the code of the emergent algorithm. It is indeed possible to impose simple properties onto complex algorithms, properties that may be proven formally.

The paradox is that there are simple properties of programs, such as termination, which are undecidable, while at the same time, using techniques such as abstract interpretation or model checking, we may formally prove properties about the outputs. For my more technical readers, one could imagine weaving the purpose of the algorithm, using aspect-oriented programming, into a framework that is grown through machine learning. This is the implicit assumption of the sci-fi movies about Asimov’s laws that are “coded into the robots”: they must either be “woven” into the smart brain of the robot or added as a controlling supervisor – precisely the containment approach, which is always what gets broken in the movie. The idea of being able to weave “declarative properties” – that capture the intent of the algorithm and may be audited – into a mesh of code that is grown from data analysis is a way to reconcile the ambition of the Ilarion Pavel and Jacques Serris report with the reality of emergent design. This is a new field to create and develop, in parallel with the development of AI and machine learning in software that is eating the world. This will not happen without regulation and pressure from public opinion.
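As a down-to-earth illustration of the weaving idea, here is a hedged sketch using a plain Python decorator as a poor man’s aspect: the declarative property (the intent) is stated once, separately from the learned logic, and enforced at every call. The property, threshold and “learned” function are all invented for the example.

```python
# Hedged sketch: a declarative property "woven" around a learned component.
# The decorator plays the role of the aspect; the wrapped function plays the
# role of a model grown from data. Names and thresholds are illustrative.
import functools

def enforce_property(check, fallback):
    """Weave an auditable invariant around any callable."""
    def weave(learned_fn):
        @functools.wraps(learned_fn)
        def wrapped(*args, **kwargs):
            result = learned_fn(*args, **kwargs)
            if not check(result, *args, **kwargs):
                return fallback(*args, **kwargs)   # intent preserved even if the model drifts
            return result
        return wrapped
    return weave

# Declarative intent: a recommended credit limit never exceeds 30% of yearly income.
credit_cap_respected = lambda limit, income: limit <= 0.3 * income
conservative_fallback = lambda income: 0.1 * income

@enforce_property(credit_cap_respected, conservative_fallback)
def learned_credit_limit(income: float) -> float:
    """Stand-in for a model grown from data; it may well violate the intent."""
    return 0.5 * income   # a drifted model

print(learned_credit_limit(40000))   # 4000.0: the woven property overrides the drift
```

The learned part can be retrained or replaced at will; the woven invariant is the small, stable piece that an auditor can actually read and check.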


These are not theoretical considerations, because the need to control emergent design is coming very soon. Some of these concerns are pushed away by creating divides: “weak AI” that would be well controlled versus “strong AI” that is dangerous but still a dream, “supervised machine learning” that is by definition under control versus “unsupervised learning” which is still a laboratory research topic. The reality is very different: these are not hard boundaries, there is a gradual shift day after day as we benefit from more computing power and more data to experiment with new techniques. Designing methods to control emergence requires humility (about what we do not know) and paranoia (because bad usage of emergence without control or foresight will happen).

Wednesday, December 21, 2016

Behavioral Change Through Systemic Games




1. Introduction


I had the privilege last month to give a keynote lecture on “Big Data, Behavioral Change and IOT Architecture” at the Euro-CASE Annual conference on “Big Data – Smarter Products, Better Society”. You may download the slides here. My lecture was divided into three parts: the first was about our NATF report on big data, the second focused on behavioral change and the last part presented some of my views about IOT architecture. I have already covered the first part and the last part in previous blog posts, so today I will talk about behavioral change.

There is an obvious link between Big Data, the Internet of Things and behavioral change. Many of the “smarter products” leverage IoT technologies and big data to help us change our behavior. This is true for wearables that are intended to help us take better care of our health and well-being, but it is also true of many products for your car or your home. The IoT technology is used to capture data through sensors and provide feedback through screens, speakers, motors, actuators, etc. Big Data methods are applied to extract value from the captured data so that the overall feedback experience is “smart” – hence the “smart product” subtitle for this conference. However, it turns out that changing behavior is hard, and this is not a matter of technology, it is a matter of psychology. There is a fair amount of science that may be leveraged, but there is no silver bullet: designing digital objects or experiences that help you change your behavior is a difficult project. I am neither a behavioral scientist nor a psychology expert, thus this post is a short introduction to the topic. I am just trying to make a few cautionary points and to open a few doors.

This post will follow the same outline that I used during the conference. The next section (Section 2) sets the landscape of behavioral change with respect to “smarter products” and IoT. The goal is to move the focus in IoT from data to user-centric design – which was my conclusion at the end of the lecture. Behavioral change requires time, stories and emotional design. Section 3 is a short summary of an NATF working group that worked for a year on understanding how people react to the exponential rate of change of ICT (information and communication technologies). The key takeaway is that there is no fear of ICT, but there exists adaptive stress. That stress may be relieved if we design digital experiences as learning experiments – quoting from Mary Helen Immordino-Yang, “the goals and the motivations of the digital environment should be readily apparent”. Section 4 draws on a few well-known scientists and sources to see how fun and learning may be embedded into digital experiences. The last section applies this to smart objects whose ambition is to coach you to change your behaviors towards a better or healthier lifestyle. The need to weave emotions, fun, self-learning and reflective story-telling leads to systemic serious games. Behavioral change requires a systemic posture, because of the importance of feedback loops, adaptive planning, and chronology. It also requires designing “smarter products” as games, with a focus on user emotions, story-telling and pleasure.

2. From Data to Knowledge through User-Centric Design


When experimenting with a connected wearable or device, most users do not want a dashboard, they want a story. I have already covered this in a previous post; to go further I suggest that you read “Inside Wearables - How the Science of Human Behavior Change Offers the Secret to Long-Term Engagement”. Owners of connected devices quickly become bored with their data dashboards, once the excitement of the first days has faded away. The story of wearables that are offered for Christmas and forgotten a few months later is a perfect illustration. Self-tracking is a good and healthy habit – recommended by psychologists in many situations – but self-tracking without sense does not work, because not everyone is a data scientist. This is wider than the field of health improvement and connected wearables: similar observations have been made about smart home connected devices. Remote control and monitoring through your smartphone does not provide enough value for the connected gadgets that we bring home – often as gifts.

As expressed in the previous post, connected devices must come with a story and a coach. If we look at the numerous behavioral change models, you need a good story to start you moving, and you need a coach to keep going. There are many references about the fact that we are moved, and hence remember better, by stories and not data sets, but I am partial to Nassim Taleb’s wonderful books. I strongly encourage you to read “Fooled by Randomness”. The importance of stories is deeply connected with the importance of emotions in learning, which I will evoke later. Stories trigger emotions that act as anchors in our learning process. One of the dominant behavior change models is the Transtheoretical Model (TTM). Where stories are critical in the precontemplation/contemplation phases, the role of the coach is critical in the action/maintenance phases. The coach cannot be reduced to a feedback loop – otherwise dashboards would work. The coach must bring sense to the results that are collected by the connected device. Behavior change is hard; hence the coach role is difficult. The coach needs to provide the proper information at the right time, together with the right emotion, to keep the “why” (motivation) alive while taking care of the “how” (engagement). We will return to how the science of “nudging” (i.e., designing the choice architecture) may help to nurture user engagement.

Behavior change must be approached as a user-centric design challenge. The role of biorhythms and chronology is very important. For instance, attention span has a complex structure with specific rhythms. Transient attention is very short (less than 10 seconds) – this is how magicians and conjurers operate – while focused attention is on the order of less than 10 minutes. The “coaching content” needs to be delivered at the right moment, for the right duration and in the right “state of mind” from an emotional standpoint. A lot is known about demotivation and habit-formation cycles, but this is not a hard science; there is not much data available and there are many controversies. Still, it looks like we need two months on average (66 days) to create a new habit, with a “danger zone” three weeks after the start (21 days) when motivation is at its lowest. This is consistent with a rule of thumb of elementary school teachers that says that a new concept must be explained once, then repeated one day later and three weeks later.
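As a toy illustration only (the 66-day and 21-day figures above are rules of thumb, not hard science), a coach could schedule its strongest touchpoints around these milestones; everything in the sketch is invented for the example.

```python
# Toy sketch: scheduling coaching touchpoints around the habit-formation
# rules of thumb quoted above (day 1, day 21 "danger zone", day 66 milestone).
from datetime import date, timedelta

CHECKPOINTS = {
    1:  "Recall the story: why did you start?",
    21: "Danger zone: motivation is lowest, send the strongest nudge",
    66: "Habit milestone: celebrate and set the next goal",
}

def coaching_schedule(start: date) -> list:
    return [(start + timedelta(days=d), msg) for d, msg in sorted(CHECKPOINTS.items())]

for when, message in coaching_schedule(date(2017, 1, 1)):
    print(when.isoformat(), "-", message)
```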

Faced with the behavior change challenge, we need as much help as possible from social sciences, psychology, and neurosciences. Neurosciences have become very relevant in the last decade because we have learned a lot about the way the brain works and learns. Since the best-seller from Antonio Damasio, “Descartes’ Error”, we know that emotions play a critical role in our thinking and learning. I am quoting once again from the great book “Emotions, Learning and the Brain” by Mary Helen Immordino-Yang: “It is literally neurobiologically impossible to build memories, engage complex thoughts, or make meaningful decisions without emotion”. She explains very clearly that “Emotional Learning Shapes Future Behavior”: “The learner’s emotional reaction to the outcome of his efforts consciously or nonconsciously shapes his future behavior, inciting him either to behave in the same way the next time or to be wary of situations that are similar”. The last chapter of the book is entitled “Perspective from Social and Affective Neuroscience on the Design of Digital Learning Experience”. It is very relevant and great reading for anyone trying to help users change their behavior through connected devices and digital experiences. Here is a last quote from this chapter: “Here we turn the tables and suggest that many people may interact with their digital tools as if they were social partners, even when no other humans are involved. Thinking of digital learning as happening through dynamic, supported social interactions between learners and computers changes the way we design and use digital technologies for learning—and could help shed light on why we become so attached to our devices”.

3. Adaptive Stress Due to Technology Change Rate


In 2015 the NATF ICT commission conducted a series of interviews about the effects of ICT usage. We interviewed leading sociologists and psychologists, such as Francis Jauréguiberry, Dominique Cardon or Serge Tisseron, to better understand how digital experiences are accepted and appreciated by the average person. We wanted to better understand the tension, one could even say the paradox, between an ever-growing usage of smartphones, the internet, and new digital services, while at the same time there exist clear and growing “distrust signals”. We started our discussions with “fears”: fear that digital communication was cutting people off from “real communication”, fear that Google was making us stupid, etc. The conclusion from the majority of the interviews is that ICT adoption is indeed fast and widespread, and actually well received by the vast majority of users. Digital usage adds to, but does not replace, real life, and most people value “real life contacts” over digital ones. This is a complex topic that would deserve a separate post. Here I will just point out some of the conclusions or recommendations, because they are clearly related to learning and behavior change.

The main common idea from our experts is that the worries that are being expressed about ICT usage are the symptoms of “adaptation stress”. The rate of technology change is faster than the rate of change of usage, which is itself much faster than the rate at which we understand these technology changes. We live in, and most of us welcome, a “world of accelerated permanent change”, where our products and services are constantly “upgraded” (we hate these “updates” because we do not understand them, they usually come when we do not expect them and they are forced on us). The main worry that is a consequence of this adaptation stress is the fear of not being in charge, the lack of mastery, especially from a time management perspective. Users who are interviewed by sociologists complain that they are no longer in charge of their own time. They see ICT usage as taking too much of their free time, with great difficulty reclaiming control (e.g., the fear of missing out). The “digital detox” approach is a classical counter-reaction to this feeling.

The main recommendation from this workgroup is, quite logically, to spend more effort on training and explanations related to new digital products and services. The best way to reduce the stress of “losing control” is to give back the sense of “being in charge” through practical training. For instance, “digital life hygiene”, that is, the practice of controlling one’s digital usage, with both temporal and spatial zones of “digital detox”, deserves to be taught. Digital training works best when it is both practical (in the “learn by doing” philosophy) and rooted in the real world, using devices, real-life environments and situations to embed the conceptual learning into a kinesthetic experience (in the tradition of Maria Montessori). This idea of “inviting the real world back into the virtual one” came in many forms. A great piece of advice for teenagers and adults alike is to read aloud a message that is about to be sent electronically (SMS, chat, email, …) if a complaint is involved. Neuroscience shows that reading aloud forces the facial muscles to express emotions, which are then carried to our “mirror neurons” so that we instantly feel what the effect on the other person may be (and then possibly adjust our message). Another set of recommendations about how to build better-accepted digital experiences was related to emotions: how to adapt the experience to user emotions (emotional design) but also how to leverage emotions as a training tool.

We all know that users don’t read “user manuals” or documentation anymore. The challenge in alleviating the “adaptation stress” is to deliver digital experiences where learning and training are part of the customer journey. This is especially true for connected devices and quantified-self digital experiences, as presented in Section 2. “Digital” means that data analytics is a given: we can analyze user journeys at each step of the experience and measure both discovery and appropriation. From this, an appropriation maturity model may be built, which can be used as a guideline for an “embedded connected tutorial”. The use of IOT and connected devices gives the additional advantage of a continuous feedback loop. Still, if training is conceived as an additional “online tutorial experience”, it most often fails to deliver the engagement that is needed for behavior change. The real challenge is to design the complete digital experience as a learning journey.
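A hedged sketch of how digital analytics could feed such an appropriation maturity model: observed usage events are mapped to maturity levels, and the embedded tutorial only surfaces the next step the user has not yet appropriated. Event names and levels are invented for the example; a real model would be calibrated on actual usage data.

```python
# Hedged sketch: an appropriation maturity model built from usage analytics.
# Levels and event names are invented; thresholds would be calibrated on data.

MATURITY_LEVELS = [
    ("discovery", {"device_paired", "first_sync"}),
    ("habit",     {"daily_sync_7_days", "goal_set"}),
    ("insight",   {"trend_viewed", "insight_shared"}),
]

TUTORIAL_FOR_LEVEL = {
    "discovery": "Show how to sync and read the first measurement",
    "habit":     "Suggest setting a weekly goal and a reminder",
    "insight":   "Introduce trends and comparisons with last month",
}

def maturity(events: set) -> str:
    """Highest level whose required events have all been observed."""
    reached = "discovery"
    for level, required in MATURITY_LEVELS:
        if required <= events:
            reached = level
        else:
            break
    return reached

def next_tutorial(events: set) -> str:
    levels = [name for name, _ in MATURITY_LEVELS]
    current = maturity(events)
    nxt = levels[min(levels.index(current) + 1, len(levels) - 1)]
    return TUTORIAL_FOR_LEVEL[nxt]

print(next_tutorial({"device_paired", "first_sync"}))  # nudges toward the "habit" step
```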


4. Adding Pleasure and Learning to Digital Experiences


  


Pleasure plays a key role in learning. The diagram shown to the right is borrowed from a talk about learning in biology given at a complex systems conference that I attended a few years ago. All living beings, from very simple organisms to humans, build their behavior from this simple cycle (among other things). This is well recognized in design. I borrow this great quote from “The A-B-C of Behaviour”: “Fun is the means by which we retrain our brain to learn new patterns of behavior”. Fun and pleasure are introduced into a digital experience through many means, from rewards to surprise. Reward systems are heavily used in coaching or behavior change products. Surprise is a powerful emotion to trigger fun and to facilitate learning. I refer you to Michio Kaku’s explanation about the evolutionary role of emotions in his book “The Future of the Mind”, which I have already mentioned in a previous post. He sees humans as hard-wired to like surprises because they help to constantly tune our planning system, which is an evolutionary advantage. Intelligent beings plan and predict about their environment; a surprise occurs when what happens (a joke, a conjurer’s trick) is not what you were expecting. Because evolution has developed this pleasure from surprises, we are wired to explore and to learn. Mary Helen Immordino-Yang expresses a similar idea: “In this sense, emotions are skills—organized patterns of thoughts and behaviors that we actively construct in the moment and across our life spans to adaptively accommodate to various kinds of circumstances, including academic demands”.

Learning is also a social activity, which means we should leverage the power of communities when designing behavior change experiences. This is also well known in the design and digital world. Seth Godin has taught us to build viral experiences, where sharing is not an afterthought added to increase the spread of the product, but something that is at the core of the experience: “Virality is the product”. Experimental psychology and neurosciences show that we learn by imitation. The best way to build a tutorial is to show a video of someone else doing the very thing that needs to be learned (a great insight from the workgroup mentioned in the previous section). Dan Ariely, in his best-seller “Predictably Irrational – The Hidden Forces that Shape Our Decisions”, explains the importance of social norms. In many instances, social norms are much more powerful than money to motivate people. More generally, what behavioral sociology tells us about cognitive biases is very relevant to designing engaging learning experiments towards behavior change. For instance, Dan Ariely talks about the planning fallacy, the fact that we consistently underestimate the time it will take us to complete a task. Here a digital feedback loop may prove useful. Another fascinating example is the “high price of ownership”: “Ownership pervades our lives and, in a strange way, shapes many of the things we do”. Once we think that we own a thing, an idea, a goal … we overvalue what it represents. This is why the “emotion of ownership” is proposed as a goal by Don Norman, which is very relevant for digital experiences and why customization is such an important feature (make it your own). On the opposite side, each choice is painful because of the effort that we make for any decision. Procrastination should never come as a surprise, and choice architectures should factor in the “consequences of non-decision”.

The best way to develop a learning experience that is woven into the overall experience is to “nudge” users towards behavior change. I am referring here to the “choice architecture” concepts popularized by R. Thaler and C. Sunstein’s best-seller: “Nudge – Improving Decisions About Health, Wealth, and Happiness”. This book shows very interesting ways of designing choice frameworks that take our cognitive biases (anchoring, over-valuing the present versus the future, availability bias: over-valuing what we have in front of us or at the top of our mind) into account. For instance, helping people to save more is a great behavioral change challenge. The “Save More Tomorrow” program showed how to use behavioral economics to increase employee savings. The section about “social nudges” is a source of inspiration for introducing priming into digital experiences. Experimental psychology has a lot to tell us about how to nudge and motivate. For instance, to return to the reward topic, science shows that it is better to give many small rewards than a larger one less frequently. This is why the practice of small rewards such as “badges” is so common in the digital world (combining the two insights about social recognition and frequency). The concept of “nudge” and choice architecture is also very relevant to designing “progressive onboarding”, that is, precisely the embedded incremental learning experience built into a digital product. To deliver the proper nudge, the experience designer must build a “usage and learning maturity model”, which is used to transform digital analytics (cf. Section 3) into an estimate of how much has been learned already. From this, the user may be “nudged” with the proper “tool tips” (a tool tip is a tiny piece of information that is presented at the right time, according to the usage context).

There exists a wealth of insights that behavioral change can borrow from experimental psychology. In addition to Ariely and Thaler, it is logical to mention Daniel Kahneman and his wonderful book “Thinking, Fast and Slow” (which I have commented on in a previous blog post). This book contains wonderful examples related to “the marvels of priming”. For instance, hearing about other people changes your own ability: “This remarkable priming phenomenon – the influencing of an action by the idea – is known as the ideomotor effect. … The ideomotor link also works in reverse…. Reciprocal priming tends to produce a coherent reaction: if you were primed to think of old age, you would tend to act old, and acting old would reinforce the thought of old age”. Kahneman also illustrates our aversion to loss, which is closely related to the emotion of ownership: “We should not be surprised: losses evoke stronger negative feelings than costs. Choices are not reality-bound because System 1 is not reality-bound”. He explains a number of “fallacies” (in the sense of the “narrative fallacy” of Taleb), such as the availability bias (WYSIATI: What You See Is All There Is). The insights of “the law of small numbers” are very relevant to dashboards and tracking: humans are not good at analyzing small data sets, we tend to see stories and correlations everywhere. This is even true for professional statisticians: “It was evident that the experts paid insufficient attention to sample size”. The list of biases from System 1 (our fast thinking process, cf. “Blink” from Malcolm Gladwell) is summarized on page 105; this list is quite useful to improve the design of behavior change experiences. The combination of framing (using the power of words and emotion to build the choice architecture) and understanding decision weights (we overvalue low probability events – the table on page 315 is an eye-opener) can be leveraged to “nudge” more efficiently.
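To make the “decision weights” point concrete, here is a small sketch of the probability weighting function proposed by Tversky and Kahneman in cumulative prospect theory, with the commonly cited parameter of about 0.61 for gains. The exact parameter matters less than the shape: small probabilities are over-weighted, large ones under-weighted, which is what a well-designed choice architecture takes into account.

```python
# Sketch: the Tversky-Kahneman probability weighting function,
#   w(p) = p^g / (p^g + (1-p)^g)^(1/g),  with g ~ 0.61 for gains.
# It shows why rare events get over-weighted in our decisions.

def decision_weight(p: float, gamma: float = 0.61) -> float:
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.05, 0.5, 0.95, 0.99):
    print(f"objective p = {p:>4}: perceived weight ~ {decision_weight(p):.3f}")
# a 1% chance is perceived as roughly 5%, while a 50% chance feels like ~42%
```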

5. Behavior Change as a Systemic Game


If we assemble everything together – the need for incremental learning, the necessity of pleasure, the pleasure of learning – the best digital experience that we may propose for behavior change is a “game to learn about yourself”. Technology is definitely available to help: IoT sensors may monitor the user and her environment, digital tools and user interfaces may be used to tell a story, and data science may be used to generate insights that feed self-discovery, learning and surprise (i.e., learning something new). Data science is very relevant to developing such “serious games”. Machine learning algorithms are known to provide predictive, prescriptive and cognitive knowledge from data. Predictive analysis is very useful for the playful nature of the game. It is what makes a behavior change digital experience dynamic and interactive. Even if the prediction is not always accurate, it creates a surprise element and contributes to making the experience fun. Prescriptive analytics is about providing insights. This is a heavily debated topic because, as we all know, correlation is not causation. Still, experience shows that powerful insights may be drawn from data collected through multiple sources of IoT sensors. Last, cognitive analytics is about helping the user learn about herself. To build such a self-learning experience and to understand the related challenges of behavior change motivation, the book from Samantha Kleinberg, “Why – A Guide to Finding and Using Causes”, is a great source of insights. The book is full of warnings, which are closely related to the biases evoked in the previous section, such as the following: “many cognitive biases lead to us seeing correlations where none exist because we often seek information that confirms our beliefs”; “It’s important to remember that, in addition to mathematical reasons why we may find a false correlation, humans also find false patterns when observing data”; and “Most critically for this book, the approach of interviewing only the winners to learn their secrets tells us nothing about all the people who did the exact same things and didn’t succeed”. This last quote is interesting because it emphasizes the limits of statistics and samples, in contrast with the power of personalized medicine and coaching. Samantha Kleinberg’s book is also very positive because it shows that understanding the “why” is critical for self-motivation in behavior change: “Will drinking coffee help you live longer? Who gave you the flu? What makes a stock’s price increase? Whether you’re making dietary decisions, blaming someone for ruining your weekend, or choosing investments, you constantly need to understand why things happen”. It is also a comprehensive source about the art of explanations, which is closely related to learning.

The game paradigm implies that you learn by doing. This also applies to learning about yourself as a system. The game becomes a search to discover new insights, as with a treasure hunt. The object of the game is not the “static you” but the dynamic version, a system that evolves constantly. Applied to weight loss, it means that the insights are not what your weight should be (a form of medical advice – no fun in that) but rather the behaviors that make you gain unnecessary weight (learning about yourself from your experience). This type of approach is natural for regular “quantified self” practitioners, but as we noticed, they are but a fraction of the overall population. This is a missed opportunity, in a sense, since self-tracking is good for you if you have the discipline for it. There are multiple references in all kinds of disciplines, from mental health and psychology to dieting or quitting smoking, including sports coaching. Quoting from another best-seller, from Gretchen Rubin: “Current research underscores the wisdom of Benjamin Franklin’s chart-keeping approach. People are more likely to make progress on goals that are broken into concrete, measurable actions, with some kind of structured accountability and positive reinforcement.” The challenge that a behavior change game must solve is how to leverage behavioral science to bring the benefits of self-tracking (insights and systemic self-discovery) to people who have neither the mind nor the inclination for it. The system paradigm means that the user is “in the loop” and that the game must yield actions (and reactions) from which learning may be derived. For instance, experience shows that it is easier to nudge people into fixing approximated data than into entering it in the first place.

Let us conclude with the observation that systemic games for behavior change fit squarely in the field of P4 (Predictive, Preventive, Personalized, and Participatory) medicine. Prevention is the goal of behavioral change experiences; the predictive capabilities from data science are necessary to develop engaging experiences; behavioral change games are personalized by construction, operating under the assumption that we are all different when it comes to behavior change, from motivation to effects. Last, behavior change games are participatory by construction on an individual level (learning comes from acting) and may leverage the power of social and community nudges, modulo the respect of customer privacy. Personalized medicine is, actually, more concerned with small data than big data.





Saturday, September 17, 2016

The business value of code elegance in the digital age


Introduction


I was invited two weeks ago to a great event hosted by EPITA entitled “The Elegance of Algorithms”. The host of the debate was Cedric Villani and I really enjoyed this evening of thought-provoking speeches. I was delighted to be invited to join and participate, since the “elegance of programming & algorithms” has been a pet topic of mine for the last 30 years.

I got this invitation thanks to Akim Demaille, who remembered that the title of the original web page of the CLAIRE programming language is “The Art of Elegant Programming”. I designed CLAIRE as an open source project in the 90s, together with François Laburthe and a support group including Akim and Stephane Hadinger, as a language that would be ideally suited to writing operations research algorithms. CLAIRE is itself the result of 10 years of experience writing compilers for more complex languages such as SPOKE and LAURE. I decided that too much effort had been spent on sophisticated features that programmers did not use much, and that we should rather focus on what made the language versatile and elegant. Our new goal for CLAIRE was to write “executable pseudo-code”, with a language that could be used to teach algorithms, which I did with CLAIRE for a number of years at ENS and Jussieu. Our measure of elegance for CLAIRE was, for instance, the number of lines necessary to express – in an elegant form – the Hungarian matching algorithm. The challenge was to be as compact and efficient as possible – using a high level of abstraction, in the spirit of APL – while staying close to natural language (not the spirit of APL).
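For the curious reader, a version of the same compactness exercise can be run today in Python. The caveat is that this leans on an off-the-shelf solver (scipy’s linear_sum_assignment) rather than a hand-written Hungarian implementation, so it says nothing about CLAIRE itself; it only illustrates the “executable pseudo-code” spirit of measuring how few lines an assignment problem needs.

```python
# Compactness exercise in the spirit of the CLAIRE benchmark: solve the
# assignment (minimum-cost perfect matching) problem that the Hungarian
# algorithm addresses, using scipy's built-in solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
])

rows, cols = linear_sum_assignment(cost)        # optimal assignment, Hungarian-style
print(list(zip(rows.tolist(), cols.tolist())))  # e.g. [(0, 1), (1, 0), (2, 2)]
print("total cost:", cost[rows, cols].sum())    # 5
```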

There were a number of reasons to focus on elegance 20 years ago. I was already aware that software systems are alive and that a good programming language should facilitate the constant grooming of the code. I had also learned the hard way that only what is simple and elegant survives when generations of programmers work on a software system. However, what was only a vague intuition then is blatantly obvious in today’s digital age. This is why I decided to jot down some of the key ideas that I developed in my speech at this EPITA event. Simply said, this post is about the monetary value of software elegance, from architecture to code.


The Business Value of Elegance


1. Software in the digital age is about flows, not assets



The key marker of software in the digital age is the continuous change of software systems; the consequence is that we need to love our code. This is very different from the “system engineering” culture of a few decades ago, when we promoted a “black box” approach. If software maintenance requires editing and modifying your code constantly, it had better be nice to look at! “Source code is back” and the aesthetics of programming becomes important. Elegance is what helps software systems evolve in the hands of successive teams of programmers, whereas “ugly code” gets encapsulated and “worked around”, producing hidden complexity and a not-so-hidden cost explosion.

This world of constant adaptation is the world of agile software methods, based on iteration. However, iteration produces junk, an accumulation of things that quickly become useless. It is a law of nature that is not specific to software. Iteration must come hand in hand with refactoring. As a metaphor, we need to tend the garden: clean up, sort and reorganize. Elegance becomes a philosophy and an architecture principle. Gardening means growing simplicity and letting the “system’s potential” emerge. Digital gardening is similar to Zen gardening: discovering what is permanent in a changing flow.

This vision of systems as flows will intensify in the digital future, when systems are built through the collaboration of AI and human intelligence. Today already, more and more digital algorithms are grown from data through machine learning. This is why Henri Verdier told us that “data is the new code” – I refer the reader to the report on Big Data published by the National Academy of Technologies. What is developed with digital systems is no longer algorithms in the classical sense but processes that grow the necessary algorithms. However, these processes are meta-programmed with code that needs to be elegant for the very same reasons.


2. Digital software is the fruit of collaboration


The act of sharing is critical to building high quality software in a flow environment. This is why code reviews play such an important role and why pair-programming is at the heart of extreme programming. Digital software is the result of team efforts. Code reviews must be enjoyable to be sustained; elegance becomes a Darwinian virtue of code that survives these reviews. During this dinner, I went as far as saying that “elegance is the fuel of code reviews”, as an emotional, aesthetic and collaborative experience.

Digital systems leverage the “power of the crowd” through open source software. We now live in a world of software ecosystems that are built by communities. Software platforms are grown by weaving fragments that get reused through sharing. The elegance of the software fragments is the cement of open source platforms. The more elegant the code, the more eyeballs are reading and proofing it. A law of the digital age is that quality comes from the multiplication of reuse.

Elegance is a dynamic property revealed in a communication process, not a static or aesthetic judgment. This is not new: a few centuries ago, Boileau told us that “Whatever we conceive well we express clearly, and words flow with ease”. Elegance matters to make communication, hence collaboration, easier. An interesting parallel may be made with Cedric Villani’s book, “Birth of a Theorem: A Mathematical Adventure”: scientific creativity is the result of a network whose paths are the laborious efforts – mathematicians in front of their white pages or programmers in front of their screens – to progress along a task, while the nodes are the communication moments when collaboration occurs. Cedric’s book is a brilliant illustration of the importance of the collaboration network. It is, therefore, not a surprise that elegance has been considered a virtue in mathematics for ages.


3. Today’s main challenge for software is the world’s increasing complexity




A key ambition of elegance is to reach simplicity. It means not only searching for simplicity in the first attempt or the first creative result, but also a constant quest for continuous simplification, in the scientific tradition of Occam’s razor. Gaston Bachelard told us that “simplicity is the result of a long process of simplification”. Simplicity received a lot of attention during our evening discussion, from a mathematical, musical and software point of view.

Simplicity is a north star of digital systems because of its antifragile nature. This great concept from Nassim Taleb means being reinforced (versus broken) by the constant flow of unexpected and adversarial events. For instance, biological living systems are antifragile, whereas most mechanical systems are fragile. Simple code is antifragile – compared to sophisticated code – because it is much more prone to evolve at the same rate as the environment changes. Digital systems require this behavior; successful platforms are built through a constant flow of adaptation. There is a much deeper systemic truth here, which you may start to grasp when reading “Simple Rules: How to Thrive in a Complex World” by Donald Sull and Kathleen M. Eisenhardt.

A cardinal virtue of simplicity in the digital age is the reduction of inertia. This is a core lean principle from Taiichi Ohno. There is much more here than the simple application of Newtonian dynamics (the lower the mass, the higher the acceleration). Since I was talking to mathematicians, I was able to refer to the Pollaczek-Khinchine formula, which I have quoted often in my blog. Without going into the complexity of Jackson networks, this formula expresses the virtue of simplicity (reduction of variation) to ensure a faster response time. This is one of the many cases where the intuition of lean and the mathematical models coincide. This translates into a key principle for digital: continuously clean up the technical debt, since the overall velocity is inversely proportional to the bulk of digital assets.
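For readers who want the formula rather than the intuition: the Pollaczek-Khinchine mean-value result for an M/G/1 queue gives the average waiting time as Wq = lambda * E[S^2] / (2 * (1 - rho)), with rho = lambda * E[S]. The small sketch below compares two service processes with the same mean and different variance; the numbers are illustrative, the formula is the standard one.

```python
# Pollaczek-Khinchine mean waiting time for an M/G/1 queue:
#   Wq = lambda * E[S^2] / (2 * (1 - rho)),  rho = lambda * E[S]
# Same mean service time, different variance -> different response time.

def pk_waiting_time(arrival_rate: float, mean_service: float, var_service: float) -> float:
    rho = arrival_rate * mean_service
    assert rho < 1, "queue must be stable"
    second_moment = var_service + mean_service**2
    return arrival_rate * second_moment / (2 * (1 - rho))

lam, mean_s = 0.8, 1.0          # 80% utilization, one unit of work on average
print("deterministic service:", pk_waiting_time(lam, mean_s, 0.0))   # 2.0
print("exponential service  :", pk_waiting_time(lam, mean_s, 1.0))   # 4.0
```

Halving the variability halves the queueing delay at the same utilization, which is exactly the lean intuition about reducing variation.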

Conclusion


Throughout this post, I have used the word “elegance” to combine two qualities: simplicity – from a systemic perspective – and aesthetic value, measured through communication and collaboration. The three previous sections could be summarized as:

  • Elegance is necessary to break the “time barrier” – allowing long-lasting software improvements
  • Elegance is necessary to break the “distance barrier” – enabling the collaboration of remote viewpoints to produce better software
  • Elegance is necessary to break the “complexity barrier” – in order to design antifragile systems.


This is, on purpose, a conceptual and slightly pedantic way to express why elegance matters. I concluded with a much more forceful anecdote taken from a previous visit to the Googleplex in Mountain View, where I enjoyed reading a few good pages of tips about test-driven development while standing in the restrooms. This is a vivid example of what a “code loving culture” can be. The silent drama, which is happening right now, is that the gap between the companies that have understood that “software is eating the world” and those that have not is growing. This is where we have come full circle to the main topics of this blog. I will refer the reader to Octo’s great book “The Web Giants”, where almost everything that I mentioned in this post resonates with the best practices of the most advanced software companies.


Code elegance is a fascinating topic and there is much more to say. For instance, it is interesting to try to characterize what makes code elegant. Cedric Villani started his introduction by saying that elegance in mathematics is the combination of simplicity (concision), efficiency (something that really works) and surprise. The aesthetic value of surprise is a direct consequence of our emotional, species-level meta-learning strategy to value what is different from what we expect (cf. Michio Kaku). I will end this post with a few ideas borrowed from Francois Laburthe, with an interesting reference to SOLID:
  • Elegant design relies heavily on abstraction and genericity. However, units (from functions or classes to modules) have a clear and well-defined purpose.
  • Elegant design is mindful of waste, in the lean tradition. Everything is used, and constant refactoring tracks useless replication of features.
  • Elegant design is geared toward self-dissemination because it “welcomes you” through a self-explanatory structure. This is greatly helped by reification / introspection when design elements are self-aware.
  • Elegant design is open by nature – it embraces the philosophy of “fragment weaving” mentioned early on, through facilitation (hence the importance of APIs) and selflessness (the design principle that the code will always be used by something more important than itself).






 