PanSensic: Measuring What’s Important

Measuring What’s Important Rather Than What’s Easy

 

Darrell Mann, Paul Howarth – Systematic Innovation/Akumen UK

 

The full white paper can be downloaded here

“You get what you measure. Measure the wrong thing and you get the wrong behaviours.” John H. Lingle

‘What gets measured gets done’, to paraphrase famous measurement author John Lingle, is an oft-used business mantra. Introducing a new measurement or measurement method is one of the most effective behaviour-change strategies within any organisation. The problem is that most measurements are based on what leaders assume is possible to measure, rather than on what is actually important. The seeds of most failed change and innovation attempts can be shown to have been sown the moment someone decided to listen to information that was merely easy or convenient to acquire. 95% of marketing surveys fall into this category: we know we’re supposed to go and ask the customer; we just don’t know what we should really ask them. Or how we should ask them. Or how to listen to what they’re not telling us.

The same phenomenon applies equally well to what goes on within an organisation: over 75% of all change programmes instigated within enterprises fail to deliver a net benefit. Why? Because managers steer the change using measurements that were easy to make, rather than the things that will actually help drive the business in the desired direction. Not to mention the related problem that people don’t always tell the truth, the whole truth and nothing but the truth. Few employees are likely to reveal things they fear may jeopardize their status or job.

In essence, then, there are two sides to the measurement challenge that have to date confounded managers and leaders: the first is knowing what to measure, the second is knowing whether what we’ve measured has been measured accurately.

Innolytics is the science of objectively and accurately measuring the un-measurable: How much do your customers really trust you? How cool do they think you are? How big is your reserve of goodwill? How engaged are your employees? How proud of the organisation are they? How much mutual respect do different parts of the business carry? It is also about measuring realities rather than what people think we want to hear.

This white paper describes the primary outcomes of a 10-year journey underpinning the development of this new, innolytic generation of measurement capabilities and instruments. We’ll start with the first of the underpinning foundations: knowing what to measure.

Measuring The Right Things

If we examine the sorts of things that managers typically measure inside and across organisations, a distinct hierarchy rapidly emerges. The large majority of measurements are based on things that are visible. Not surprisingly, when we can physically see something it becomes easy to measure. It is very easy, for example, to measure the number of products that were shipped during the previous quarter. It is similarly easy to see how many customers came through the door, how much money they gave us, how many people turned up for work, how many components they manufactured or assembled, or how many patent applications they generated. All of this kind of stuff is really easy to build into spreadsheets and ERP systems. And so we do – to the extent that most businesses are now managed almost entirely on the basis of what Excel or SAP tells us.

 

[Figure: measuring-important-1]

 

At the next level up the importance hierarchy come ‘interactions’. Measuring things is one thing; measuring the relationships between those things is more difficult, largely because their level of visibility is an order of magnitude lower. If the Excel spreadsheet is the de facto standard for measuring physical entities, the equivalents for measuring interactions are the Gantt chart, KPIs and the ‘balanced scorecard’. These are the management devices that inform the team how much progress has been made: whether the review meeting happened; whether the action was completed; whether the final report was received; whether the year’s objectives were met. Generally speaking, provided an interaction can be described on a Gantt chart or in someone’s KPI catalogue, managers are happy that the things under their responsibility are ‘under control’.

While the construction and maintenance of Gantt charts and KPIs might represent a lot of hard work, and hence tend to convey the impression that we have done some ‘serious management’, the information they provide often has little correlation with the next level in the measurement hierarchy: outcome. It is very easy to colour the bar chart to say we held the consumer panel and then fill in the milestone triangle when we publish the analysis report. It is much more difficult to measure whether the forum or the report it produced delivered any kind of useful outcome.

The first big problem here is that, by definition, outcomes are end-to-end results that only occur after the fact. We might only know, for example, that a consumer is happy with our new product launch a year or more after the first products were shipped. In this external-facing world, the time it takes for the measurement to ‘close the loop’ and provide the feedback that we’re doing the right things is usually far too long to have any impact on what we do. The second big problem is that, because outcomes are even less visible than interactions, it is difficult to know how to begin constructing a sensible measurement at all. Take an internal outcome like engagement: how engaged are your employees? Having expressed the question, almost any manager would be negligent to say they wouldn’t like to know the answer.

Diligent management teams, knowing the importance of engagement, will frequently hand the problem of measuring it over to the HR department, who will then assemble some form of Likert-based ‘rate your level of engagement on this 1 to 9 scale’ questionnaire. This will typically – in the US at least, where this kind of activity tends to occur most often – deliver the kind of result shown in the figure below:

[Figure: measuring-important-2]

 

This kind of picture in turn usually proves to be too challenging a result for all but the most hardy of managers. And so there is a frequent tendency to avoid making such measurements again in the future lest we become even more depressed.

Never mind the fact that there is a still more difficult level of the measurement hierarchy beyond outcome: meaning. Meaning is what ultimately comes to drive all of our actions and behaviours in life, both as customers and as employees – what meaning do I derive from the products and services I purchase? What meaning does my work bring to my life? Not only is ‘meaning’ even less visible than outcomes, but the time lag between something I buy or do and the meaning I am able to extract from it can sometimes be extraordinarily long. Bob Dylan, a notably reluctant interviewee, is frequently asked what his lyrics ‘mean’. The answer has typically been a shrug of the shoulders and a gruff ‘ask me in twenty years’. Twenty years to establish meaning is a timescale beyond the life of the majority of organisations on the planet, and as such the measurement of meaning is barely on the radar screen of just about all of them.

Meanwhile, the 800lb gorilla in the room in both the outcome and meaning domains is whether the result was accurate in the first place. Did your employees complete that engagement questionnaire with the aim of telling you the truth? Or the sort of truth they thought you wanted to hear? Or the truth that is most likely to secure their annual bonus? This is the subject of our next section…

Measuring Things Right

When someone asks you what the last quarter’s sales were, or whether the design review took place, it is very easy to respond with the truth. We simply have to look things up in the CRM system or in the latest version of the project Gantt chart. We might know that either or both of these questions carries a host of awkward additional pieces of knowledge – like how many unhappy customers we didn’t serve, or that the review was a complete waste of everyone’s time and effort – but, thankfully, those things weren’t what we were asked. Measurement systems that measure the easy physical and interaction stuff very quickly tend to become excellent blame-deflection mechanisms. If ‘what gets measured gets done’ is a key management aphorism, ‘what gets measured, quickly gets corrupted’ isn’t too far behind. When a manager tells a team member, ‘don’t let me catch you doing that again’, it almost guarantees that the team will indeed not be caught again. It also, in most organisations, guarantees that whatever was being done is still going on, but now in such a way that the management team will never find out.

When it comes to measuring the important outcome and meaning stuff, we get right to the heart of what makes us human. And unless there is a high level of trust, we’re very unlikely to share with others what we’re actually thinking. Most organisations understand the importance of soliciting and listening to the ‘voice of the customer’, but most, too, have no idea how to get at the unspoken stuff – the stuff that’s unpleasant, difficult, insulting, or just plain embarrassing to talk about:

 

[Figure: measuring-important-3]

 

The more enlightened marketers and anthropologists have learned some of the lessons of those who managed to solicit more accurate information from customers. ‘Visiting the Gemba’, for example, says go and observe the customer in their own environment. The key word being observe:

 

[Figure: measuring-important-4]

 

Better, but the main problem still exists. We’d love to be able to hear what’s inside the red thought bubbles, but the rationalizing parts of the speaker’s brain mean we’ll only ever get to hear the spoken blue bubbles.

Better still – a lesson learned by organisations that have understood the importance of ‘outcome’ as a customer driver – is to focus questions and observations on soliciting outcome-oriented knowledge. This, in theory at least, allows us to overcome the problem highlighted by Henry Ford in his famous quotation, ‘if I’d asked the customer, they would’ve asked for a faster horse’. Breakthrough things like cars get invented when organisations understand that the outcome desired by the customer is to get from A to B faster, more safely and more comfortably. The new problem is that there is still a bunch of outcome-related stuff the customer can’t or won’t tell us about – again, typically the embarrassing or difficult stuff we are all reluctant to share with any kind of outsider:

 

[Figure: measuring-important-5]

 

All of which brings us to innolytics: the science of successfully eliciting the difficult, unspoken red thought bubbles – the actual outcomes and meaning that ultimately drive the behaviour of customers and of the people who work inside our organisations.

In parallel with this paper, we have published a number of other White Papers describing the underpinning science behind our newly realized ability to capture this kind of behaviour-driving ‘hidden’ information (References 1-5). In short, all of these papers, and the decade of research behind them, centre around five main techniques for measuring what’s important (a toy sketch of how such ‘lenses’ might be applied to raw text follows the list):

  1. Capturing Metaphor: our rationalizing pre-frontal cortex is extremely good at filtering our behaviour-driving primal thoughts before words emerge from our lips or get typed into our next email, but it is not capable of ‘lying’ about the metaphors we use. Metaphors – which we all typically use at the rate of six a minute – are, in other words, an excellent mirror of what we’re actually thinking.
  2. Capturing Narrative: humans are story-telling animals (‘homo narrans’); we learn from stories and we communicate meaning through stories, and so soliciting inputs from people in the form of stories is an excellent way of capturing what they’re actually thinking rather than what they think you want to hear. Telling stories allows us, in effect, to play the role of ventriloquist and thus deflect potential harm to ourselves – ‘it wasn’t me that said it, it was the dummy’. Stage ventriloquists are able to get away with saying things that the person behind the dummy would never be allowed to articulate. The same applies when we tell a story about a good or bad experience we observed at work – the characters in that story don’t implicate us in any way. In addition to this externalization characteristic, and perhaps more importantly, as with metaphors, our brains simply aren’t fast enough to construct a story and lie at the same time.
  3. Capturing Thinking Styles: the American psychologist Clare Graves contributed more than anyone to our understanding of the characteristics of what he described as the ‘mature’ adult (Reference 6). Graves identified a number of discontinuously different thinking styles that distinguish one type of person from another. These thinking styles have little correlation with a person’s intelligence (IQ, EQ or any of the other 20 or so ‘Q’s we know about), describing instead the basic structure of the information contained within their brain. It turns out that, without knowing where a person sits on the scale of different thinking styles, it is very difficult to make meaningful sense of what that person is trying to say. Knowing this, we have had to build tools that can automatically detect which Thinking Style a person is operating in at the time they say something.
  4. Capturing Life Journey Stage: in a similar manner to thinking style, extracting meaning from what a person is saying also involves understanding their life-journey-stage context. A typical life journey is driven by what Viktor Frankl described as the ‘will to meaning’. Everything we do is ultimately driven by this innate search. The journey happens, though, through a series of jumps from one discontinuously different stage to another. The perspective of a naïve ‘orphan’-archetype person, to take an extreme example, is radically different from that of the ‘warrior’ who knows (or thinks they know) exactly where they are trying to get to. Unless we know which of these (or other) states a person is in when we are speaking to them, we are very likely to misinterpret what they are saying to us – both in terms of the spoken and unspoken words they do or don’t use.
  5. Capturing Contradictions: contradiction – paradox, conundrum, trade-off, conflict, whatever we choose to call it – lies at the heart of all insight, change and innovation. Success happens when we uncover and resolve contradictions. Hence listening for and being able to elicit the contradictions in people’s lives is fundamental to working out how best to serve their outcome and meaning needs.
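To make the idea of these ‘lenses’ concrete, here is a deliberately minimal sketch of how a piece of free text might be scored against simple cue lexicons for metaphor, narrative and contradiction. It is an illustration only: the lexicons, cue phrases and function names are invented for this example and bear no relation to the actual PanSensic/Innolytics analysers, which work at a far deeper level than keyword matching.

```python
# Toy sketch only: score free text against simple keyword 'lenses'
# (metaphor cues, story cues, contradiction cues). The lexicons are
# invented for illustration and are NOT the PanSensic/Innolytics ones.
from collections import Counter
import re

LENSES = {
    "metaphor":      {"like", "as if", "drowning", "uphill", "treadmill", "battle"},
    "narrative":     {"then", "suddenly", "one day", "in the end", "we decided"},
    "contradiction": {"but", "however", "although", "on the other hand", "trade-off"},
}

def lens_scores(text):
    """Count how often each lens's cue words/phrases appear in the text."""
    lowered = text.lower()
    scores = Counter()
    for lens, cues in LENSES.items():
        for cue in cues:
            scores[lens] += len(re.findall(r"\b" + re.escape(cue) + r"\b", lowered))
    return dict(scores)

if __name__ == "__main__":
    verbatim = ("One day the new process arrived and it felt like drowning in forms. "
                "We decided to work around it, but in the end nobody told the manager.")
    print(lens_scores(verbatim))
```

Even a crude counter like this hints at the principle: the signal lives in how something is said (the metaphors, the story structure, the buried ‘but’), not in the box the respondent ticked.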

Beyond these theoretical foundations, the rest of the innolytics capability derives from the technologies that come with the emerging ‘Big Data’ world born of the Internet and social media mass-communication phenomena. Being able to ‘scrape’ enormous quantities of social media content is already coming to drive a lot of marketing activity. Alas, in true garbage-in-garbage-out fashion, if you don’t know how to scrape the data meaningfully, it simply remains as data. All noise and no signal. Signal – the only useful stuff – is about being able to read between the lines. Innolytics, in this sense, is quite literally the science of reading between lines in order to capture meaning.
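As a purely illustrative sketch of the ‘scraping’ step – not the PanSensic tooling – the fragment below pulls the visible paragraph text from a web page so that it can be fed into between-the-lines analysis such as the toy lens scorer sketched earlier. The URL is a placeholder, and any real scraping exercise would need to respect the site’s robots.txt, terms of use and rate limits.

```python
# Minimal scraping sketch: fetch a page and keep its paragraph text as raw
# 'verbatims'. URL and page structure are placeholder assumptions.
import requests
from bs4 import BeautifulSoup

def scrape_verbatims(url):
    """Fetch a page and return its visible paragraph texts as raw 'verbatims'."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [p.get_text(strip=True) for p in soup.find_all("p") if p.get_text(strip=True)]

# Hypothetical usage (placeholder URL):
# for verbatim in scrape_verbatims("https://example.com/reviews"):
#     print(lens_scores(verbatim))   # lens_scores: the toy scorer sketched earlier
```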

So?

One of the principal manifestations of the capability that innolytics delivers to people within organisations – whether leader, manager, marketer or designer – is the ability to assemble a dashboard containing all the things that are important, as opposed to the stuff that is merely easy to record. Take a second to think about all the things you’d really like to know about your organisation or your customers. Imagine all the things you’d love to be able to measure – all the outcome- and meaning-based stuff – and build it all into your own bespoke dashboard of intangibles and invisibles:

 

[Figure: measuring-important-6]

 

The only real problem here is putting aside all of your preconceptions about what it is possible to measure. Rather than ‘what can we measure?’, the right question to be asking in the innolytics world is ‘what would I love to be able to measure?’ Or, what would the perfect dashboard look like?

Perfect, of course, never happens. That’s not the purpose of the word. Its real value lies in encouraging us to think about direction: only if I know in which direction perfect lies can I start moving towards it. And only when I start moving towards it am I able to refine what perfect means to me.

As such, a typical innolytics project involves a number of stages. Here are the typical first ones that will allow an organisation to build its first meaningful between-the-lines dashboard:

 

[Figure: measuring-important-7]

 

The most encouraging (we hope) thing for new users of the capability is that it’s very easy to get started: organisations are, after all, crammed to the rafters with existing knowledge and information – verbatims from consumers, staff feedback, etc. Terabytes of the stuff. The only problem is that it is highly unstructured and fundamentally hasn’t been designed to maximize our ability to see clearly between the lines. There is inherently some ‘between the lines’ content, however, and it is this that we typically look to scrape in order to get a first-cut look at what’s happening in a given situation. Having this knowledge allows a first version of a dashboard to be generated and allows us – where necessary – to begin formulating and circulating narrative-based questions that will enhance that dashboard and turn it into a resilient management tool that can be used to design the future of the enterprise.
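As a minimal sketch of that first-cut step – and again assuming the toy lens scorer from the earlier example rather than the real PanSensic analysers – existing verbatims could be rolled up into a simple dashboard of average lens scores like this:

```python
# Illustrative only: roll per-verbatim lens scores up into a first-cut
# 'dashboard' of averages. The measures are the toy lenses defined earlier,
# not the actual PanSensic dashboard metrics.
from statistics import mean

def first_cut_dashboard(verbatims):
    """Average each toy lens score across all collected verbatims."""
    all_scores = [lens_scores(v) for v in verbatims]   # lens_scores: toy scorer above
    lenses = {lens for scores in all_scores for lens in scores}
    return {lens: mean(scores.get(lens, 0) for scores in all_scores) for lens in lenses}
```

The point of the sketch is simply that a first dashboard can be bootstrapped from text the organisation already holds; refining the lenses, and the narrative-based questions that feed them, is where the real work then begins.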

Business success ultimately boils down to the ability to sense what is happening and make an appropriate response, faster and more effectively than competitors can:

 

[Figure: measuring-important-8]

 

Innolytics is all about helping organisations to sense and interpret the right things at the right time, and provide teams with the wisdom they need to design, decide, align and respond in the requisite way. It is, we think, a major step into the future of enterprise.

 

References

  1. Mann, D.L., Howarth, P., ‘Jupiterμ: Closing The Say/Do Gap’, IFR White Paper series, 2012.
  2. Systematic Innovation E-Zine, ‘Hearing What Is Really Being Said: Part 1 – Thinking Styles’, Issue 127, October 2012.
  3. Systematic Innovation E-Zine, ‘Hearing What Is Really Being Said: Part 2 – Vital Friends’, Issue 131, February 2013.
  4. Systematic Innovation E-Zine, ‘Hearing What Is Really Being Said: Part 3 – Life Journey Stage’, Issue 132, March 2013.
  5. Mann, D.L., ‘Automated Capture Of Conflicts And Contradictions’, IFR White Paper series, 2013.
  6. Graves, C.W., ‘The Never Ending Quest’, ECLET Publishing, 2005.

 

“…when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind. It may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science, whatever the matter may be.” Lord Kelvin

 

©2012, DLMann, PHowarth, all rights reserved
