Yet even among those who have access to good actionable metrics, I’ve noticed a phenomenon that prevents them from taking full advantage of that data. It’s a condition I call datablindness, and it’s a painful affliction.
Imagine you are crossing the street. You constantly assess the situation, looking for hazards and timing your movements carefully to get across safely. Now imagine the herculean task that faces those who are blind. That they can function so well in our inhospitable modern life is impressive. But imagine if a blind person had to navigate the street as follows: whenever they wanted to know about their surroundings, they had to ask for a report. Sometime later, a guide would rattle off useful information, like the density of cars in the immediate vicinity, how that density compares to historical averages, and the average mass and velocity of recent cars. Is that a good substitute for vision? I think we can all agree that it wouldn’t get most people across the street.
That’s what most startup decisions are like. Because of the extreme unknowns inherent in startup situations, we are all blind – to the realities of what customers want, market dynamics, and competitive threats. In order to use data effectively, we have to find ways to overcome this blindness. Periodic or on-demand reports are one possibility, but we can do much better. We can achieve a level of insight about our surroundings that is much more like vision. We can learn to see.
I got a powerful taste of datablindness recently, as I’ve started to work with various large companies as partners in setting up events, speeches, and other products to sell around the Lean Startup concept. Whenever I transition responsibility for one of these events to a third party, I feel a sudden sensation of loss: I can no longer judge whether our marketing programs are effective. I get very fuzzy on questions like “are we making progress towards our goals?” In other words, I’m experiencing datablindness.
What’s happening? Mostly, I’m no longer being hit over the head with data.
For example, a recent event I held started with a customer validation exercise (actually, this example is fictionalized for clarity). I had it all set up with a jury-rigged SurveyMonkey-plus-PayPal minimum viable product. It was pretty ugly, the marketing and design sucked, and I was embarrassed by it. Yet it had one huge advantage: whenever someone decided to buy a ticket, I immediately got an email letting me know. So throughout the process of taking deposits and then selling seats, I was getting constant, impossible-to-ignore feedback about how I was doing. For example, I quickly learned that when I twittered about the event, more often than not I would make a sale. When I tried other forms of promotion, I had to accept their failure when the emails failed to come. Granted, this wasn’t nearly as good as a true split-testing environment, but it was powerful nonetheless.
Now that I put on events with official hosts and sponsors, my experience is different. Of course, I can still get access to the data about who’s signing up and when – and a lot more analytics, to boot – but I have to ask. Asking imposes overhead. When I get a response, when someone tells me “hey, we had 3 more signups” I’m never quite sure if those are the same three signups I heard about yesterday, and this person just has somewhat stale information, or if we had three new ones. And of course, if I twitter about the workshop on a Friday afternoon, I won’t know if that had any impact until Monday – unless I want to be a pain and bother someone on their weekend. There are lots of good reasons why I can’t have instantaneous access to this data, and each partner has their own. I wonder if their internal marketing folks are as datablind as I feel. It’s not a pleasant sensation.
Let me give another example (as usual, a lightly fictionalized composite) drawn from my consulting practice. This startup had been busy transforming their culture and process to incorporate split-testing. I remember a period when they were suffering from acute datablindness: the creators of split-tests were disconnected from the results. The product development team was busy creating lots of split-tests for lots of hypotheses. Each day, the analytics team would share a report with them detailing how each test was doing. But for a variety of reasons, nobody was reading these reports. The number of active experiments kept growing, and individual tests were never getting completed. This had bad effects on the user experience, but much worse was the fact that the company was expending energy measuring without really learning.
The solution turned out to be surprisingly simple. It required two things. First, we had to revise the way the reports were presented. Instead of a giant table that packed in information about the ever-growing list of experiments, we gave each experiment its own report, complete with basic visualizations of which variation was currently more successful. (This is one of the three A’s of metrics: Accessible.) Second, we changed the process for creating a split-test to integrate it with the team’s story prioritization process. The Product Owner would not mark an experiment-related story as “done” until the team had collected enough data to make a decision about the outcome of the experiment relative to their expectations. Since only a certain number of stories can be in progress at any one time, these experiment stories threatened to clog up the pipeline and prevent new work from starting. That forced the Product Owner and team to spend more time together reviewing the results of experiments, which allowed them to learn and iterate much faster. Within a few weeks, they had already discovered that huge parts of their product, which cause a lot of extra work for the product development team due to their complexity, were not affecting customer behavior at all. They had learned to see this waste.
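To make that “done” criterion concrete, here is a minimal sketch of the kind of check such a team might automate. Everything in it is an assumption for illustration: the function name, the 1,000-visitor minimum, and the 95% confidence cutoff are placeholders for whatever expectations a team actually sets up front.

```python
import math

def experiment_decision(control_conversions, control_visitors,
                        variant_conversions, variant_visitors,
                        min_visitors=1000):
    """Report whether a split-test has collected enough data to call.

    The minimum sample size and the 95% confidence cutoff are
    illustrative placeholders, not recommendations.
    """
    if min(control_visitors, variant_visitors) < min_visitors:
        return "keep running: not enough data yet"

    p_control = control_conversions / control_visitors
    p_variant = variant_conversions / variant_visitors

    # Two-proportion z-test on the difference in conversion rates.
    pooled = ((control_conversions + variant_conversions)
              / (control_visitors + variant_visitors))
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / control_visitors + 1 / variant_visitors))
    if se == 0:
        return "keep running: no variation observed yet"

    z = (p_variant - p_control) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

    if p_value < 0.05:
        winner = "variant" if p_variant > p_control else "control"
        return f"done: {winner} wins (p = {p_value:.3f})"
    return f"inconclusive (p = {p_value:.3f}): keep running or call it a wash"

# Example: 48/1200 conversions on control vs. 71/1180 on the variant.
print(experiment_decision(48, 1200, 71, 1180))
```

The exact statistics matter less than the process: a story can’t be “done” until a check like this one returns a verdict.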
Curing datablindness isn’t easy, because unlike real blindness, datablindness is a disability that many people find refreshingly comfortable. When we only have selective access to data, it’s much easier to be reassured that we’re making progress, or even to fall back on judging progress by how busy our team is. For a lean startup, this lack of discipline is anathema. So how do we reduce datablindness?
- Have data cause interrupts. We have to invent process mechanisms that force decision makers to regularly confront the results of their decisions. This has to happen regularly, and without too much time elapsing, or else we might forget what decisions we made. When the incidence rate is small, emails or text messages are a great mechanism; that’s why we have operations alerts trigger a page, but the same approach works for other customer events. I’ve often wanted to wire up a bell to sales data, so that when we make a sale, we literally hear the cash register ring (see the sketch after this list).
When the volume is too high for these kinds of tricks, we can still create effective interrupts. Imagine if the creator of a new split-test received a daily email with the results of that test, including the computer’s judgment of which branch was winning. Or imagine an automatic system that sent the creator of a new feature daily updates on its usage for the first three weeks it was live. Certainly our marketing team should be getting real-time alerts about the impact of a new promotion or ad blitz.

- Require data to justify decisions. Whenever you see someone making a decision, ask them what data they looked at. Remember that data can come in qualitative as well as quantitative forms. Just the act of asking can have powerful effects. It serves as a regular reminder that it’s possible to make data-based decisions, even if it’s not easy. When you hear someone say that it would have been impossible to use data to inform their decision, that might be a signal to investigate via root cause analysis.

My experience is that companies that ask questions about how decisions get made are much more meritocratic than those that don’t. Any human organization is vulnerable to politics and cults of personality. Curing datablindness is not a complete antidote, but it can provide an alternative route for well-intentioned people to advocate for what they think is right.

- Use pilot programs. Another variation on this theme is to consistently pilot new initiatives before rolling them out to full-scale release. This is true for split-testing features, but it’s also true for marketing programs or even operations changes. In general, avoid making big all-at-once changes. Insist on showing that an idea works at micro-scale, and only then roll it out more broadly. There are a lot of advantages to piloting, but the one that bears on datablindness is this: it’s extremely difficult to argue that your pilot program is a success without referring back to the expectations that got it funded in the first place. At a minimum, the pilot team will have to consult a bunch of data right before their final “success” presentation. As people get more and more used to piloting, they will start to ask themselves, “why wait until the last minute?” (See Management Challenges for the 21st Century by Peter Drucker for more on this thesis.)
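Here is a minimal sketch of the sort of interrupt I have in mind for the first point above. It is built entirely on assumptions: fetch_signup_count() stands in for whatever API or database actually holds the numbers, and the addresses and SMTP host are placeholders.

```python
import smtplib
import time
from email.message import EmailMessage

def fetch_signup_count():
    """Hypothetical stub: query whatever system of record holds signups
    (PayPal, a partner's registration API, your own database)."""
    raise NotImplementedError("wire this to your actual data source")

def send_alert(new_signups, total):
    """Interrupt a human with an email the moment the number moves."""
    msg = EmailMessage()
    msg["Subject"] = f"{new_signups} new signup(s); {total} total"
    msg["From"] = "alerts@example.com"       # placeholder address
    msg["To"] = "founder@example.com"        # placeholder address
    msg.set_content("Someone just bought a ticket. What did you try "
                    "in the last hour? That's your feedback loop.")
    with smtplib.SMTP("localhost") as server:  # placeholder SMTP host
        server.send_message(msg)

def watch(poll_seconds=60):
    """Poll the signup count and fire an interrupt on every change."""
    last = fetch_signup_count()
    while True:
        time.sleep(poll_seconds)
        current = fetch_signup_count()
        if current > last:
            send_alert(current - last, current)
        last = current
```

Swap send_alert for anything sufficiently attention-grabbing (a physical bell, say); the point is that the data interrupts you instead of waiting politely in a report.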
Eric, what you've termed 'datablindness' seems to be a common affliction in fields which experience a sudden rise in the volume of data they have access to.
A similar kind of datablindness happened in the US intelligence community in the decades following the launch of photoreconnaissance satellites.
In May of 1998, India detonated several nuclear weapons at the Pokhran test site in Rajasthan. When an official from the office of the Deputy Secretary of State called the CIA to find out more, he was shocked to realize they had little to no information about the event.
"Spy satellites produced far more raw information than the government's battalion of overworked analysts could handle. The CIA relied so heavily on [these]... that it failed to cultivate informants" or human intelligence (Secret Empire, Philip Taubman, p. 358).
This kind of human intelligence is just as important to businesses as it is to spying. Data should never be too distant -- either via latency or accessibility -- from the people who understand what it means.
As to point #1, another thing you can do is simply scale the incoming messages with the volume. In the beginning, every action counts. Later, 100 actions will merit the same level of interest as 1 did earlier, and 1,000 later still.
This way you can still get semi-regular notification and keep a sense of activity, without becoming overwhelmed.
A little math formula and a script can count the number of actions and decide whether to send every action or every thousand.
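[Ed.: here is one minimal sketch of such a formula; the powers-of-ten cadence is an arbitrary choice, assumed purely for illustration.]

```python
import math

def should_notify(event_count):
    """Back off notification frequency as volume grows.

    Events 1-10 all notify; after that, the interval is the largest
    power of ten at or below the running total, so you hear about
    every 10th event up to 100, every 100th up to 1,000, and so on.
    """
    if event_count <= 10:
        return True
    interval = 10 ** int(math.log10(event_count))
    return event_count % interval == 0

# e.g. event 7 notifies, 15 does not, 60 does, 550 does not, 600 does.
```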
And here is the howto for your sales bell:
http://tinkerlog.com/2007/12/04/arduino-xmas-hitcounter/
Have fun! :-)
Great post and great ideas . . . love it