How to measure everything


Last byte | A conversation with Douglas Hubbard, the creator of applied information economics, a methodology for measuring things others say cannot be measured.

Douglas Hubbard is the creator of applied information economics, a methodology for measuring things others say cannot be measured. He said he has been involved in projects in which his information technology clients were quick to label things as intangible that he knew could be measured. In his new book, "How to Measure Anything: Finding the Value of Intangibles in Business" (John Wiley & Sons), Hubbard suggests that virtually anything can be measured, including the value of information itself. He recently spoke with Associate Editor David Hubler about common misunderstandings regarding measurement and the lessons they hold for the IT community.

"We have a tendency in business and government to sit around and try to hash out problems with measurements." - Douglas Hubbard



Q: Doesn't it seem counterintuitive that IT professionals would believe there are aspects of technology that cannot be measured?

Hubbard: In fact, there is a strange irony here. There is a formula for the value of information that is almost 60 years old now. It was first developed back when the original developers of game theory were defining decision theory. I've heard a number of IT executives say the problem with IT is that it doesn't produce widgets, it produces information, and there's no way to compute the value of information. That leaves some of the biggest benefits of IT off the table because they don't know how to compute its value.
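
The formula Hubbard is referring to comes out of early decision theory: the expected value of perfect information (EVPI), the expected payoff of deciding after the uncertainty is resolved minus the expected payoff of the best decision made without it. Here is a minimal sketch in Python; the two actions, payoffs and probabilities are invented for illustration and are not figures from the interview.

```python
# Expected value of perfect information (EVPI) for a simple discrete decision.
# The states, probabilities and payoffs below are made-up illustrations.

# Possible states of the world and their probabilities.
states = {"high_demand": 0.6, "low_demand": 0.4}

# Payoff of each action in each state (say, net benefit in $M).
payoffs = {
    "build_system": {"high_demand": 5.0, "low_demand": -2.0},
    "do_nothing":   {"high_demand": 0.0, "low_demand":  0.0},
}

# Best expected payoff without more information:
# commit to the single action with the highest probability-weighted payoff.
ev_without_info = max(
    sum(p * payoffs[action][s] for s, p in states.items())
    for action in payoffs
)

# Expected payoff with perfect information:
# in each state we would pick whichever action is best for that state.
ev_with_info = sum(
    p * max(payoffs[action][s] for action in payoffs)
    for s, p in states.items()
)

evpi = ev_with_info - ev_without_info
print(f"EV without info:      {ev_without_info:.2f}")
print(f"EV with perfect info: {ev_with_info:.2f}")
print(f"EVPI:                 {evpi:.2f}")
```

EVPI is an upper bound: no measurement of that uncertainty can ever be worth more than this number, which is what makes it useful for deciding whether a measurement is worth its cost.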

Q: Why do people discount the value of measurements?

Hubbard: We have a tendency in business and government to sit around and try to hash out problems with measurements. All too often, however, we'll blackball a measurement because we've identified some problems with it. And we routinely dismiss measurements because of assumptions not based on any calculations at all. People make those assumptions all the time without doing any math. They're assuming that because there will be error, the result would be invalid. I would say, of course, there is always error. The relevant question is: Does the assumption contain more or less error than a calculation would have?

Q: You've developed a measurement method called applied information economics. What is that?

Hubbard: AIE is the synthesis of several quantitative methods originally designed to facilitate IT decisions. Since then, it's been used elsewhere, too. When we do analyses, we're constantly calculating the value of additional information at every step. You learn that there could be dozens of variables in a business case, but only a few of them need to be measured because the rest have little bearing on the decision-making process.
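
One way to see why only a few variables usually matter is to simulate the business case and then ask, for each uncertain variable, how much resolving that variable alone would improve the decision. The toy model below illustrates that idea; it is not AIE itself, and the distributions and the binning approximation of per-variable information value are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Toy business case: net benefit = adopters * value_per_adopter - project cost.
# The distributions are illustrative assumptions, not real estimates.
adopters   = rng.normal(10_000, 4_000, N)       # highly uncertain adoption
value_each = rng.normal(120, 60, N)             # highly uncertain $ value per adopter
cost       = rng.normal(1_000_000, 150_000, N)  # comparatively well-known cost

benefit = adopters * value_each - cost

# Decision with current knowledge: invest only if the expected benefit is positive.
ev_baseline = max(benefit.mean(), 0.0)

def info_value(variable, benefit, bins=50):
    """Crudely approximate the value of perfectly resolving one variable by
    conditioning the invest/don't-invest decision on which bin it falls into."""
    order = np.argsort(variable)
    ev_with_info = 0.0
    for chunk in np.array_split(order, bins):
        # Knowing the variable is in this range, invest only when the
        # conditional expected benefit is positive.
        ev_with_info += max(benefit[chunk].mean(), 0.0) * len(chunk) / len(variable)
    return ev_with_info - ev_baseline

for name, var in [("adopters", adopters), ("value_each", value_each), ("cost", cost)]:
    print(f"value of measuring {name:>10}: ~${info_value(var, benefit):,.0f}")
```

In this toy run the two highly uncertain variables carry most of the information value, while measuring the already well-known cost adds comparatively little, which is the pattern Hubbard describes.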

Q: Can you give an example of measuring the wrong things?

Hubbard: We also measure the wrong things. Take the benefits of IT projects, for example. There are benefits such as the value to public health and improvement of productivity. Often, it's the latter that is measured, whereas the much bigger and more uncertain benefit, like the impact on public health, might not be measured because it seems intangible. That almost occurred on a number of Environmental Protection Agency IT systems I analyzed.

Q: What is the Red Baron Effect and how does it apply today?

Hubbard: A recent statistical analysis of the World War I German ace's 80 kills attempted to assess whether he was really that good a fighter pilot or whether chance played a big role in his phenomenal success. The analysis assessed the number of planes flying, the experience of the pilots and the number of encounters, and it concluded that at least one pilot could have had as many kills by chance alone. The study determined only that he probably was in the top one-third of pilots and was also somewhat lucky. The Red Baron Effect could be an important driver in a lot of management promotion decisions: if one candidate completes three or four outstanding projects in a row and there are 100 candidates for the promotion, just by chance, someone is going to fall into that promotion. That probably explains a few people we all know.
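
Hubbard's promotion example is easy to check with back-of-the-envelope probability. If each of 100 equally skilled candidates runs four projects, each with, say, a 50 percent chance of going well (the numbers here are assumptions for illustration), the chance that at least one candidate goes four-for-four purely by luck is 1 - (1 - 0.5^4)^100, or about 99.8 percent. A short simulation confirms it:

```python
import numpy as np

rng = np.random.default_rng(42)

candidates = 100   # equally skilled people eligible for the promotion
projects = 4       # "outstanding projects in a row" needed to stand out
p_success = 0.5    # per-project success rate; an illustrative assumption
rounds = 50_000    # simulated promotion rounds

# Number of successful projects for each candidate in each round.
successes = rng.binomial(projects, p_success, size=(rounds, candidates))

# In how many rounds does at least one candidate go 4-for-4 by chance alone?
streak_rate = (successes == projects).any(axis=1).mean()

print(f"Simulated chance of a lucky streak candidate: {streak_rate:.1%}")
print(f"Closed form 1 - (1 - p^k)^n:                  "
      f"{1 - (1 - p_success ** projects) ** candidates:.1%}")
```

In other words, under these assumptions almost every promotion round produces someone with a perfect track record by chance alone, which is exactly the effect Hubbard describes.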


Q: What is the value of estimates as opposed to precise figures?

Hubbard: By taking a sample from an area or group, you can estimate the number of whales in the ocean or butterflies in a rain forest, things you can't measure with certainty. It is similar to some statistical methods used to estimate the number of unauthorized intrusions that go undetected in a system. You can estimate the number of bugs in computer code and even the number of typos in a manuscript that no one has found yet. My statistics on that in my book were not far off, by the way.
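
The whales-and-typos examples map onto classic capture-recapture estimation: tag a first sample, draw a second, and the size of the overlap suggests how big the whole population is. Below is a minimal sketch using the Lincoln-Petersen estimator; the two-reviewer typo counts are invented for illustration.

```python
# Lincoln-Petersen capture-recapture estimate, applied to typos in a manuscript.
# Two reviewers independently proofread the same draft; the counts are made up.

found_by_a = 30     # typos reviewer A found (the "tagged" first sample)
found_by_b = 25     # typos reviewer B found (the second sample)
found_by_both = 15  # typos both reviewers found (the "recaptured" overlap)

# If the reviewers find typos independently, B's recapture rate of A's typos
# (15 of 30) should match B's hit rate against the whole population, so:
estimated_total = found_by_a * found_by_b / found_by_both

found_by_either = found_by_a + found_by_b - found_by_both
still_hidden = estimated_total - found_by_either

print(f"Estimated total typos:                ~{estimated_total:.0f}")
print(f"Estimated typos nobody has found yet: ~{still_hidden:.0f}")
```

The same arithmetic, applied to overlapping detection methods, is one way to get rough counts of whales in a survey area, undetected intrusions or bugs remaining in a code base.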

Q: You must be very good at estimating the number of jelly beans in a jar at carnivals and charity affairs.

Hubbard: I haven't tried one since grade school, but it's much more likely that the person who wins is lucky than particularly knowledgeable about measurements.
