Issue 8, Sunday March 5th, 2000.
Subscription just renewed (ouch), and the Harvard Business Review continues to yield more value per page than any other publication we read. In "Meeting the Challenge of Disruptive Change" (March-April 2000), Christensen and Overdorf say it's no wonder that innovation is so hard for established firms: they employ highly capable people, then set them to work within processes and business models that doom them to failure. Here's a collection of vignettes.
Three factors affect what an organisation can and cannot do, the authors say: resources, processes and values.
Access to abundant, high-quality resources, both tangible (people, equipment, technology, cash) and intangible (information, brands, relationships), increases an organisation's adaptability.
Processes, the patterns of interaction, coordination, communication and decision-making used to transform resources into products and services, are meant not to change, and therein lies a dilemma.
Values are defined broadly as "the standards by which employees set priorities". The larger the organisation, the more important it is that managers train employees to make choices (value judgments) that are consistent with its strategic direction and business model, the authors say, and a key metric of good management is the extent to which such clear, consistent values have permeated the organisation. Alignment and deployment, those Baldrige cornerstones, come to mind. We're getting deeper into the dilemma, though.
What dilemma? Well, consistent, broadly understood values also define what an organisation cannot do. They limit the ability to innovate. Two sets of values are particularly relevant, say the authors.
The first is about acceptable gross margins. If your business model mandates gross profit margins of 40%, say, and a key value (decision rule) is to reject opportunities that yield less than 40%, you'll struggle to commercialise projects in low-margin markets, which is where much of e-commerce lives.
The second relates to the size of new opportunities. Because stock prices represent the discounted present value of future earnings, most managers feel compelled to maintain a constant rate of growth. To grow 25%, a $40m company has to find $10m in new business next year. A $40b company has to find $10b. Opportunities that excite small companies (in emerging markets, say) don't stir the big boys.
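The growth arithmetic above can be sketched in a couple of lines (a minimal illustration; the dollar figures are the ones quoted in the text):

```python
# To sustain a constant growth rate g, a company must find
# revenue * g in new business next year.
def required_new_business(revenue, growth_rate=0.25):
    """New revenue needed next year to hit the stated growth rate."""
    return revenue * growth_rate

print(required_new_business(40e6))  # $40m company: 10000000.0 ($10m)
print(required_new_business(40e9))  # $40b company: 10000000000.0 ($10b)
```

Same percentage, three orders of magnitude more new business to find, which is the authors' point.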
In new organisations, resources matter most, people particularly. But over time, capabilities shift to processes and values, and eventually to culture, a powerful management tool that enables employees to act autonomously, but causes them to act consistently. The further along this path, the more difficult it is to innovate.
So what are the answers?
Processes are not as adaptable as resources, and values even less so. When an organisation needs new processes and values (because it needs new capabilities), managers have to create new organisational space: either by fencing off new structures within existing corporate boundaries; by spinning off bits of the organisation to start afresh; or by acquisition, buying in the required capabilities ready-made.
This paper reinforces our belief that the Baldrige criteria should be used as a test, not a template, especially in the new economy. Greenfield spin-offs, racing to capture markets which may be changing every day, can't afford to obsess about process. Quality is still the key to success, but it's all about the people things (leadership, strategy, how the organisation deals with recruitment and retention, rewards and incentives) and about customer and market focus. Christensen and Overdorf's argument is that processes, values and culture will follow.
Ten minute master class
What do those numbers you rely on really mean? This all started on the Deming Electronic Network, a list moderated by Jim Clauson (email@example.com to join). Someone asked what charts a CEO should see, and I offered a BaldrigePlus Exhibit based on material from IBM Rochester, a Baldrige winner.
The exhibit, to explain, is IBM Rochester's Quality Dashboard: a table of 19 key performance measures, grouped into 7 areas (customer satisfaction, software performance, hardware performance, service, delivery, administration and image), each reported as a single number each quarter, with color-coded quality and status columns. In the quality column, red indicates an adverse trend, yellow a flat trend or limited data, and green an improving trend; in the status column, red means not on track to attain plan, yellow plan attainment at risk, and green tracking to plan.
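The color-coding scheme just described is simple enough to capture as a lookup. A minimal sketch (the function name and category labels here are illustrative assumptions, not IBM Rochester's actual implementation):

```python
# Color-coding rules for the dashboard's quality and status columns,
# as described in the text. Names and labels are illustrative.
QUALITY_COLORS = {
    "adverse trend": "red",
    "flat trend or limited data": "yellow",
    "improving trend": "green",
}

STATUS_COLORS = {
    "not on track to attain plan": "red",
    "plan attainment at risk": "yellow",
    "tracking to plan": "green",
}

def dashboard_colors(quality, status):
    """Return the (quality, status) color pair for one measure."""
    return QUALITY_COLORS[quality], STATUS_COLORS[status]

print(dashboard_colors("improving trend", "plan attainment at risk"))
# ('green', 'yellow')
```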
Lots of people asked for copies (emailed as a PDF), including Steve Prevette (yes, the guy who single-handedly took on the entire six sigma army in Newsletter 7, see below), who commented in passing: "if this is a table of numbers, I would recommend review of Dr Wheeler's Understanding Variation book."
My reply (yes, it is a table of numbers) led to a discussion about the value of composite numbers in dashboard-like tables. Steve's case is that single-element indicators are much better than composites, because of the built-in tradeoffs (some advertent, some inadvertent) in consolidated indicators. And because composite numbers are opaque.
To determine whether you have a trend you need a control chart, Steve says. That's the best way to show how variable your numbers are, and how reliable your forward projections will be.
Steve is a Deming advocate and considers that approaches like IBM Rochester's contravene the 11th of Deming's 14 points for management: eliminate quotas or work standards, and management by objectives or numerical goals; substitute leadership.
He's offered a Deming-acceptable approach to Baldrige questions like "How do senior leaders set, communicate, and deploy organizational values [and] performance expectations?" and "What are your two-to-five year projections for key performance measures and/or indicators? Include key performance targets and/or goals, as appropriate."
Many companies use a Management by Objective approach, Steve says, where numerical targets are set for all organizational indicators. However, those that follow Deming's principles cringe ("we believe rightfully so", he says) at the use of numerical targets.
Steve says that having chosen an indicator (how to choose indicators is a separate, and big, issue: go to Metrics in Exhibits), Fluor Hanford's next step is to gather the data, preferably 25 or more points on a graph, plotted at fixed time intervals. No averages or aggregates, just the raw data.
Trend lines and control limits are also plotted. The result is a Statistical Process Control (SPC) chart. Voila, Baldrige current levels and trends data.
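The text doesn't say which control-chart formula Fluor Hanford uses, but a common choice for raw individual values, in the spirit of Wheeler's approach, is the XmR (individuals and moving range) chart, where the limits come from the average moving range. A minimal sketch (the data are made up for illustration):

```python
# XmR (individuals) control chart limits: no averaging of the data itself,
# limits derived from point-to-point moving ranges.
def xmr_limits(data):
    """Return (mean, lower control limit, upper control limit)."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    # 2.66 is the standard XmR scaling constant (3 / d2, with d2 = 1.128)
    return mean, mean - 2.66 * mr_bar, mean + 2.66 * mr_bar

data = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]  # illustrative raw counts
mean, lcl, ucl = xmr_limits(data)
print(mean, lcl, ucl)  # mean 13.5, limits roughly 6.7 to 20.3
```

Points outside the limits, or sustained runs inside them, are the signals worth investigating; everything else is routine variation.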
Good or bad trends are investigated to find the special causes. Improving trends are reinforced, kept going as long as possible. Adverse trends are corrected, and the lessons learned are used to improve the process being measured. We keep plotting the data on the control chart, waiting for the trend to stabilize at a new level.
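Spotting a trend on the chart implies a detection rule. One common run test (an illustration; the text doesn't specify which rules Fluor Hanford applies) flags seven or more successive points on the same side of the centre line:

```python
# Run test sketch: a sustained run on one side of the centre line
# signals a shift (special cause) rather than routine variation.
def run_on_one_side(data, centre, run_length=7):
    """True if run_length successive points fall on one side of centre."""
    run, prev_side = 0, None
    for x in data:
        if x == centre:          # points on the line reset the run
            run, prev_side = 0, None
            continue
        side = x > centre
        run = run + 1 if side == prev_side else 1
        prev_side = side
        if run >= run_length:
            return True
    return False

stable  = [13, 14, 12, 15, 13, 12, 14]   # scatters around centre 13.5
shifted = [14, 15, 16, 14, 15, 16, 15]   # all above centre: a shift
print(run_on_one_side(stable, 13.5))   # False
print(run_on_one_side(shifted, 13.5))  # True
```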
Usually there are no trends, and in those cases Fluor Hanford's analysts gather comparative data (per Baldrige), query our customers, query ourselves, and answer the question: is current performance acceptable? We look at both the average (level) and the variability of the data to answer this question. If current performance is acceptable, then our goal becomes maintain current performance, and our two-to-five year projection is that things will remain the same. If current performance is not acceptable, then we set our goal at achieve a significant improvement (as determined by SPC criteria). We must examine the common causes at work in the process over the long term, and determine the process changes that will give us the improvement.
Study of proposed process changes may allow a projection of expected benefit, Steve says. But do not use this projection as a target or criterion for success. The success criterion is the trend on the control chart, no matter how large or how small.
The process repeats. As we gain improvements, what may have been acceptable last year may become unacceptable this year, and a focus area for the next improvement. After each improvement, we also ask the question "is improvement needed?" to see if we need to find yet another process improvement.
If a company chooses to adopt this method, it will satisfy both Deming's management theory and the Baldrige criteria. More information on these methods is available here.
Steve is with ESH (environment, safety and health) at Fluor Hanford. His department has selected 10 key indicators (including five chosen by the US Department of Energy), and these are reported to their CEO. The ten indicators, known as the Fluor Hanford ESH Executive Scorecard, are all SPC charts, and a redrafted version of one of them was included in the email newsletter, just so that you know what they look like. These are raw data, actual numbers. They're not averaged, smoothed, or aggregated in any way. Steve's case is that these are the sort of data a CEO should see.
Six sigma III
Should have known better. Introducing discussions about statistics is like acting with animals or children: give them half a chance and they'll be front stage! Last issue, Steve Prevette took the six sigma troops to task for playing fast and loose with their stats, pointing out that not all distributions are bell shaped, and that because skewed and discontinuous distributions can have much higher 'failure rates' at the same sigma level, the Tchebychev Inequality should be used.
Prevette is using the wrong inequality, emailed Grant Blair (GrantBlair@aol.com). Camp and Meidel extended the inequality with the following modification: 1/(2.25k²), which at k = 6 gives 1/81. This means 98.8% of the data from a non-normal (unimodal) distribution will be within +/-6 sigma limits under the worst-case conditions required for six sigma control (a non-normal process with a 1.5 sigma drift). This is also consistent with Don Wheeler's empirical rule #3: "Given a homogenous set of data: ... Part 3: Approximately 99-100% of the data will be located within a distance of 3 standard deviations on either side of the mean."
Uh oh, I said to Steve, this looks like a DEBATE. Didn't put him off for a minute. "I am glad to see the Camp-Meidel extension brought up," he replied. "I figured I was going out to the edges of the statistical knowledge of most folks just by trying to bring in the Tchebychev inequality (which is the more general case). Camp-Meidel is only valid for a continuous distribution, which is monotonically decreasing from a single mode [are you still with us, folks?]. Even 1 in 81 is nowhere close to the commonly stated parts per million of the 6 sigma advocates."
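The bounds being argued over are easy to check. Tchebychev bounds the tail of any distribution by 1/k²; the Camp-Meidel refinement for unimodal distributions tightens that to 1/(2.25k²):

```python
# Worst-case fraction of data more than k sigma from the mean:
# Tchebychev holds for any distribution; Camp-Meidel assumes unimodality.
def tchebychev_bound(k):
    return 1 / k**2

def camp_meidel_bound(k):
    return 1 / (2.25 * k**2)

k = 6
print(tchebychev_bound(k))   # 1/36, about 2.8% outside +/-6 sigma
print(camp_meidel_bound(k))  # 1/81, about 1.2%: at least 98.8% inside
```

Even the tighter 1/81 bound is, as Steve notes, nowhere near the parts-per-million failure rates the six sigma literature quotes, which rest on near-normality.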
By the way, said Steve, the references I use are Shewhart's Economic Control of Quality of Manufactured Product, and Acheson Duncan's Quality Control and Industrial Statistics. Both reference Tchebychev (and yes, the C-M extension) when explaining the basis for control charting.
My guess is that the black belt brothers find all this a bit tiresome. It's not really about numbers, they mumble, it's about attitude: stretching for really ambitious improvement, applying resources long-term, using skilled positive deviants (see Newsletter 4) to continually show everyone there is a better way, signposting success with big dollar numbers.
It's seductive, it works, but it sure ain't Deming!
Oh, and if you need an update on what 'Deming' is, go to the top of the Exhibits list.