Airline crashes and warnings for the adventure industry

This week’s Maclean’s Magazine cover story is a provocative piece on airline reliability and pilot error. Without intending to, the author also provides a concrete example of Perrow’s Normal Accident theory, and parallel warnings for the adventure industry.
In Cockpit Crisis, author Chris Sorensen cites 50 commercial airline crashes in the past five years, all categorized as ‘loss of control’ crashes (as opposed to mechanical failure or collision, for example). Each of these crashes, according to the reports, contained an element of pilot error.
“They simply shouldn’t have happened,” he quotes an airline investigator as saying. “In many incidents, the airplane has gone into a stall and every automated safety procedure kicked in, but the pilots failed to recognize the situation and failed to recover.”
The author continues: “Why is this happening? Some argue that the sheer complexity of modern flight systems, though designed to improve safety and reliability, can overwhelm even the most experienced pilots when something actually goes wrong.” And this, in a nutshell, is Normal Accident theory in action.
The term was coined by Charles Perrow in his highly influential book Normal Accidents: Living with High-Risk Technologies (Basic Books, NY, 1984), in which he convincingly argues that accidents must be expected and considered ‘normal’ in any complex and tightly coupled activity. While his work centres on high-risk technologies such as nuclear power plants, oil refineries and airlines, his principles are instructive for any activity with exposure to risk.
Coupling and complexity are the two measures by which to assess an operation or system’s reliability. Complexity refers to how predictable the consequences are when something goes wrong. For much of the adventure industry, the exposure to risk is fairly discrete, and the mechanism is relatively predictable. Complexity, though, works back through the systems that house the activity. A fast-paced, multi-program operation is more complex than a single-trip mom-and-pop outfit: when something goes wrong, it can have unpredictable impacts on other clients, programs or areas of the operation. Most operations are likely more complex than they think, as airline disruptions, forest fires or road closures all have direct impacts and unforeseen consequences.
Coupling refers to how much slack is in the system. Consider it a sliding scale. A typical low-risk adventure activity, such as backcountry ski touring, has plenty of slack: there is time to plan and adjust as needed. When things start to go wrong, however, slack starts to disappear, options become limited, and events begin to unfold quickly. A group that finds itself in avalanche terrain sees slack start to disappear, and if caught in a slide, there are no longer any options. Back to the Maclean’s article: while on autopilot there is some slack in the system, but when the plane goes into a stall, events become tightly coupled. Combined with complexity, the situation often outstrips an individual’s ability to correct it.
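To make the two dimensions concrete, here is a minimal sketch in Python – with entirely invented scores, drawn neither from Perrow nor from the article – that places an operation in one of Perrow’s four complexity/coupling quadrants and shows coupling tightening as the ski-touring scenario above unfolds:

```python
# Hypothetical illustration of Perrow's two dimensions.
# All scores are invented for demonstration; this is not a published scale.

def classify(complexity: float, coupling: float) -> str:
    """Place an operation in one of Perrow's four quadrants,
    using an invented 0-10 scale for both dimensions."""
    c = "complex" if complexity > 5 else "linear"
    k = "tightly coupled" if coupling > 5 else "loosely coupled"
    return f"{c}, {k}"

# Slack disappearing as the ski-touring scenario unfolds:
scenario = [
    ("trip planning at home",      2.0),   # lots of slack: time, options
    ("touring below treeline",     4.0),
    ("entering avalanche terrain", 7.0),   # options narrowing
    ("caught in a slide",         10.0),   # no slack, no options left
]

for stage, coupling in scenario:
    print(f"{stage:27} -> {classify(complexity=6.0, coupling=coupling)}")
```

The numbers are arbitrary; the point is the shape of the progression – slack erodes gradually, then vanishes all at once.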
On individual ability, Sorensen’s article contained a line that is chilling when applied to the adventure sector: “…The focus in recent years has, perhaps myopically, been on simplifying and speeding up training regimes, secure in the knowledge that planes have never been smarter or safer.” Guide training, too, has shortened dramatically over the past 20 years. Relatively few operations still require a long apprenticeship; most instead fast-track new guides into positions of responsibility. Here, ‘smarter and safer’ is (myopically) applied to the program’s safety track record. Back to Normal Accident theory: any complex and tightly coupled operation – and all operations are complex and tightly coupled to at least some extent – needs to expect the worst-case scenario and, what’s worse, may not recognize it when it is upon them. As the airline investigator warned above, “the pilots failed to recognize the situation and failed to recover.”
In the appendix of the new Managing Risk book I propose a System Complexity Index – a means of measuring an adventure operation’s complexity and coupling – and, more importantly, strategies to minimize both. We’ve had great success applying this index to individual operations and using it to build system and program resiliency.
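The index itself lives in the book’s appendix, so what follows is purely a speculative sketch of how a weighted checklist of complexity and coupling factors might be tallied – the factor names and weights are invented for illustration and are not the published index:

```python
# Purely speculative sketch; the real System Complexity Index is defined
# in the Managing Risk appendix and may look entirely different.
# Factor names and weights below are invented for illustration.

COMPLEXITY_FACTORS = {
    "concurrent programs": 3,
    "staff shared across trips": 2,
    "dependence on third parties (flights, roads, permits)": 3,
}

COUPLING_FACTORS = {
    "fixed schedule with no buffer days": 3,
    "remote terrain with slow rescue access": 3,
    "single guide with no backup": 2,
}

def score(factors, present):
    """Sum the weights of the factors that apply to an operation."""
    return sum(weight for factor, weight in factors.items() if factor in present)

# A hypothetical operation:
operation = {
    "concurrent programs",
    "fixed schedule with no buffer days",
    "remote terrain with slow rescue access",
}

print("complexity score:", score(COMPLEXITY_FACTORS, operation))  # -> 3
print("coupling score:  ", score(COUPLING_FACTORS, operation))    # -> 6
```

A tally like this is only useful insofar as it points at where slack can be added back – buffer days, backup staff, decoupled programs – which is the kind of strategy the appendix is aimed at.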