Over recent weeks I have been listening to a number of episodes of the Take To The Sky podcast and watching some episodes of Aircrash Investigation (as it’s known on Disney+ in the UK). Naturally, one of the things both discuss is the probable cause of a crash, and this often leads to debate about the degree to which the cause was ‘human error’ (more often than not, meaning the cockpit crew). But, as I tweeted in relation to one of the episodes of Take To The Sky, I would suggest that all accidents are human error.
The dividing line between ‘human error’, ‘mechanical error’, and ‘computer error’, for example, is really not as straightforward as we often like to assume.
Machines and computers do not create themselves (yet). Somewhere in the process at least one human is involved. There is programming, input, and decision-making (or sometimes a lack of the appropriate amount of each). The human element is always there, somewhere. So if a computer does not do the right thing, part of the reason is that a human had not programmed it correctly, or had not looked at ways to stop the problem escalating into a larger one, to give two examples.
Let me give a non-aviation example to help illustrate the point. One of my father’s favourite examples to demonstrate the weakness of computers relates to two clocks that tell the wrong time. The scenario is as follows: one clock is consistently 10 minutes slow, the other clock has stopped completely. Which clock is better? Apparently a human would choose the first one as, once they are aware of the problem, they can take action so they know the correct time. The computer, on the other hand, would allegedly choose the second clock since it will tell the correct time twice a day (if it is an analogue clock that doesn’t differentiate between a.m. and p.m.). But the problem with this story/joke is that it overlooks the fact that the computer was programmed to think in this way by a human. The human could just as easily have programmed it to make the same calculation and reasoning that the person who chose the first clock did.
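To make that last point concrete, here is a minimal sketch in Python (the function names and scoring rules are hypothetical, my own devising for illustration) showing that the ‘computer’s choice’ is entirely a product of which rule a human writes: one rule scores clocks by how often they show exactly the right time, the other encodes the human reasoning that a known, constant offset can simply be corrected for.

```python
# Two rules for judging a broken clock. Neither is the computer's "own"
# judgement: a human wrote both, and the choice follows from the rule.

def times_exactly_right_per_day(offset_minutes, stopped):
    """Score a clock by how often per day it shows exactly the right time."""
    if stopped:
        return 2  # a stopped 12-hour clock is exactly right twice a day
    # a running clock with a constant non-zero offset is never exactly right
    return 0 if offset_minutes != 0 else 1440

def is_correctable(offset_minutes, stopped):
    """The human reasoning: a constant, known offset can simply be added
    back on every reading, so a slow-but-running clock is always usable."""
    return not stopped

slow_clock = {"offset_minutes": -10, "stopped": False}  # 10 minutes slow
dead_clock = {"offset_minutes": 0, "stopped": True}     # stopped completely

# Rule 1: prefer whichever clock is exactly right more often.
naive_choice = max([slow_clock, dead_clock],
                   key=lambda c: times_exactly_right_per_day(**c))

# Rule 2: prefer whichever clock can be corrected to give the right time.
human_choice = max([slow_clock, dead_clock],
                   key=lambda c: is_correctable(**c))
```

Under the first rule the program picks the stopped clock; under the second it picks the slow one. Swap the rule and the ‘computer’ changes its mind, because the judgement was human all along.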
Building upon this, in a lesson that was taught to me well when I visited the ANA Safety Education Center in Tokyo, accidents are the result of a combination of factors that come together. There will not be one smoking gun. By extension, there is almost certainly no accident where a human could not have designed some safety system to prevent the accident from happening. Whether such a system would be installed is often complicated by the cost impact, but all too often the problem can be that we look to learn from previous accidents rather than think about how to avoid future accidents. Of course learning from previous accidents is essential, but this ‘Tombstone Imperative’ (to use the title of Andrew Weir’s excellent book) is too restrictive. We need more human input into thinking about the possible consequences of things that could go wrong and how they could be avoided.
Of course, some accidents are so large that they come to be considered a ‘disaster’. The dividing line, as I have discussed in my book Dealing with Disaster in Japan and other publications such as Disaster Narratives by Design: Is Japan Different?, is not clear. Very often (and I do this in some of my teaching) we separate disasters into ‘natural disasters’ and ‘man-made disasters’. But (as I also point out in my teaching) the division is misleading. Almost all disasters are, by definition, human-made, since it is the human response or preparedness (or the lack of either) that turns an accident into a disaster (as I have also touched on in my post “The Impossible” – Brilliant Disaster Movie). And in this respect there can also be cultural reasons for disasters – as I have discussed in my academic work and in the post Disasters by the Book – How Japanese may be Designing Disasters.