This essay discusses the apparently logical proposition that if risk can be identified and controlled, industrial disasters are preventable. It first examines the concepts of ‘risk’, ‘identification and control’, ‘disaster’ and ‘preventable’ before exploring the nature of the industrial disaster through a systems approach; it will be shown that a disaster can be deconstructed to present a series of ‘hooks’ on which preventative action could be taken.

However, the nature of the system, and the organizational culture in which it operates, often prohibits those lessons from being applied. Furthermore, not only are there limits to such lessons, or isomorphic learning; the nature of industrial technology is such that accidents are unavoidable. The essay concludes by arguing that the efforts of risk managers should be focused not so much on preventing disasters as on withstanding them.

There is ‘no clear and commonly agreed definition of what the term “risk” actually means’ (Hood and Jones, 1996: 2). The 1992 Royal Society Report outlines the debate over whether risk should be analyzed using quantitative or qualitative methods. Favoured by engineers and mathematicians, risk may be understood quantitatively as ‘a combination of the probability, or frequency, of occurrence of a defined hazard and the magnitude of the consequences of the occurrence’ (Warner, 1992: 4). However, this approach has been criticized for being too narrow because it ‘imposes unduly restrictive assumptions about what is an essentially human and social phenomenon’ (Pidgeon et al., 1992: 89).

Rather, one should understand the ‘risk’ archipelago as comprising technological, social and natural hazards, which overlap to create hybrid hazards, to be understood through multiple sub-disciplines of study (Hood et al., 1992; Hood and Jones, 1996). To that end, risk is better understood as a socio-technical phenomenon through broader systems, organizational and cultural modes of study. Indeed, engineers now promote the requirement for a ‘clear appreciation of both technical components and social aspects of risk in order to manage that risk successfully’ (Royal Academy of Engineering, 2003).
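As a purely illustrative sketch (not drawn from the sources cited above, and using hypothetical figures), the quantitative definition quoted from Warner (1992) is commonly operationalized as the product of the frequency of a defined hazard and the magnitude of its consequences:

\[
R = P \times C
\]

where, for example, a hazard frequency of \(P = 10^{-4}\) occurrences per plant-year and a consequence of \(C = \pounds 5{,}000{,}000\) per occurrence would give an expected loss of \(R = \pounds 500\) per plant-year; it is precisely this narrow, expected-value framing that the qualitative critique above objects to.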

The terms ‘identified’ and ‘controlled’ should be understood as part of the risk management spectrum. Risk management means different things to different people depending on their background, but if our understanding of ‘risk’ is broadened, then so too should our understanding of risk management. Accordingly, this paper understands identification and control as ‘regulatory measures’ intended to ‘shape the development of [and] …cop[e] with risk’ (Hood and Jones, 1996). The meaning of the term ‘disaster’ is also subject to much debate. The nature of disasters is discussed below; suffice it to say that when a system breaks down, it is the catastrophic effects of that breakdown that we refer to as a disaster (Dombrowsky, 1995).

Industrial disasters are understood as those which are techno-centric, involve human interaction and occur in a plant or factory setting (Richardson, 1994). Deconstructing the nature of socio-technical disasters through a systems approach suggests that the root causes of disasters, and thus the associated risks, can be identified when one examines the interaction between human and technical systems. Barry Turner’s disaster incubation model suggests that disasters develop over several stages: during the ‘incubation period’, risks are misunderstood and warning signs are ignored, until a precipitating event triggers the onset of disaster. Turner argues that, using this systems model, risk managers should be prompted to look for latent risks or failures, taking into account both human and technical errors (Turner [1978] cited in Module 1, Unit 2: 2.2).

Other theorists provide further modelling: Bill Richardson profiles socio-technical disasters and discusses planning tools such as disaster sequences, signals and triggering events, which could be used as a framework for a ‘…greater understanding of the nature of the socio-technical disaster type of crisis [which] might lead to managerial action towards “defusing” the present [disaster]’ (Richardson, 1994). This systems approach seeks to identify lessons in order to predict future disasters within industry because of the isomorphic nature of these systems. From an organizational perspective, Toft and Reynolds (2005 cited in Module 1, Unit 5: 5.6) argued that although disasters are low-frequency events when viewed in the context of one organization, managers could benefit from isomorphic foresight if they viewed incidents occurring across the whole industry and learned from one another, where organizations and/or operations are similar.

Thus, given the availability of theoretical models and empirical evidence, it would appear a rational assertion that industrial disasters could be prevented because industry could learn from its own experiences. However, there are a number of barriers to this, both in general and, more specifically, due to limitations on isomorphic learning. The first general issue is that of reductionism (Elliott, 2000). There may be a tendency to take a simple approach to the causes of disasters (Richardson, 1994), reducing them to simplistic activity or blame; this diverts attention away from emergent properties, or previously unforeseen system interactions (Elliott, 2000), and inhibits a holistic approach which would otherwise consider the range of political, economic and social systems. Further, a reductionist approach can deliberately attempt to mask significant system interactions or properties in search of simple and easy-to-use conclusions (Module 1, Unit 3: 3.10).

This can be a result of a particular world view, or of organizational culture (see Sagan below). A reductionist approach cannot provide the scope or detail needed to fully maximize isomorphic learning. A second general issue is the assertion that disasters are so inherently complex that it is impossible to achieve a total understanding of what has occurred (Module 1, Unit 5: 5.12; Perrow, 1999), although that should not prohibit any useful analysis.

Thus, if learning from hindsight is key to risk management, an incomplete picture of hindsight will further impair the mitigation of disasters, let alone their prevention. There are also specific barriers to isomorphic learning which counter the assumption that organizations always seek to implement measures to mitigate or prevent disasters. In his landmark thesis based on research into the King’s Cross disaster, Brian Toft found a reluctance among employees to acknowledge shortcomings and argues that there exists a culture of denial of failure within organizations. If organizations cannot learn from their own mistakes, they will not learn from the mistakes of others, and as a result preventative or mitigation measures may not be implemented. Toft suggests that one reason for this may be a culture of blame and secrecy in reaction to the threat of litigation against the organization or individuals (1992, cited in Module 1, Unit 6: 6.18).

Simon Bennett (2001) adds to this discourse through the study of architectural lessons in response to terrorist bombings. He demonstrated a failure of hindsight after the US Embassies in Kenya and Tanzania were bombed in 1998, despite the Inman Commission having made a number of recommendations following the 1983 bombing of the US Embassy in Beirut. He cites possible reasons including: loss of momentum as memory fades and politics moves on; prohibitive financial constraints; and desensitization, or emotional immunity to catastrophe. Although terrorist and architectural rather than industrial in nature, this is further evidence of the failure to apply hindsight learning, and so it must be considered in an industrial context. Disasters can therefore be modelled to provide frameworks against which empirical evidence identifies lessons for the future, but these lessons may not be implemented due to the failure of hindsight learning and the failure to convert lessons into active learning (Toft, 1990).

There is, however, an argument that regardless of analyses, and notwithstanding failures to learn, industrial disasters cannot be prevented due to the inevitability of accidents. Charles Perrow (1999) claims that disasters cannot be prevented [through learning] because disasters are not identical and have infinite failure modes: ‘[m]ost high-risk systems have some special characteristic, beyond their toxic, explosive or genetic dangers, that makes accidents in them inevitable, even “normal”’ (Perrow, 1999: 4). This is because interactive complexity and tight coupling within such systems mean that accidents cannot be foreseen or prevented and are therefore inevitable. Perrow’s Normal Accident Theory (NAT) is not without its critics; Dulac, Leveson and Marais (2004) criticize Perrow for being overly pessimistic and suggest that there are mechanisms other than redundancy to mitigate against accidents.

It is beyond the scope of this paper to debate fully the validity of NAT; however, there are some compelling arguments for its utility: Anke Mussig (2009), for example, uses NAT as a framework to investigate the causes of the recent financial crisis and provides useful insight. At the very least, NAT cannot be discounted as a theory which adds to the growing body of evidence against the possibility that all industrial disasters can be prevented. This body of evidence is further bolstered by Scott Sagan (1993) who, in his study of nuclear weapons, extended NAT and called for even more ‘pessimism’, arguing that inevitability is also due to organizational, economic and cultural reasons. Goals, targets and self-interest mean that safety is only one of a number of competing objectives; when failure occurs, personnel deny responsibility or engage in faulty reporting to serve non-safety objectives (an example of the passive-active learning disconnect). By coupling Perrow’s and Sagan’s arguments, we can discern a socio-technological thesis (although Perrow also commented on organizational decision-making through the garbage can model) as to why industrial disasters are not preventable.

Theories need evidence; whilst theories should be considered critically, the authors cited above have of course conducted empirical research to formulate their arguments. Given that risk – and therefore risk management – has been understood since ancient Greek times (Bernstein [1996] cited in Module 1, Unit 5: 5.4), a glance through history (Richardson, 1994: 44; HSE, 2011) shows that industrial disasters and accidents have continued to occur; even if one disagrees with NAT and with the limitations of hindsight learning, it is difficult to argue that all industrial disasters are preventable. If one reaches the conclusion that, despite the efforts of risk management, not all industrial disasters can be prevented, then an alternative course of action must be considered as to ‘what to do’. Risk management is full of practical options (see Dulac et al., 2004), but there is an overarching pair of opposing views: those who suggest that disasters can be addressed by adhering to a doctrine of anticipationism, applying causal knowledge of failure to ex ante action (in other words, hindsight learning); and those who favour an alternative doctrine of resilience, which accepts that complex failures cannot be predicted and therefore seeks to withstand them (Hood et al., 1992: 159; Hood and Jones, 1996: 9).

From a resilience perspective, David Collingridge (1996) argues that, as technological complexity grows, one cannot predict future events: ‘decision makers can never relax in the assurance that they have identified the very best option: any choice may be shown to be mistaken by future events that surprise decision-makers’ (Collingridge, 1996: 45). Whilst commentators do not argue for either absolute pole, one can assert that if one cannot fully predict the future then one cannot be satisfied that all possible steps have been taken to prevent a disaster, and so resilience must be considered.

In conclusion, despite the best efforts of risk management, it is not possible to prevent all industrial disasters. Although socio-technological disasters can be modelled and learned from, there are plausible barriers to active learning. Furthermore, there is a body of theory and evidence to suggest that disasters are inevitable for a combination of socio-technological reasons, and so one should adopt an outlook of resilience. The theories used here are themselves subject to debate and criticism, but at the most basic level it would be unsafe to assert absolutely that all industrial disasters are preventable, given the empirical evidence researched by others. Whilst one could address each and every strand of the failure-of-hindsight-learning and NAT arguments and charge industrial organizations with addressing them, the reality is one of competing priorities; whilst one or several organizations may comply, one cannot guarantee that all will, and therefore one cannot prevent all disasters.

A further reality is that technology continues to increase in complexity and there will always be a human interface (see, further, the human versus automation debate), whether through mistake or design. The socio-technological construct is at the heart of why one cannot prevent all industrial disasters.