Long tail uncertainty distributions in novel risk probability classification

by Oliver Schwabe, John Erkoyuncu and Essam Shehab


Successful engineering, manufacturing, supply and service of advanced aerospace products benefits from the effective capture, prediction and reduction of risk probability. Based on an analysis of the risk probability of 15,624 group-wide, largely unrelated enterprise risk management entries at Rolls-Royce plc., an aerospace manufacturing and service company, non-random patterns of probability were identified in approx. 70% of aggregated risk profiles, of which approx. 40% exhibit long tail (leptokurtic) characteristics. Future research is recommended to identify relevant parametric risk probability variables and their relationships, and to determine whether risk probability can be predicted.
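The paper does not specify how the leptokurtic profiles were detected, but a standard test is to compute the excess kurtosis of each aggregated profile: a normal distribution has excess kurtosis 0, and positive values indicate a long (heavy) tail. A minimal sketch, assuming profiles are available as lists of numeric probability scores (the function names, threshold, and sample data below are illustrative, not from the paper):

```python
def excess_kurtosis(samples):
    """Population excess kurtosis: fourth standardized moment minus 3.
    0 for a normal distribution, > 0 for leptokurtic (long-tail) data."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    if var == 0:
        return 0.0  # degenerate profile: all scores identical
    m4 = sum((x - mean) ** 4 for x in samples) / n
    return m4 / var ** 2 - 3.0

def is_long_tail(samples, threshold=0.0):
    """Flag a risk profile as leptokurtic when its excess
    kurtosis exceeds the (illustrative) threshold."""
    return excess_kurtosis(samples) > threshold
```

For example, a profile of mostly low scores with a single extreme outlier (`[0.0] * 50 + [100.0]`) is flagged as long-tailed, while an evenly spread profile such as `[1.0, 2.0, 3.0, 4.0, 5.0]` has negative excess kurtosis and is not.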

1-minute pitch

Paper presenter
Name: Oliver Schwabe
Organization: Cranfield University
Email: o.schwabe@cranfield.ac.uk

2 thoughts on “Long tail uncertainty distributions in novel risk probability classification”

  1. You state that the common approach of assessing impact before probability raises the question of whether higher-impact risks are systematically assigned higher probabilities in order to prioritize their potential mitigation actions, so that mitigating impact appears to be given priority over reducing probability.

    With some of the ‘scandals’ involving multinational companies in recent years, this may be understandable. Could it also be that the impact in itself was not high enough to provoke a mitigation action, and that this was balanced by higher risk scores (a human element to prevent failure)?

    What would be a good approach to take the ‘human element’ out of the equation?

  2. Hello Edwin – good point. In the risk management community we are having multiple discussions about the impact of human bias on risk assessment – we are unsure what might need “correction”, since overall everyone feels comfortable that the “relative” assessment is good. One key measure we are looking at, however, is creating a better “narrative” for assessing probability – at the moment there are only a few buckets (i.e. Very High or Very Low) without any explanation of when to choose which. The impact categories in the relevant scoring schemes, by contrast, are extremely detailed. Having a common narrative for the assessment would help “even out” the probability scores assigned and perhaps influence some behavior. In parallel we are working to help people phrase threats as opportunities. Overall, the profiles I have been seeing in the aggregated probability scoring help point to outliers that may well be triggered by human bias in various forms – the profiles seem to describe general behavior and paradigms in the community and hence may help to “normalize” risk item scoring that does not “fit”. In the end, though, we should remember that human bias can also reflect tacit knowledge and at least deserves thoughtful consideration. If you are interested in how narrative can reduce bias, please do contact me and I can share a few good links.


