Event Interval Probability

Event interval probability (EIP) is a data analysis method that combines statistics, reliability theory, probability theory, and Monte Carlo simulation to identify and quantify change in the systems that generate events of interest, such as failures, accidents, and injuries. The objective is to recognize statistically significant change in the event-generating system at the earliest opportunity, sometimes after even a single failure. Early identification of statistically significant reliability degradation provides an opportunity to intervene with corrective action that avoids future failures or accidents, in situations where the significance of the events would otherwise be overlooked or underappreciated. The method is applied to individual events and to contiguous groups of events, such as the first Boeing 737 MAX crash, the second crash, the two crashes combined, and the risk of a third crash.

Low Poisson probability values reject the null hypothesis that the events are random events in a homogeneous Poisson process, the model of an ideal steady-state repairable system. When the null hypothesis is rejected on the basis of those probability values, the alternative hypothesis is accepted: the events are not simply random failures or accidents in an otherwise reliable system; the failure interval(s) are statistical evidence that the system is unreliable or unsafe. Unreliability probability distributions are then developed with Monte Carlo computer simulation to quantify the risk of continued operation of the unreliable system.

Initially developed to support reliability in the process industries, EIP has also been applied to commercial aircraft accidents and crashes. The method's value in risk-based decisions is demonstrated in peer-reviewed technical papers and videos on aircraft ranging from the DC-6 in 1947 to the more recent Boeing 737 MAX, widely described as one of the greatest engineering failures of recent decades.
Had EIP been applied to both the main events and their precursors, the statistical unreliability of these aircraft could have been recognized in time to potentially avoid nearly 1,000 fatalities; the method, however, had not yet been developed and so went unused. It is intended for future use on all in-service safety-critical systems, and on other systems for which events are highly undesirable.
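The Poisson null-hypothesis test described above can be sketched in a few lines of code. The failure rate and interval below are hypothetical placeholders, not figures from the EIP papers; the sketch only shows the mechanics of computing P(N(t) ≥ k) for a homogeneous Poisson process and comparing it with a significance level.

```python
import math

def poisson_tail(k, rate, t):
    """P(N(t) >= k) when events follow a homogeneous Poisson process
    with the given rate: 1 minus the Poisson CDF evaluated at k - 1."""
    mu = rate * t  # expected number of events in the interval
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

# Hypothetical example: a system assumed to fail at a steady-state rate
# of 1 event per 100,000 operating hours experiences 2 events within
# 5,000 hours. How likely is that under the null hypothesis?
p = poisson_tail(2, 1 / 100_000, 5_000)
if p < 0.05:
    print(f"p = {p:.4f}: reject the homogeneous-Poisson null hypothesis")
else:
    print(f"p = {p:.4f}: events are consistent with random failures")
```

A very small tail probability means intervals this short are implausible for the assumed steady-state rate, which is the statistical signal EIP uses to trigger intervention.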

PMF Series (Probability Mass Function Series)

PMF Series converts simple, commonly available system data into a contiguous sequence of probability mass functions that characterize probabilistic system performance over all time intervals. A small quantity of raw data is arranged in a number system designed for this purpose; the arrangement yields a vast number of random-variable values for availability and reliability. With computer processing, these values form a dense series of empirical probability distributions, and risk-based decisions regarding system availability and capacity are then made by incorporating this probability data. The required high-level raw data are available for any important system, the process is simple, and results converge toward exact values as computational time increases. This makes it practical to apply probabilistic risk assessment to many system availability and capacity applications where risk assessment has never before been considered.
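The source does not publish the number system itself, but the general idea of turning raw interval data into an empirical probability mass function can be illustrated with a generic Monte Carlo sketch. Everything below (the function name, the interval data, the bin resolution) is hypothetical: observed uptime and downtime intervals are resampled to build an empirical PMF of system availability over a fixed number of repair cycles.

```python
import random
from collections import Counter

def availability_pmf(uptimes, downtimes, n_cycles=10, n_trials=20_000,
                     resolution=0.01, seed=1):
    """Empirical PMF of availability over n_cycles up/down cycles,
    built by resampling the observed uptime and downtime intervals."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_trials):
        up = sum(rng.choice(uptimes) for _ in range(n_cycles))
        down = sum(rng.choice(downtimes) for _ in range(n_cycles))
        a = up / (up + down)                       # availability for this trial
        counts[round(a / resolution) * resolution] += 1  # bin to resolution
    return {a: c / n_trials for a, c in sorted(counts.items())}

# Hypothetical interval data in hours: uptimes between repairs, repair durations.
pmf = availability_pmf(uptimes=[120, 450, 300, 800, 95],
                       downtimes=[4, 12, 8, 2, 24])
```

As the sketch suggests, the distribution sharpens as more trials are run, which mirrors the article's point that results improve with increasing computational time; the actual PMF Series construction differs in how the random-variable values are generated.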