On the Functional Failure Concept and Probabilistic Safety Margins: Challenges in Application for Evaluation of Effectiveness of Shutdown Systems

Abstract of the technical document presented at:
The 2015 International Congress on Advances in Nuclear Power Plants (ICAPP 2015)
Nice, France
May 3–6, 2015

Prepared by:
Dumitru Serghiuta and John Tholammakkil
Canadian Nuclear Safety Commission

Abstract

Safety assessments frequently result in a complex set of requirements for the design and operation of a system. Meeting the requirements is typically judged to mean that there is no undue risk. "Undue risk" remains unquantified in deterministic safety assessments. The presumption is that meeting the requirements guarantees adequate protection; i.e., that the (unquantified) risk is acceptably low. Probabilistic risk assessment (PRA) has traditionally been used to complement the deterministic assessment by quantifying the risk and determining its main contributors. However, while the impact of redundancy has been explicitly modelled and quantified, the PRA does not explicitly take safety margins into account. This makes it difficult to judge the quantitative impact of changes in margins on plant risk.

A review of the current status and trends has indicated that level 3 reliability approaches and the application of the "functional failure" concept – in the area of quantification of margins and evaluation of the impact of changes in margins on risk – are gaining popularity and interest for application by industry and regulators. A functional failure is defined as the inability of a system to perform its mission because of deviations from its expected behaviour. Within a reliability physics framework, a functional failure occurs whenever the applied "load" exceeds the "capacity". In current PRA practice, the functional failure probability is assigned a value of zero for a success sequence and unity otherwise, based on the results of deterministic analysis or on engineering judgment comparing the characteristic values of load and capacity. However, even in a success sequence, there is always a possibility that functional failure will occur, because load and capacity are not single values but have probability distributions arising from the uncertainties associated with them. Conversely, in a core damage path, the functional failure probability may be less than unity.
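As a generic illustration of this load-capacity formulation (the notation below is standard reliability-physics shorthand and is not taken from the paper itself), the functional failure probability can be written as

    P_f = P(L > C) = \int_0^{\infty} F_C(\ell) \, f_L(\ell) \, d\ell

where f_L is the probability density of the applied load and F_C is the cumulative distribution of the capacity; P_f reduces to exactly zero or unity only in the limiting case where both distributions collapse to point values.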

The use of the level 3 reliability approach and the concept of functional failure probability could provide the basis for defining a safety margin metric that includes a limit on the probability of functional failure. This would be in line with the definition of a reliability-based design, in which the probability of failure is less than some acceptable value. It would also allow quantification of the level of confidence, through explicit modelling and quantification of uncertainties, and provide a better framework for representing the actual design and optimizing design margins within an integrated probabilistic-deterministic model, subject to frequency-consequence constraints and deterministic defence-in-depth requirements.
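For illustration only (the symbols below are generic and are not defined in the abstract), such a reliability-based acceptance criterion could take the form

    P( P_f \le P_f^{lim} ) \ge \beta

i.e., there is at least a confidence level β, reflecting the epistemic uncertainties, that the functional failure probability does not exceed an acceptable limit value P_f^{lim}.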

Using the concept of functional failure probability, the effect of changes in the effectiveness of a protective system on the level of safety, or conversely on risk, can be quantified by introducing functional failures into a risk-informed procedure or a full PRA. As a first step, and to investigate how a practical procedure would work in an application, a limited feasibility study has been conducted. The study examines a stochastic-deterministic approach, based on the representation of uncertainties by subjective probabilities, for evaluating bounding values of the functional failure probability and assessing probabilistic safety margins for a postulated CANDU large-break loss-of-coolant accident (LBLOCA).
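To indicate how such a stochastic evaluation might be organized, the following sketch – purely hypothetical in its distributions, parameter values and sample sizes, and not drawn from the feasibility study – propagates subjective (epistemic) uncertainty in the load and capacity distributions and reports an upper-percentile, bounding estimate of the functional failure probability:

    # Hypothetical sketch: bounding estimate of functional failure probability
    # P_f = P(load > capacity) under epistemic uncertainty in the distribution
    # parameters. All distributions and numbers are illustrative only.
    import numpy as np

    rng = np.random.default_rng(1)

    N_EPISTEMIC = 200     # outer loop: sampled distribution parameters (subjective)
    N_ALEATORY = 20_000   # inner loop: load/capacity realizations per parameter set

    pf_samples = []
    for _ in range(N_EPISTEMIC):
        # Subjective (epistemic) uncertainty on the mean load and mean capacity
        mu_load = rng.normal(100.0, 5.0)   # surrogate for a calculated safety variable
        mu_cap = rng.normal(130.0, 4.0)    # surrogate for an acceptance limit

        # Aleatory variability about each sampled mean
        load = rng.normal(mu_load, 8.0, N_ALEATORY)
        capacity = rng.normal(mu_cap, 6.0, N_ALEATORY)

        pf_samples.append(np.mean(load > capacity))

    pf_samples = np.array(pf_samples)
    print(f"median P_f  : {np.median(pf_samples):.2e}")
    print(f"95th pct P_f: {np.quantile(pf_samples, 0.95):.2e}  (bounding estimate)")

In an actual application, the inner-loop samples would come from best-estimate code calculations rather than from assumed analytic distributions.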

This paper discusses some key challenges identified in the practical application of the functional failure concept for the evaluation of probabilistic safety margins. A risk-informed formulation is first introduced using the "Swiss cheese" model. Key challenges in realistically estimating the probability of exceeding a postulated (regulatory) limit include:

  • Predictive confidence of analysis tools – When validated models are applied beyond the limited range of validation experiments, how can confidence in the results be quantified? Most current computer codes are based on models developed 30–40 years ago to meet different expectations. These models may not satisfy the demands of current efforts to quantify accuracy and precision, and the development of new models takes time. A metric that quantifies the trade-off between fidelity to data (quantified accuracy), robustness to uncertainty and predictive confidence is needed.
  • High dimensionality of uncertainty space – The number of parameters to be considered in uncertainty analysis could make the calculations intractable. Current formal procedures for selection of key phenomena and parameters may not be suitable for complex applications involving multidisciplinary simulations at the subsystem and system level.
  • Uncertainty quantification and sensitivity analysis – Guidelines for developing an adequate framework that includes both of these complementary components are needed.
  • Selection of statistical model and algorithm – The possibility of heavy tails, the reliance on statistical inference and the interpretation of (frequentist) hardware failure probability versus (epistemic) functional failure probability all require justification of the selected statistical model and algorithm, as illustrated in the sketch following this list.
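To illustrate why the choice of statistical model needs justification (a hypothetical example with synthetic data, not results from the study), the same sample can yield very different estimates of the probability of exceeding a postulated limit depending on whether a light-tailed or a heavy-tailed model is fitted:

    # Hypothetical illustration of statistical-model sensitivity: fitting a
    # light-tailed (normal) versus a heavy-tailed (Student-t) model to the same
    # synthetic sample gives different exceedance probabilities for a limit.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    sample = rng.standard_t(df=4, size=200) * 10.0 + 1000.0  # synthetic code outputs
    limit = 1050.0                                            # postulated limit

    # Fit both candidate models to the same data
    mu, sigma = stats.norm.fit(sample)
    df, loc, scale = stats.t.fit(sample)

    p_exceed_norm = stats.norm.sf(limit, mu, sigma)
    p_exceed_t = stats.t.sf(limit, df, loc, scale)

    print(f"normal fit   : P(X > limit) = {p_exceed_norm:.2e}")
    print(f"Student-t fit: P(X > limit) = {p_exceed_t:.2e}")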

It is concluded that more research is needed in this area and that deterministic-probabilistic approaches may be a reasonable intermediate step.

To obtain a copy of the abstract’s document, please contact cnsc.info.ccsn@cnsc-ccsn.gc.ca or call 613-995-5894 or 1-800-668-5284 (in Canada). Please provide the title and date of the abstract.
