Variability Uncertainty Risk Characterization Method
from (Gombas et al. 2003); the variability in the ability of a Listeria population to grow in a cheese taken at random, linked to strain-to-strain variability and to cheese-to-cheese variability; the specific ability of a population of Listeria to grow, linked to variability in the time and temperature of storage; and the variability in the number of cells per serving when a portion, which varies in size, is taken from a whole wheel of soft-ripened cheese. Such heterogeneity in the exposure leads to heterogeneity in the risk per serving: the risk per serving varies over a reference population of servings.
Uncertainty Sources
Uncertainty about how the risk per serving varies arises from our lack of perfect knowledge, and it may be related to the model used to characterize the risk, to the parameters used to provide values for the model, or to both. In some cases we can reduce uncertainty by obtaining better information, but this may not always be possible. Having uncertain results implies that one might make a less-than-optimal risk decision because one may expect one outcome while something quite different actually occurs (Thompson 2002).
Sources of uncertainty include model uncertainty, data uncertainty and estimator uncertainty. Model uncertainty includes
• how one represents, summarizes or simplifies physical phenomena;
• how one represents the methods used to sample information from physical phenomena; that is, the umbrella of model uncertainty includes the basic notion of how one infers from a sample to the sampling population and how one extrapolates from the sampling population to the reference population (the population in which the risk assessment is interested); and
• how one represents the sampling distribution for the model’s basic outputs.
Data uncertainty includes
• inference from small samples, via a particular model, to the sampling population from which the data come; and
• lack of a clear definition of the sampling population and lack of a clear description of how the data were sampled from that sampling population.
Estimator uncertainty arises because simulations generate only sample estimates of the summary statistics that we use to summarize the risk outputs’ distributions.
8.3.2. Implementing Variability and Uncertainty Separation
Indeed, the whole model is a mathematical combination of model inputs. Most of the inputs are not known perfectly; rather, quantifiable uncertainty is associated with the “best estimate” of these parameters. Just as a Monte-Carlo simulation transfers the variability in model inputs to the model outputs, it is also possible to transfer the uncertainty associated with each input, so that the simulation also produces a measure of the amount of uncertainty around the risk outputs’ variability. A second-order Monte-Carlo simulation (Frey 1992) was built to enable measurement of the uncertainty of the summary statistics for each of the risk outputs’ distributions. The simplified process is:
1. to derive a parametric or empirical distribution of uncertainty for each uncertain parameter;
2. to draw one value for each of these uncertain parameters from its distribution;
3. to run a typical one-dimensional Monte-Carlo simulation using these values, considered as if fixed; this simulation leads to a distribution of variability of the risk output conditional on the particular set of values of the uncertain parameters, and various statistics (mean, quantiles) are evaluated from the empirical distribution to characterize this variability distribution; and
4. to loop over the 2nd and 3rd steps a large number of times (say n_u).
At the end of the process, n_u typical one-dimensional Monte-Carlo simulations have been performed, leading to n_u sets of distributions for each of the risk outputs and n_u sets of their summary statistics, i.e. n_u means, n_u 0.01 quantiles, …, n_u 0.99 quantiles.
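The looped steps above can be sketched as a nested simulation. Everything concrete in this sketch — the toy dose-response model, the parameter distributions, and the sample sizes — is a hypothetical stand-in for illustration, not the model of this assessment:

```python
import numpy as np

rng = np.random.default_rng(42)

n_u = 200      # number of uncertainty iterations (outer loop)
n_v = 10_000   # number of variability iterations (inner loop)

# Step 1 (illustrative assumption): uncertainty distributions for two
# hypothetical model parameters.
def draw_uncertain_params(rng):
    mu = rng.normal(loc=1.0, scale=0.2)   # uncertain mean of log10 dose
    sigma = rng.uniform(0.5, 1.0)         # uncertain spread of log10 dose
    return mu, sigma

quantile_grid = [0.5, 0.95, 0.99]
stats = {"mean": [], 0.5: [], 0.95: [], 0.99: []}

for _ in range(n_u):                        # step 4: loop over steps 2-3
    mu, sigma = draw_uncertain_params(rng)  # step 2: fix the uncertain params
    # Step 3: one-dimensional variability simulation, conditional on (mu, sigma).
    log10_dose = rng.normal(mu, sigma, size=n_v)
    risk = 1 - np.exp(-1e-7 * 10 ** log10_dose)  # toy dose-response model
    stats["mean"].append(risk.mean())
    for q in quantile_grid:
        stats[q].append(np.quantile(risk, q))

# n_u estimates of each summary statistic of the variability distribution
print(len(stats["mean"]))  # prints 200
```

Each list in `stats` now holds the n_u estimates of one summary statistic of the risk-per-serving variability distribution, which is exactly the raw material summarized in the next paragraph.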
We summarize the result of the second-order Monte Carlo simulation using the median and the 0.025 and 0.975 quantiles of the uncertainty distribution of the n_u estimates of the summary statistics of the risk outputs’ variability distributions. That gives a credible interval (uncertainty interval) for each risk output summary statistic: a credible interval for the mean of the risk-per-serving variability distribution; a credible interval for the 95th percentile of the risk-per-serving variability distribution; etc.
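For instance, given the n_u estimates of one summary statistic, the 95% credible interval is simply the 0.025 and 0.975 quantiles taken over those estimates. The estimates below are a hypothetical, simulated stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the n_u estimates of one summary statistic
# (e.g. the mean risk per serving) from the second-order simulation.
n_u = 200
mean_risk_estimates = rng.lognormal(mean=np.log(1e-6), sigma=0.3, size=n_u)

# Point estimate (median) and 95% credible (uncertainty) interval.
lo, med, hi = np.quantile(mean_risk_estimates, [0.025, 0.5, 0.975])
print(f"mean risk per serving: {med:.2e} (95% CI: {lo:.2e}, {hi:.2e})")
```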
With large enough simulations, these summary statistics of how the risk outputs’ distributions (and so those distributions’ summary statistics) change across the uncertainty about the inputs converge to an expression of our uncertainty about the risk outputs’ distributions. Thus, this second-order Monte-Carlo simulation allows evaluation of the uncertainty around estimates of the risk or of any other output. Figure 19 illustrates this: it describes how the risk per serving varies (black), the uncertainty about the whole distribution (light grey), and the uncertainty (blue) about a particular reference point (solid, vertical line) in how the risk per serving varies over some reference population. Note, nevertheless, that only part of the overall uncertainty is measured here, i.e. part of the data uncertainty.
Figure 19: Illustration of second-order Monte-Carlo results.
8.3.3. Relative Sizes of Variability and Uncertainty in Modeled Risk Outputs
It is useful to measure and compare the contributions of uncertainty and variability to the final risk outputs. To accomplish this, Ozkaynak et al. (2009) proposed metrics to compare the order of magnitude of the uncertainty with that of the variability (Figure 20). Given
- A, the median of the uncertainty distribution of the n_u medians of the variability distribution;
- B, the median of the uncertainty distribution of the n_u 95th percentiles of variability;
- C, the 95th percentile of the uncertainty distribution of the n_u medians of the variability distribution; and
- D, the 95th percentile of the uncertainty distribution of the n_u 95th percentiles of variability,
Ozkaynak et al. (2009) proposed as measures of the variability and uncertainty:
- the Variability Ratio = B/A,
- the Uncertainty Ratio = C/A, and
- the Overall Uncertainty Ratio = D/A.
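As a sketch (with a purely hypothetical lognormal risk model), these ratios can be computed from a matrix of simulated risks whose rows index the n_u uncertainty iterations and whose columns index variability:

```python
import numpy as np

rng = np.random.default_rng(1)

# risks[i, j]: risk for variability draw j under uncertainty iteration i
# (hypothetical lognormal model; not the assessment's actual model).
n_u, n_v = 200, 5_000
mu = rng.normal(-6.0, 0.3, size=(n_u, 1))           # uncertain log10 median risk
risks = 10 ** rng.normal(mu, 1.0, size=(n_u, n_v))  # variability given mu

medians = np.median(risks, axis=1)       # n_u medians of variability
p95s = np.quantile(risks, 0.95, axis=1)  # n_u 95th percentiles of variability

A = np.median(medians)          # median (uncertainty) of the medians
B = np.median(p95s)             # median (uncertainty) of the 95th percentiles
C = np.quantile(medians, 0.95)  # 95th pct (uncertainty) of the medians
D = np.quantile(p95s, 0.95)     # 95th pct (uncertainty) of the 95th percentiles

variability_ratio = B / A
uncertainty_ratio = C / A
overall_uncertainty_ratio = D / A
print(variability_ratio, uncertainty_ratio, overall_uncertainty_ratio)
```

A variability ratio much larger than the uncertainty ratio suggests the spread in the risk output is dominated by true population heterogeneity rather than by lack of knowledge, and vice versa.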
Figure 20: Illustration of the measures of variability and uncertainty (Ozkaynak et al. 2009).