THE SCIENTIFIC METHOD AND RISK MANAGEMENT

Fritz A. Seiler
Sigma Five Associates
Albuquerque, NM, USA

Joseph L. Alvarez
Auxier & Associates
Knoxville, TN, USA 

ABSTRACT

The need for a competent, consistent use of the Scientific Method in risk assessment, and of an analogous set of "Good Practices" in risk management, is discussed. Whenever the Scientific Method has been applied, science has sooner or later succeeded in identifying the best model; and what we call modern science is the set of models which has resulted from a consistent application of this process. In risk assessment, the same method should be applied to all models which are needed for the estimation of a risk. This has yet to be done in a uniformly competent and consistent manner. In risk management, however, there is no analogue to the Scientific Method, although there is little doubt that a set of "Good Practices in Risk Management" is needed. It is proposed here that this set of rules be based on the same basic rationales as the Scientific Method. Two of its mainstays should be Honesty and Transparency, and the third should be a risk manager's Analogue to the Hippocratic Oath of the medical profession: Honesty, in stating clearly the meaning of the scientific results as well as detailing which items in the decision making process are based on science and which are based on societal considerations; Transparency, in describing the decision making process in such a concise manner that other decision makers can duplicate every step of the process; and the Analogue of the Hippocratic Oath, in treating people and the environment with the same consideration that doctors give to their patients. Following these three rules would give risk management an ethical base and would seem to be a prerequisite for generally acceptable risk management decisions. 

INTRODUCTION 

Ever since René Descartes published his Discourse on the Method of Properly Conducting One's Reason and of Seeking the Truth in the Sciences in 1637 (1), the dramatic progress of science has been increasingly dependent on a strict application of the Scientific Method. Although there have been a few setbacks in which the appropriate model was initially rejected, sooner or later the theory most appropriate to the state of knowledge has won recognition. Thus in the past, the method has been invaluable, and today large numbers of models are tested successfully every year by confirming or contradicting their predictions. This procedure is essential for the sound progress of any science, because it is, in essence, a scientific accountability process, a method of self-regulation.

In this manner, appropriate models are identified, and false assumptions and invalid or outdated models can and will be weeded out. Thus, all predictions of the special and general theory of relativity have been confirmed up to the present, and some erroneous claims such as the existence of N-rays and of cold fusion have been rejected (2-4). Generally applied, the Scientific Method is responsible for the set of models which make up modern science, and it keeps extending its area of applicability. In recent decades, however, the increased application of conservatism to risk assessment and management has led to a "play-it-ultra-safe" paradigm, and also to increased attempts to dispense with the Scientific Method altogether (3). In fact, there are many disciples of the social sciences who quite openly advocate doing risk management without scientific input, based on the blatantly political rationale which equates perception with truth. As the Scientific Method is the way by which science ensures accountability, this attempt to decide scientific questions by unscientific means can only be interpreted as an attempt to evade accountability. As long as risk assessment makes scientific claims, it has to be subject to the scrutiny of the Scientific Method. 

It should be borne in mind, however, that in contrast to risk assessment, decision making in risk management has to take into account a number of additional, mostly societal concerns. Also, some risk management needs will influence the risk assessment process in the form of particular questions to be asked and concerns to be addressed. However, the actual scientific questions must still be solved in a purely scientific context. The dose-response relationship of cancer induced by radiation, for example, is not a societal but a purely scientific problem. In addition, all risk assessment questions that need to be asked should be addressed, regardless of the societal input; and finally, all models and input assumptions should be subjected to careful scrutiny according to the Scientific Method. 

Defensible risk management, however, not only has to use the best possible science, but also has to fulfill several other important requirements. On the one hand, many societal concerns and value judgements have to be taken into account, and on the other hand, decisions have to be made under the severe financial constraints of the present day. Thus, the funds available for environmental restoration have to be allocated according to societal priorities. As some of these demands run counter to each other, decision making is much more difficult than conducting a simple cost-benefit evaluation, such as a cost-per-life-saved calculation. The cost in money and in human lives, and often also the time needed to effect a remediation, have to be balanced against the lives or health effects expected to be saved by the action contemplated. What is often overlooked in evaluating the "best decision" is that the risk management team also has to be accountable for its decisions. Only too often, the "clean it up, whatever it costs" rationale is used, a rather convenient position which completely absolves the decision makers from any responsibility. Similarly, based on a congressional mandate, EPA passes environmental regulations "according to public demand," and then defends excessive requirements as being "in the public interest". This circular argument leads to a situation in which the responsibility is spread out over many organizations and authorities, and in the end, nobody is held accountable.

It is the purpose of this paper to discuss the application of the Scientific Method in the context of risk assessment and management, to elucidate the role of uncertainties in its application, and to discuss risk management in terms of a set of "Good Practices." 

THE SCIENTIFIC METHOD 

Over the centuries since 1637, Descartes' method (1) has evolved into the Scientific Method, which is essentially a set of "Good Practices" used in the pursuit of "Good Science" (4-9). So, contrary to some widespread beliefs, the Scientific Method involves considerably more than hypothesis testing, which is just the tip of the iceberg. In fact, practicing "Good Science" involves a number of additional requirements, actually preconditions, for meaningful hypothesis testing. Even though these preconditions are based mostly on common sense and ethics, it may be useful to review them here, using a summary in the form of six requirements given by Lett (9). The first five are preconditions that need to be fulfilled in order to make the sixth meaningful.

The first requirement demands that, in order to make a model with predictive capabilities, there must be information in the form of a data set which is sufficient for the experimental determination of the theoretically predicted value or, as the case may be, for the determination of the parameters of possible models. Often, several models are possible, and a decision between them is possible only if more data are available in an area where the model predictions differ, or if additional mechanistic information allows a selection. The second requirement is that the data set, or at least some of its critical information, can be replicated in other experiments done by other scientists. In many cases, such as cold fusion (2-4), this procedure has prevented costly mistakes. The third requirement is that the data evaluation needs to address all the information available, not only that portion which is favorable to a particular concept. As an example, some of the work regarding "Global Warming" and the "Ozone Hole" has been demonstrated to be deficient in this respect as well as others (10-12). The fourth requirement is that the model should follow in a logical manner from the data, and not from some preconceived notion. An exception is the class of theoretical models which are derived ab ovo from secure or hypothetical theoretical concepts, and are then tested by a comparison with experimental data. The fifth requirement is that scientific honesty should be the main force driving the attempt to represent the information available by a model. Thus all arguments for and against the model should be clearly addressed by its proponents (13). When these five preconditions are not fulfilled, the corresponding concepts and models have little or no meaning, and a test of any model predictions is a futile exercise. When these preconditions are fulfilled, however, the verification of a model or a hypothesis by experimental data is a meaningful sixth step. 

One of the most important aspects of model verification is the evaluation of the errors of both prediction and measurement (14-18). The corresponding uncertainty analysis, however, is still one of the weakest aspects of today's risk assessments (19,20). In the framework of the Scientific Method, the uncertainty analysis of every quantity should be viewed as a careful investigation of the limits of our knowledge (19-21). Only too often, it is treated more like a routine exercise ("report some errors and then you are done"), and is based on preconceived notions and rather careless assumptions (20). This cavalier attitude can sometimes lead to underestimates, but more often to gross overestimates of the errors (19,20). What is needed are the best possible estimates for both the risk and its error, and a careful interpretation of these results. 
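
Since a risk can often be written as a product of factors (a point taken up again below), a first-order propagation of the factor uncertainties gives a transparent estimate of the overall error. The short Python sketch below illustrates the idea, assuming independent factors whose relative errors combine in quadrature; the function name and the numerical values are hypothetical and serve only as an example.

```python
import math

def product_risk_with_error(factors):
    """First-order error propagation for a risk written as a product of
    independent factors, each given as (best_estimate, standard_error).
    Relative errors combine in quadrature; returns (risk, absolute_error)."""
    risk = 1.0
    rel_var = 0.0
    for value, error in factors:
        risk *= value
        rel_var += (error / value) ** 2
    return risk, risk * math.sqrt(rel_var)

# Hypothetical factors (made-up numbers): concentration, intake rate,
# and a unit-risk coefficient, each with its own standard error.
factors = [(2.0e-3, 0.5e-3), (1.2, 0.2), (5.0e-4, 2.5e-4)]
risk, d_risk = product_risk_with_error(factors)
print(f"risk = {risk:.2e} +/- {d_risk:.2e}")
```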

HYPOTHESIS TESTING

Once the preconditions are fulfilled, hypothesis testing becomes the final and decisive step of the Scientific Method. The test is a comparison of the predicted and the experimental value, and the question is whether the two values are, within their errors, compatible with each other. If their difference is considerably larger than their errors, the values are clearly incompatible. If the two values lie closer together, a statistical test may have to be used to establish the significance of the difference. In this paper, we shall restrict consideration to normal distributions of the quantities evaluated; for other distributions, appropriate tests will yield basically the same results. For two normally distributed quantities, the t-test can be used (22). For two quantities $x_i$ with standard errors $\Delta x_i$, the test condition for a significant difference is

$$ \frac{|x_1 - x_2|}{\sqrt{(\Delta x_1)^2 + (\Delta x_2)^2}} \;\geq\; t^*_{1-\alpha} \qquad\qquad (1) $$

Here $t^*_{1-\alpha}$ is the critical t-value for a confidence level $1-\alpha$, and if the condition is fulfilled, the difference is significant at the confidence level chosen. This well-known inequality is reproduced here only to show the dependence on the errors of the two quantities. The sum-of-squares structure of the denominator leads to an extraordinary sensitivity of the result to the larger of the two errors (14,20). The competence and care taken with the uncertainty evaluation of both quantities are thus of considerable importance for the integrity of the t-test for the significance of the difference. 
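
As a rough illustration of Eq. (1), the following Python sketch compares two normally distributed quantities with their standard errors; for simplicity it uses the large-sample (normal) approximation to the critical t-value and a one-sided confidence level. The numbers are hypothetical.

```python
import math
from statistics import NormalDist

def significant_difference(x1, dx1, x2, dx2, confidence=0.95):
    """Test of Eq. (1): compare |x1 - x2| with the combined standard errors.
    Uses the large-sample (normal) approximation to the critical t-value
    (one-sided).  Returns the test statistic, the critical value, and a flag."""
    t = abs(x1 - x2) / math.sqrt(dx1 ** 2 + dx2 ** 2)
    t_crit = NormalDist().inv_cdf(confidence)
    return t, t_crit, t >= t_crit

# Hypothetical predicted and measured values with their standard errors.
t, t_crit, significant = significant_difference(10.4, 0.3, 11.6, 0.8)
print(f"t = {t:.2f}, critical value = {t_crit:.2f}, significant: {significant}")
```

Note how the outcome is dominated by the larger of the two errors, exactly as the sum-of-squares denominator in Eq. (1) suggests.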

In many risk assessments and for a variety of reasons, the actual value of the risk cannot be confirmed directly by experiment. The fact that individual risks can be written as a product of several risk factors can then be used to advantage. For a product of risk factors, it is possible to verify the risk by verifying each factor separately and as accurately as possible. A small risk can often be verified in this manner. Nevertheless, some difficulties still arise in the verification of an extremely small risk, particularly if the product includes a single, extremely small risk factor that causes the risk to be small. In health risk assessments, the basic question is often whether a particular excess risk is different from zero. Excess risks are defined with respect to the risk $r^*$ at a given dose $d^*$, such as the background risk which occurs at the background dose. Any excess dose, positive or negative, will lead to an excess risk, positive or negative. By definition, the excess risk is zero for the dose $d^*$, a fact which can be used as the null hypothesis for the excess risk at nearby doses. The predicted value is therefore zero, and for normal distributions, the t-test in Eq. (1) becomes a simple Z-test, where Z is the standard normal variate. The test is then

$$ Z_x \;=\; \frac{|r_x|}{\Delta r_x} \;\geq\; Z^*_{1-\alpha} \qquad\qquad (2) $$

where $r_x$ and $\Delta r_x$ are the excess risk and its error, and $Z^*_{1-\alpha}$ is the critical Z-value for the confidence level $1-\alpha$. If the test quantity $Z_x$ is found to be smaller than the critical Z-value for the confidence level chosen, the quantity is still compatible with zero and the null hypothesis remains in force. From a purely scientific point of view, therefore, any money and effort spent to decrease such a non-existent risk is not only wasted, but the action can hardly be recommended in a time of limited budgets because it takes funds away from projects that demonstrably save lives.
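
A minimal sketch of the test in Eq. (2) follows, again assuming a normal distribution and a one-sided critical value; the excess risk and its error are hypothetical numbers, deliberately chosen to produce a borderline case.

```python
from statistics import NormalDist

def compatible_with_zero(r_x, dr_x, confidence=0.95):
    """Test of Eq. (2): Z_x = |r_x| / dr_x against the critical Z-value
    for the chosen confidence level (one-sided, normal distribution).
    Returns True if the excess risk is still compatible with zero."""
    z_x = abs(r_x) / dr_x
    return z_x < NormalDist().inv_cdf(confidence)

# Hypothetical excess risk and its standard error (borderline case).
r_x, dr_x = 2.0e-5, 1.5e-5
for conf in (0.90, 0.95):
    print(f"{conf:.0%}: compatible with zero -> {compatible_with_zero(r_x, dr_x, conf)}")
```

With these numbers the excess risk is significant at the 90% level but not at the 95% level, which is exactly the kind of borderline outcome discussed next.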

Here, it is important not to fall into the logical trap of interpreting the absence of a finding as a finding of absence. This is not what the t-test does; what it does is find either that the null hypothesis can be rejected or that it cannot be rejected, in each case at a certain level of confidence. So a negative finding means only that the null hypothesis cannot be rejected; the result does not imply anything about what may or may not be there. As a result of this test, we are simply recognizing the fact that we have no knowledge of the magnitude or sign of the excess risk, except that its absolute value is smaller than some upper bound. The only leeway in this evaluation is the choice of the confidence level $1-\alpha$. If the outcome of the test is clear cut, the rejection of the null hypothesis can be made with confidence. However, if the outcome is uncertain and depends on the minutiae of the test, as for instance in the statement: "The effect is not significant at the 95% confidence level, but it is significant at the 90% level," the outcome of the test is in doubt, and any action based on it is unlikely to inspire much confidence in a positive outcome of the risk avoidance measure contemplated. Consequently, if we assume that the test was done competently, we have no knowledge of how to respond to that putative risk, and any statement that goes beyond this cannot be based on scientific evidence. 

It is often hard for scientists, politicians, and regulators to admit that, although we know that the risk is small, we do not know just how small it is, but this is precisely what the Scientific Method requires. As a consequence of the honesty requirement, we have to state clearly and unequivocally what a model can and cannot do, and therefore what we know and what we do not know.

RISK MANAGEMENT

In risk management, decision makers cannot operate from a purely scientific point of view. For example, risk managers may have to re-evaluate the statement: "The effect is not significant at the 95% confidence level but significant at the 85% confidence level," with a different set of criteria. They also have to take into account societal considerations and value judgments, as well as financial constraints. Here, they are at a disadvantage compared to risk assessors, because in both risk management and decision theory there is no counterpart to the Scientific Method. So the question arises: What are "Good Practices" in risk management? The answer is essentially that their rationale should be the same as the basic rationale of "Good Science." For this discussion, the six requirements of Lett (9) can be reduced essentially to two: Ethics and Transparency. Although different people have different ethical imperatives and valuations, it is important to take them all into account when making a risk management decision. Hand in hand with the ethical requirement goes the demand for transparency. Just as the Scientific Method requires that other scientists be able to reproduce every step in a measurement and its subsequent evaluation, "Good Practices" in risk management should require that another decision maker be able to retrace every step in the decision process and see exactly how the decision was reached.

These requirements can often be incorporated in an explicit and transparent manner into a cost-risk-benefit analysis (23). If both costs and benefits include societal concerns and their valuations, and if the risks of the contemplated actions are also included, the resulting net benefit can be evaluated in a transparent manner. The demand for a positive net societal benefit can then be met if a careful uncertainty analysis is carried out. In this context, there are always many risk assessors claiming that, in a societal context, a numerical uncertainty analysis cannot be done. This claim is mostly due to a widespread ignorance of the corresponding process of uncertainty analysis, or to an unwillingness to have one's judgement "hemmed in" by numbers. Simply put, if you can express something in numbers, you can also give the uncertainty associated with it, because that uncertainty defines the limits of your knowledge. With a positive attitude on all sides and using an appropriate elicitation process (24), these obstacles can be overcome and the numbers given. The net benefit of a contemplated action can thus be classified into essentially three categories: a positive net benefit, a net benefit compatible with zero, and a negative net benefit. For positive and negative net benefits, the numerical values are much larger than their uncertainties, and the decision whether or not to go ahead is simple.
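
As a rough illustration of such a transparent evaluation, the Python sketch below combines elicited benefit and cost terms, each with an uncertainty, into a net benefit and sorts the result into the three categories just mentioned. The structure, names, and numbers are illustrative assumptions, not a prescription from the paper.

```python
import math

def classify_net_benefit(benefits, costs, z_crit=1.645):
    """Combine elicited terms, each given as (monetized_value, standard_error),
    into a net benefit with a propagated error, and classify the result as
    'positive', 'negative', or 'compatible with zero' at the chosen z_crit."""
    net = sum(v for v, _ in benefits) - sum(v for v, _ in costs)
    err = math.sqrt(sum(e ** 2 for _, e in benefits) + sum(e ** 2 for _, e in costs))
    if abs(net) < z_crit * err:
        label = "compatible with zero"
    else:
        label = "positive" if net > 0 else "negative"
    return net, err, label

# Hypothetical monetized terms (societal valuations expressed as numbers
# with elicited uncertainties, in arbitrary units).
benefits = [(4.0, 1.0), (1.5, 0.8)]   # e.g., health benefit, avoided damage
costs = [(3.0, 0.5), (1.0, 0.6)]      # e.g., remediation cost, action risk
print(classify_net_benefit(benefits, costs))
```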

As always, problems arise when the numerical value of the net benefit is comparable to its error. Then an appropriate statistical test can be used to decide whether or not the net benefit is compatible with zero. As in the usual statistical tests, one difficulty here lies in the selection of the confidence level for the test. The usual 95% or 90% confidence levels for the potential rejection of the null hypothesis are not likely to satisfy the needs of the decision makers, who need more flexibility. Depending on the cost and the risk of the remedial action involved, a confidence level in the range of 50 - 80% may be defensible. If the action can be done cheaply, quickly, and at a low risk, a confidence level of 50% may be acceptable. If the action is expensive or involves a considerable risk to the persons involved, a level of 90% or more may be required.
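
The sketch below makes this flexibility explicit: a confidence level for rejecting "zero net benefit" is chosen according to the cost and risk of the contemplated action, and the net benefit is then tested at that level. The thresholds and levels used here are illustrative assumptions only.

```python
from statistics import NormalDist

def decision_confidence(action_cost, action_risk, cheap_cost=1.0, low_risk=1e-6):
    """Choose a confidence level for the zero-net-benefit test: cheap,
    low-risk actions tolerate ~50%, expensive or risky ones require ~90%.
    The thresholds are purely illustrative."""
    if action_cost <= cheap_cost and action_risk <= low_risk:
        return 0.50
    if action_cost <= 10 * cheap_cost:
        return 0.80
    return 0.90

def nonzero_at_level(value, error, confidence):
    """One-sided test: does |value| exceed the critical multiple of its error?"""
    return abs(value) / error >= NormalDist().inv_cdf(confidence)

net, err = 1.5, 1.5   # hypothetical net benefit and its propagated error
conf = decision_confidence(action_cost=0.5, action_risk=1e-7)
print(f"confidence level {conf:.0%}: net benefit nonzero -> {nonzero_at_level(net, err, conf)}")
```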

Apart from these problems, the decision makers are essentially faced with four questions. The two more tangible ones are: What is the price to be put on the loss of a human life, and what price on damage to cultural and religious sites? The two less tangible ones concern the monetary value of public concerns or anxiety, and the price tag to be put on a Type II error in the analysis (the risk is indeed nonzero, but it is small and masked by the uncertainties). The first two have been evaluated many times, and ranges for such values exist (25,26). The third presents a real difficulty, because the monetary valuation has to be made in the face of scientific evidence of a risk which is compatible with zero. There is thus little doubt that whatever determines the decision is not based on science. The fourth question presents more options. If the cost and effort of the decision maker's intended action are not too high, the confidence level can be lowered, in some cases to values as low as 50%. This can be justified by the rationale that an even chance of being right or wrong does not cost too much. If the outcome at the confidence level chosen still indicates compatibility with zero, it will be difficult to justify any risk management action. If the outcome indicates a nonzero risk at that confidence level, then a low-cost, low-effort action can be defended until better data become available. In fact, part of the contemplated action can be to fund more work to produce more or better data. 

When the contemplated action is expensive and puts other persons' lives at risk, a higher level of confidence is required. If the risk is different from zero and the cost-risk-benefit evaluation yields a positive net benefit, then a risk abatement procedure can be justified. However, if the risk is still compatible with zero, the question may now be: How much money can we spend, and how many other lives can we put at risk, just to allay some unfounded fears? It is here that an ethical dilemma arises: a small fraction of the money spent on unnecessarily large or complex remediation projects, intended to avert a few highly theoretical fatalities, could instead be used to save many lives in the real world, with a proven track record. A typical example would be a nationwide effort to vaccinate all children in the U.S., something we still do not do consistently and which would save real lives. It is this aspect which makes some risk management actions poor choices and underlines the need for a set of "Good Practices in Risk Management". In some cases, we must substitute "unethical" for "poor choices", if the risk management action knowingly diverts funds from a project known to save lives. 

Another reason for establishing such a set of "Good Practices" is the growing list of failures of expensive remediations to achieve their goals. This kind of management failure is not limited to risk management but is a general trend that besets many efforts to change a complex system. The most difficult aspects seem to be the setting of well-defined goals, the selection of a perspective broad enough to include side effects, particularly in the long term, and the frequently insufficient consideration of the interrelationships within the system. The German cognitive psychologist Dietrich Dörner has shown in a brilliant series of computer simulations just how easy it is, with the best of intentions, to fail to achieve one's goal and potentially create a situation which is worse than before (27). One of the keys to a successful management of complex systems was found to be the continuous questioning and attempted verification of the hypotheses and models used for the actions and interactions in the system. Clearly, this is a procedure analogous to the Scientific Method. Risk management would do well to include the lessons learned in these experiments in the set of "Good Practices." 

ETHICS IN RISK MANAGEMENT 

Risk managers are working for the good of the public and the environment when they attempt to repair damage incurred in some event or process. In this respect, their aims are quite similar to those of the medical profession, and they therefore have similar ethical obligations. In the fifth century B.C., the Greek physician Hippocrates, the "Father of Medicine", formulated the ethical imperatives of his profession, known today as the Hippocratic Oath. If the oath is taken at all these days, it is no longer used in the original, somewhat antiquated version, yet it is still the foundation of current medical ethics. One of the most important principles associated with the oath is "primum non nocere" or "First of all, do no harm", but in view of the invasive techniques of modern medicine, this dictum had to be modified to require that "no net harm" be done. This is one of the important similarities to the actions inherent in environmental remediation. Similar to a medical operation, remediation activities do cause initial harm, but they are planned to lead to a net benefit in the end. So a risk manager's equivalent of the Hippocratic Oath could be:

"I will do what, according to my ability and judgement, I consider best for the benefit of human health and the environment, and I will do no net harm to either."

Whatever the exact form chosen for the equivalent oath, there is one important consequence: Any risk management action that causes more harm than good is not ethical. 

Fulfilling this ethical imperative may not be as easy as it sounds. For one thing, it will generally not be an easy task to determine the net benefit, because it is composed of different quantities, measured in different units and multiplied by different weights to achieve the same units (26,28). These weights are mostly due to societal considerations and carry considerable uncertainties due to the spread in the individual valuations of the risk managers. Another problem of remediation is the systems management aspect mentioned before. Quite clearly, the risk management effort has to have a realistic, clear-cut goal. The ethical requirement of doing more good than harm leads to the condition of successfully attaining the goal set. As Dörner has shown, managing such a system is a complex operation, far more difficult than is usually assumed. He has shown that one of the most difficult problems is an aspect for which the German theoretician of war Carl von Clausewitz had to coin the then-new German term "Methodismus" (29). Translation into English yields, according to Webster's Dictionary, the secondary meaning of the word "methodism": an excessive use of, or preoccupation with, methods, systems and the like. In this sense, the term methodism will be used here to denote a fixation on a few methods and the attempt to fit real-world situations to one of these methods by focusing on whatever similarities there are. The results of Dörner's experiments show that this kind of thinking is likely to lead to failure. 

Failure is even more likely if a "conservative" approach to risk assessment is used. To have a good chance of controlling a system, the best possible risk function is required as part of the control function. This requirement excludes all risk functions with a known bias and demands a careful analysis of all others in order to eliminate, as far as possible, the influence of systematic errors. This is important because a system with a biased control function has a high chance of being difficult, if not impossible, to control, particularly if there is also a sizeable random error. A practical simile would be a car with wheels out of alignment (bias) and a loose steering assembly (random error). Taking an action when only insufficient control is possible is like ordering a driver to take that car for a fast drive down a twisting mountain road. Nothing but best estimates for the risk and its error should be used. 

Even with the best possible risk functions, control may still be difficult to achieve if the system used for the analysis is too limited in scope. What is needed is a system of minimum size that is still able to describe all the necessary variables. In selecting the treatment approach, it is then best to focus more on the differences between the ideal situation of the method and the situation at hand than on the similarities (27), and then to adapt one of the methods to the situation (29). However, this is not the way that environmental remediation is done at present; the usual approach to remediation, as required by regulations, is a classical case of methodism. It therefore carries within itself the seeds of failure and a high risk of doing more harm than good. 

DISCUSSION 

Like other human activities, both risk assessment and risk management need a set of rules and standards which define not only the state of the art but also a set of minimum requirements for the execution of a particular job. For risk assessment, this set of rules is well tried, has been refined continuously, and is embodied in the Scientific Method; for risk management, such a set of rules has yet to be discussed and defined. As pointed out above, its most basic requirements are honesty, transparency, and a kind of Hippocratic ethics; all other rules then follow logically.

There is an additional consideration, however, that derives from the fact that risk assessment contributes one, and only one, of many aspects to the process of risk management and the related decision making. Since risk assessment and risk management follow two quite different sets of rules, the two processes should be kept strictly apart. This separation is almost mandatory when the requirement of transparency is taken seriously. Clearly, the information requirements of the risk management process have to be fulfilled by the risk assessment, but these demands are in the form of numerical data, for example, in the form of curves of statistical significance of health effects as a function of the confidence level. The actual assessment process has to fulfill the demands of the Scientific Method, just as the decision making has to fulfill its rules of "Good Practices." 

Unfortunately, some people are, either on purpose or unintentionally, blurring the boundary between risk assessment and risk management by injecting imprecise, valuation-charged notions into the scientific evaluation process, and then presenting the result as if it were based on science. In fact, such corrupted results are a convenient way to impose certain preconceived notions on the decision process. It is here that the honesty requirement of the "Good Practices" must lead to clear-cut statements about the parts of the decision which are based on science and those which are not. 

One of the results of this paper is the realization that, when there is insufficient confidence in the existence of a risk, there should also be little confidence in the success of the corresponding risk-avoidance practice. It is important to understand that this statement directly contradicts the "there is no safe level" paradigm. As long as the significance level $\alpha$ for the null hypothesis of zero risk is more than 50%, the probability $1-\alpha$ of a useful application of the corresponding remediation is less than 50%; not exactly good odds for the risk management decision. If there is no scientific support for an action, any option for an "improvement" of the situation should be clearly labeled as being based on considerations which are not scientific. Such an action has a large chance of leading to total failure because the control function is poorly known, if it is known at all. A system with a highly uncertain control function has a high chance of being difficult, if not impossible, to control. 

Practical experience with managing complex systems, as well as recent experimental investigations using simplified computer-simulated systems, demonstrates that the usual management approaches tend to lead to failure. Usually, the environmental systems to be managed are substantially more complex than assumed. As an example, a plutonium contamination in a sediment of a riparian environment is often treated as a simple problem of removing a contamination. This kind of tunnel vision often leads to far more serious long-term damage to the environmental system than the initial total risk to human health and the environment. More sophisticated, holistic approaches will have to be used to handle such risk management problems competently and efficiently. 

REFERENCES 

  1. R. DESCARTES, "Discourse on the Method of Rightly Conducting the Reason and Seeking the Truth in the Sciences." (1637). Reprinted in: R. DESCARTES, Discourse on the Method and the Meditations. Penguin Books, New York (1968).
  2. P.W. HUBER, "Galileo's Revenge: Junk Science in the Courtroom." Harper Collins, New York (1991).
  3. G. HOLTON, "Science and Anti-Science." Harvard University Press, Cambridge, MA (1993).
  4. M.W. FRIEDLANDER, "At the Fringes of Science." Westview Press, San Francisco, CA (1995).
  5. K. POPPER, "The Logic of Scientific Discovery." Harper & Row, New York (1968).
  6. T.S. KUHN, "The Structure of Scientific Revolutions." University of Chicago Press, Chicago (1970).
  7. M. BUNGE, "Causality and Modern Science." Dover, New York (1979).
  8. F.A. SEILER and J.L. ALVAREZ, "The Scientific Method in Risk Assessment," Technol. J. Franklin Inst. 331A, 53-58 (1994).
  9. J. LETT, "A Field Guide to Critical Thinking," The Skeptical Inquirer, Winter (1990).
  10. H.W. ELLSAESSER, "A Rational View On Stratospheric Ozone." Technol. J. Franklin Inst., 332A, 67-76 (1995).
  11. H.W. ELLSAESSER, "The Global Warming Scare," Technol. J. Franklin Inst. 332A, 45-52 (1995).
  12. S.F. SINGER, "The Ozone-CFC Debacle: Hasty Action, Shaky Science." Technol. J. Franklin Inst. 332A, 61-66 (1995).
  13. R.P. FEYNMAN, "Surely, You're Joking, Mr. Feynman." Bantam Books, New York (1986).
  14. F.A. SEILER, "Error Propagation for Large Errors." Risk Anal. 7, 509-518 (1987).
  15. A.M. FINKEL, "Confronting Uncertainty in Risk Management, a Guide for Decision Makers." Washington, D.C.: Center for Risk Management, Resources for the Future (1990)
  16. S. BRANDT, "Statistical and Computational Methods in Data Analysis." North Holland Publishing Co., Amsterdam, Holland (1976).
  17. S. RABINOVICH, "Measurement Errors: Theory and Practice." AIP Press, Woodbury, NY (1995).
  18. J.R. TAYLOR. An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. 2nd ed. University Science Books, Sausalito, CA (1997).
  19. F.A. SEILER and J.L. ALVAREZ, "Toward A New Risk Assessment Paradigm: Variabilities, Uncertainty, and Errors," Technol. J. Franklin Inst., 332A, 221-235 (1995).
  20. F.A. SEILER and J.L. ALVAREZ, "On the Selection of Distributions for Stochastic Variables," Risk Anal., 16, 5-18 (1996).
  21. F.A. SEILER and J.L. ALVAREZ, "Definition of a Minimum Significant Risk," Technol. J. Franklin Inst., 331A, 83-95 (1994).
  22. R.E. WALPOLE and R.H. MYERS, "Probability and Statistics for Engineers and Scientists." Macmillan, New York (1985).
  23. U.S. DEPARTMENT OF ENERGY. "Cost/Risk/Benefit Analysis of Alternative Cleanup Requirements for Plutonium-Contaminated Soils On and Near the Nevada Test Site." (Draft). U.S. Department of Energy Report DOE/NV-UC-700 (Feb. 1995).
  24. H. OTWAY and D. von WINTERFELDT, "Expert Judgment in Risk Analysis and Management: Process, Context, and Pitfalls." Risk Anal. 12, 83-93 (1992).
  25. A. FISHER, L.G. CHESTNUT, and D.M. VIOLETTE. "The Value of Reducing Risks of Death: A Note on New Evidence." J. Policy Analysis and Management, 8, 88-100 (1989).
  26. M.W. MERKHOFER and R.L. KEENEY. "A Multiattribute Utility Analysis of Alternative Sites for the Disposal of Nuclear Waste." Risk Anal. 7, 173-194 (1987).
  27. D. DÖRNER, "The Logic of Failure." Addison-Wesley, New York, NY (1997).
  28. J.J. MODER and S.E. ELMAGHRABY, "Handbook of Operations Research," Vol. 1, Sect. III-3, Value Theory, Van Nostrand Reinhold, New York (1978).
  29. C. von CLAUSEWITZ, "On War", Book 2 "On the Theory of War", Chapter 4, "Method and Routine", Princeton University Press, Princeton, NJ (1984).

 

 
