VALUES IN SCIENCE: An Introduction*
Douglas Allchin
ABSTRACT. Values intersect with science in three primary ways. First, there are values, particularly epistemic values, which guide scientific research itself. Second, the scientific enterprise is always embedded in some particular culture, and values enter science through its individual practitioners, whether consciously or not. Finally, values emerge from science, both as product and as process, and may be redistributed more broadly in the culture or society. In addition, scientific discoveries may pose new social challenges about values, though the values themselves may be conventional. Several questions help guide disciplined inquiry into ethics and values.
1. Introduction
A fundamental feature of science, as conceived by most scientists, is that it deals with facts, not values. Further, science is objective, while values are not. These benchmarks can offer great comfort to scientists, who often see themselves as working in the privileged domain of certain and permanent knowledge. Such views of science are also closely allied in the public sphere with the authority of scientists and the powerful imprimatur of evidence as "scientific". Recently, however, sociologists of science, among others, have challenged the notion of science as value-free and thereby raised questions--especially important for emerging scientists--about the authority of science and its methods.
The popular conceptions--both that science is value-free and that objectivity is best exemplified by scientific fact--are overstated and misleading. This does not oblige us, however, to abandon science or objectivity, or to embrace an uneasy relativism. First, science does express a wealth of epistemic values and inevitably incorporates cultural values in practice. But this need not be a threat: some values in science govern how we regulate the potentially biasing effect of other values in producing reliable knowledge. Indeed, a diversity of values promotes more robust knowledge where they intersect. Second, values can be equally objective when they require communal justification and must thereby be based on generally accepted principles. In what follows, I survey broadly the relation of science and values, sample important recent findings in the history, philosophy and sociology of science, and suggest generally how to address these issues (this essay is adapted from Allchin, 1998).
2. Values in Science and Research Ethics
The common characterization of science as value-free or value-neutral can be misleading. Scientists strongly disvalue fraud, error and "pseudoscience", for example. At the same time, scientists typically value reliability, testability, accuracy, precision, generality, simplicity of concepts and heuristic power. Scientists also value novelty, exemplified in the professional credit given for significant new discoveries (prestige among peers, eponymous laws, Nobel Prizes, etc.). The pursuit of science as an activity is itself an implicit endorsement of the value of developing knowledge of the material world. While few would disagree with these aims, they can become important in the context of costs and alternative values. Space science, the human genome initiative, dissection of subatomic matter through large particle accelerators or even better understanding of AIDS, for instance, do not come free. Especially where science is publicly funded, the values of scientific knowledge may well be weighed against the values of other social projects.
From the ultimate values of science, more proximate or mediating values may follow. For example, sociologist Robert Merton (1973) articulated several norms or "institutional imperatives" that contribute to "the growth of certified public knowledge" (see also Ziman 1967). To the degree that public knowledge should be objective, he claimed, scientists should value "preestablished impersonal criteria" of assessment. Race, nationality, religion, class, or other personal or social attributes of the researcher should not matter to the validity of conclusions--an ethos Merton labeled 'universalism'. Merton's other institutional norms or values include organized skepticism, disinterestedness (beliefs not biased by authority--achieved through accountability to expert peers), and communism (open communication and common ownership of knowledge). As Merton himself noted, these norms do not always prevail. Still, they specify foundational conditions or proximate values that contribute to the development and certification of knowledge in a community (more below). Specific social structures (such as certain reward systems or publication protocols) that support these norms thus form the basis for yet another level of mediating values.
Other proximate or mediating values that promote the ultimate goal of reliable knowledge involve methods of evaluating knowledge claims. These epistemic values include controlled observation, interventive experiments, confirmation of predictions, repeatability and, frequently, statistical analysis. These values are partly contingent. That is, they are derived historically from our experience in research. We currently tend to discount (disvalue) the results of any drug trial that does not use a double-blind experimental design. But such was not always the case. The procedure resulted from understanding retrospectively the biases potentially introduced both by the patient (via the placebo effect) and by the doctor (via observer effects). Each is now a known factor that has to be controlled. The elements of process (both methods of evaluation and institutional norms), of course, are central to teaching science as a process.
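As a minimal sketch of why blinding matters (a hypothetical simulation with invented effect sizes, not data from any actual trial), consider a drug that in fact does nothing: an unblinded rater's expectations, added to the patients' placebo response, manufacture an apparent benefit that double-blinding cancels.

```python
import random

random.seed(42)

def trial(blinded, n=1000, placebo=0.2, observer_bias=0.3):
    """Simulate a trial of a drug with no real effect (invented numbers).

    - placebo: patients who believe they are treated improve slightly
    - observer_bias: an unblinded rater scores treated patients higher
    """
    treated, control = [], []
    for _ in range(n):
        # treated arm: no true drug effect, but a placebo response applies
        score_t = random.gauss(0, 1) + placebo
        if not blinded:
            score_t += observer_bias  # rater knows the assignment
        treated.append(score_t)
        # control arm: with blinding, controls receive a placebo too,
        # so the placebo response affects both arms equally and cancels
        score_c = random.gauss(0, 1) + (placebo if blinded else 0.0)
        control.append(score_c)
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

print(f"apparent effect, unblinded:    {trial(blinded=False):+.2f}")
print(f"apparent effect, double-blind: {trial(blinded=True):+.2f}")
# Typically prints roughly +0.50 unblinded and near 0.00 double-blind,
# for a drug that in fact does nothing.
```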
While the pursuit of scientific knowledge implies a certain set of characteristically "scientific" values, the relevance of other values in the practice of science is not thereby eclipsed. Honesty is as important in science as elsewhere, and researchers are expected to report authentic results and not withhold relevant information. Ethics also demands proper treatment of animals and humans, whether or not they are subjects of research (Orlans 1993). Science is not exempt from ethics or other social values. Knowledge obtained by Nazi researchers on hypothermia and the physiological effects of phosgene, for example, may pass tests of reliability, but the suffering inflicted on the human subjects was unwarranted (Caplan 1992; Proctor 1991). Hence, we may still debate whether it is appropriate to use such knowledge (Sheldon et al. 1989). Similar questions might be asked about U.S. military studies on the effects of radiation on humans. Again, social values or research ethics are not always followed in science (see, e.g., Broad and Wade 1982), but they remain important values. The disparity between the ideal and the actual merely poses challenges for creating a way to achieve these valued ends--say, through a system of checks and balances. Protocols for reviewing research proposals on human subjects, for monitoring the use and care of laboratory animals, or for investigating and punishing fraud each represent efforts to protect wider social values in science.
The topics or ends of research, as much as the methods or practice of science, are also the province of ethical concern and social values. Weapons research, even if conducted according to Merton's norms and its results evaluated using scientific standards, is not ethically idle or value-neutral. Nor is research into better agricultural methods aimed at alleviating hunger, or into low-cost forms of harnessing solar or wind energy in poor rural areas. In each of these cases, the researcher is an ethical agent responsible for the consequences of his or her actions, good or bad. Again, appeal to science is no escape from ethics. Where the consequences are clear, the frequent distinction in science between "pure" and "applied" research is not ethically significant. Many conservation biologists, for example, are well aware of the values inherent in their "basic" research and sometimes shape and deploy the content of their science in a politically self-conscious way (Takacs 1996). Where debates about research arise--say, about transplanting fetal tissue or gene therapy--there are real conflicts about social values; the question of the ultimate value or ethics of research in these areas can neither be resolved by science alone nor disregarded by scientists in these fields as irrelevant.
3. Values Entering Science
Science proceeds through the agency of individuals and--not unexpectedly, perhaps--individual scientists express the values of their cultures and particular lives when they engage in scientific activity. For example, in cultures where women or minorities have been largely excluded from professional activity, they have generally been excluded from science as well. Where they have participated in science, they have often been omitted from later histories (e.g., Rossiter 1982; Kass-Simon and Farnes 1990; Manning, forthcoming). The line demarcating science and society can be fuzzy in practice.
More deeply, however, the conclusions of science at many times and in many places have been strongly biased, reflecting the values of its practitioners (in striking contrast to Merton's norm of universalism). For example, late 19th-century notions of the evolution of humans developed by Europeans claimed that the skulls and posture of European races were more developed than those of 'Negroes' (Gould 1981). In a progressive view of evolution (adopted even by Darwin himself), persons of African descent were deemed inferior intermediates on an evolutionary scale--as "proven" by science. When theories about evolution changed to suggest that "less-developed" or neotenous (more childlike) skulls were "more progressive", conclusions from the same data reversed, preserving "scientifically" the superior status of the scientists' race (Gould 1977). Facts were shaped to fit preexisting judgments and values about race. Likewise, female skulls, skeletal anatomy and physiology were taken by male scientists as evidence of women's "natural" role in society. The "scientific" conclusions, which reflected the values of the men, were taken to legitimate social relations that continued to privilege males (Fee 1979; Schiebinger 1990; Smith-Rosenberg and Rosenberg 1973). Perhaps such values should not enter science, but they do.
Values about race and sex, however, have not been the only values to shape science. The phrenology debates in Edinburgh in the early 19th century followed class differences instead (Shapin 1979). Today, notions about biological determinism, especially about the role of genes in governing specific behaviors, follow similar patterns, as some persons appeal to science to try to justify economic disparities as products of nature rather than as the exercise of power (Lewontin, Rose and Kamin 1984). By contrast, disagreement between Boyle and Hobbes over the vacuum pump in the late 17th century was guided in part by values about governance and the role of the sovereign in the state (Shapin and Schaffer 1985). Even natural history museum dioramas of animal groupings designed by researchers have reflected cultural values about nuclear families and patriarchy (Haraway 1989, pp. 26-58). While we may now characterize all these cases as examples of "bad science", they exemplify how values can and do enter science and shape its conclusions. Moreover, one must always bear in mind that in their own historical context, these examples were considered "good" science.
While the role of values in these cases can seem obvious from our perspective, it may not be appropriate to interpret the scientists as exercising their values deliberately or consciously. To interpret the entry of values into science in cases such as these, one must focus on individual cognitive processes. That is, one must examine the thought patterns of particular agents rather than either abstractly reconstructed reasoning or the influences of a diffusely defined "culture". Especially valuable is the notion of cognitive resources: all the concepts, interpretive frameworks, motivations and values that an individual brings from his or her personal experience to scientific activities (Giere 1988, pp. 213-21, 239-41). Cognitive resources affect how an individual notices certain things, finds some things especially relevant, asks questions or poses problems, frames hypotheses, designs experiments, interprets results, accepts solutions as adequate or not, etc. As a set of resources or tools, a person's cognitive orientation will make certain observations and interpretations possible while limiting the opportunity for others (see also Harding 1991). Succinctly, a person's scientific contributions will be shaped by the domain of his or her resources or values.
An individual's cognitive resources will be drawn from his or her culture, limiting what any one person can contribute to science. Further, because each person's biography and intellectual training are unique, cognitive resources will differ from individual to individual, even within the same culture. Hence, one may well expect disagreement or variation in interpretation in any scientific community. Far from being an obstacle to developing consensus, however, the variation within a community can be a valuable resource. That is, only conclusions that are robust across varying interpretations will tend to be widely perpetuated (Wimsatt 1981).
Indeed, variations in cognitive resources can be critical to isolating and correcting error. For example, from the 1860s through the 1890s anthropologists developed numerous ways to measure skulls and calculate ratios to describe their shapes. In what Fee (1979) described as "a Baconian orgy of quantification", they developed over 600 instruments and made over 5,000 kinds of measurements. Despite three decades of shifting theories, falsified hypotheses and unresolved paradoxes, the conclusions of the craniologists--all men--remained the same: women were less intelligent. At the turn of the century, however, two women began work in the field. They showed, among other things, that specific women had larger cranial capacities than even some scientists in the field, and that the margin of error in measurement far exceeded the proposed sex differences--and they strengthened their work with statistical rigor. Here, the women's perspective may have been no less biased or guided by values, but their complementary cognitive resources, informed by the interests of women, were critical to exposing the deficits in the men's studies. This example illustrates that if science is "self-correcting", it does not do so automatically. Identifying and remedying error takes work--and often requires applying contrasting cognitive resources or values. The possibly paradoxical conclusion is that one should not eliminate personal values from science--if indeed this were possible. Instead, the moral is: "the more values, the better". Contrasting values can work like a system of epistemic checks and balances.
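The women's statistical point can be made concrete with a minimal sketch (invented numbers, not the historical measurements): when repeated measurements of a single skull vary by more than the claimed difference between groups, that difference cannot be distinguished from measurement noise.

```python
import random
import statistics

random.seed(1)

# Invented numbers for illustration: suppose the claimed male-female
# difference in some cranial index is 2 units, while re-measuring the
# very same skull varies with a spread of about 5 units.
claimed_difference = 2.0
measurement_sd = 5.0

# Ten repeat measurements of one skull:
repeats = [100 + random.gauss(0, measurement_sd) for _ in range(10)]

print(f"spread of repeat measurements: {statistics.stdev(repeats):.1f}")
print(f"claimed group difference:      {claimed_difference:.1f}")
# When remeasuring a single object varies more than the proposed
# group difference, the 'difference' is indistinguishable from noise.
```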
The many cases of bias and error in science have led to more explicit notions of the social component of objectivity. Helen Longino (1990), for example, underscores the need for criticism from alternative perspectives and, equally, for responsibly addressing criticism. She thus postulates a specific structure for achieving Merton's 'organized skepticism' (§2). Sandra Harding (1991) echoes these concerns in emphasizing the need for cognitively diverse scientific communities. We need to deepen our standards, she claims, from "weak objectivity", based merely on notions of evidence, to "strong objectivity", also based on interpreting the evidence robustly. Both thinkers also point to the role of diversity of individuals in establishing relevant questions and in framing problems, thus shaping the direction of research more objectively. In this revised view, science is both objective and thoroughly "social" (in the sense of drawing on a community of interacting individuals). Fortunately for science educators, the classroom is an ideal location for modeling this kind of collective activity.
The role of alternative values in exposing error and deepening interpretive objectivity highlights the more positive role of individual values in science. Even religion, sometimes cast as the antipode of science, can be a cognitive resource that contributes positively to the growth of knowledge. For example, James Hutton's theological views about the habitability of the earth prompted his reflections on soil for farming and on food and energy, and led to his observations and conclusions about geological uplift, "deep time", the formation of coal, and what we would call energy flow in an ecosystem (Gould 1987; Allchin 1994). Likewise, assumptions about a Noachian flood shaped William Buckland's landmark work on fossil assemblages in caves, recognized by the Royal Society's prestigious Copley Medal. Other diluvialists drew attention to the anomalous locations of huge boulders, remote from the bedrock of which they were composed (though they supposed the rocks were moved by turbulent flood waters, we now interpret them as glacial erratics). These discoveries all had origins that cannot be separated from the religious concepts and motivations that made the observations possible. Values entering science from religion--or from virtually any source--can promote good science. As suggested above, however, they sometimes also need to be coupled with mechanisms for balancing them with complementary values.
4. Values Exported from Science
Just as values of a society can enter science, so, too, can values from the scientific enterprise percolate through society. The most dramatic redistribution of values may be the values of science itself. To the extent that science (and technology) are perceived as successful or powerful, things associated with them can gain authority or value. Commercial advertising, for example, can draw on the images of science to promote certain products as expressions of "scientific" research or as superior to competing products. The "scientific" nature of the comparison can even overshadow the values on which the comparison itself rests. The conclusions of science are themselves accorded an image of value. One can see the ethical implications where conclusions that themselves draw on social values (such as those regarding race, sex, class, culture, etc.) are given the imprimatur of scientific authority, thereby reinforcing preexisting distributions of power without justification.
The most dramatic social influence of scientific values, however, may be the image of science itself as a model for all problem-solving. Science (or technology) is sometimes viewed, first, as the panacea for all social problems and, second, as the exclusive or primary means for objectivity, even where other values are involved. Not all problems are amenable to scientific approaches, however, and a narrowly scientific or "technocratic" view can forestall solving problems in the appropriate realm. Garrett Hardin (1968) noted, for example, that "the population problem has no technical solution". That is, population pressure is fundamentally an ethical challenge about the freedom to bear children in the context of limited global resources. Neither better agricultural efficiency nor reproductive control technology can avert a "tragedy of the commons". Instead, we must reach some consensus about the ethics of an individual's use of common resources and how we may enforce such collective judgments about reproductive rights or privileges.
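Hardin's argument rests on a simple payoff structure, which a toy calculation (with invented numbers) makes explicit: each herder captures the whole benefit of adding an animal to the commons while the cost of overgrazing is shared by all, so adding remains individually rational even when it is collectively ruinous.

```python
# A toy payoff calculation for Hardin's commons (invented numbers):
# the gain from one more animal accrues entirely to its owner, while
# the grazing cost is spread across every herder on the commons.
herders = 10
benefit_per_animal = 1.0   # gain accrues entirely to the owner
cost_per_animal = 1.2      # overgrazing cost, shared by all herders

individual_net = benefit_per_animal - cost_per_animal / herders
collective_net = benefit_per_animal - cost_per_animal

print(f"net gain to the individual herder: {individual_net:+.2f}")  # +0.88
print(f"net gain to the community:         {collective_net:+.2f}")  # -0.20
# Each herder profits from adding animals even as the commons degrades:
# no technical fix changes this payoff structure; only shared norms do.
```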
We often need to integrate scientific values with other ethical and social values. Science can help identify unforeseen consequences or causal relationships where ethical values or principles are relevant. In addition, individuals need reliable knowledge for making informed decisions. One archetypal hybrid project is risk assessment. Scientists can articulate where, how, and to what degree a risk exists, for example. But other values are required to assess whether the risk is "acceptable" or not. Communicating the nature of the risk to non-experts who participate in making decisions can thus become a significant element of science. Where one expects scientists or panels of technical experts to solve the problem of the acceptability of risk, science is accorded value beyond its proper scope--and others abdicate their responsibility in addressing the sometimes more difficult questions of value. Likewise, those who do not address the facts of the matter fail in their responsibility to make an informed decision. Facts and social values function in concert.
As noted above, the values of science may also be applied inappropriately as a model for decision-making. While quantification is often an asset for science, for example, it does not address all the ethically relevant dimensions of technological risk. Cases of risk assessment, in particular, require addressing questions about the distribution of risk among different persons and about the autonomy of accepting risk. Efforts to reduce the problem to a single numerical scale (and then to minimize risk) can obscure the central issues. What matters socially and ethically is the meaning more than the magnitude of the risk (e.g., Sagoff 1992). A "scientific" approach to solving global warming, for example, might easily focus on cost-effective means of reducing greenhouse gas emissions, diverting attention away from the historical sources of the problem and the ethical need for accountability and remedial justice. Cases of uncertainty pose special problems for applying scientific values. Scientists generally refrain from advocating claims that cannot yet be substantiated. Ethically, however, one often wishes to hedge against the possibility of a worst-case scenario (major floods, nuclear meltdowns, ozone depletion, etc.)--even if the actual expected consequences are not yet proven. In cases of uncertainty, scientific values about certified knowledge ("assume nothing unproven") and ethical values about communal action ("assume the worst") can diverge (see Shrader-Frechette 1991). One task in teaching is clearly to articulate the limited domain of scientific values and how they integrate with other values.
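A toy comparison (invented numbers) shows how a single numeric scale flattens ethically distinct risks: two hazards can carry identical expected losses while differing entirely in who bears the risk and whether they consented to it.

```python
# Invented numbers: two hazards with the same expected loss but very
# different ethical profiles once distribution and consent are weighed.
hazards = {
    # name: (probability per year, people harmed if it occurs, consented?)
    "many small, voluntary risks":  (0.01,   100,    True),
    "one rare, imposed catastrophe": (0.0001, 10_000, False),
}

for name, (p, harmed, consented) in hazards.items():
    expected_loss = p * harmed
    print(f"{name}: expected loss = {expected_loss:.1f}, "
          f"consented = {consented}")
# Both lines report an expected loss of 1.0 -- identical on the single
# numeric scale -- yet who bears the risk, and whether they chose it,
# differ entirely, and that is what matters ethically.
```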
Controversies over the fluoridation of public water supplies exemplify well some of the potential problems and confusions about the role and value of science in social policy (Martin 1991). Both sides of the debate appeal to science as an authority. In each case, however, the science is presented as simple and unproblematic, though complexities and uncertainties exist. In addition, science is treated as the final arbiter, though research indicates that there are both benefits (associated with reducing tooth decay) and risks (associated with fluorosis and cancer). In this case, the benefits and risks are not commensurable, and no scientific assessment of the ultimate value of fluoridation is possible without the expression of further values. In this case, as in others, the scientific ideal of empirical certainty can be confused with what science can actually provide. Even technical certainty does not exclude other, non-scientific values from contributing to resolving disputes about public policy.
Finally, scientific knowledge and new technologies can introduce new ethical or social problems, based on preexisting values. Many medical technologies allow us to express our values in preserving life and health. At the same time, however, they can bring other values into consideration. With the advent of hemodialysis and organ transplants, for example, their limited availability combined with the existing value of fairness to generate a new problem: ensuring fair access to treatment. Subsequently, ethicists developed new solutions for allocating scarce medical resources (e.g., Rescher 1969). Similarly, ecological knowledge--say, about pesticides, heavy metals, toxic chemicals and other pollutants--has engaged conventional values about prudence and respect for life, reshaping our values about waste, consumption, modes of production and our relationship to the environment (see, e.g., Des Jardins 1993; Newton and Dillingham 1997). Science does not create these new values. Rather, it introduces novel situations which require us to apply old values in significantly new ways. An awareness that scientific research is typically coupled with new concerns about ethics and values was reflected, for example, in decisions to couple the human genome initiative with funding of research on the humanistic implications of the project.
Some technologies affect values more indirectly. Medical technologies that help sustain life have confounded our traditional definitions of 'life' and 'death' and the values associated with them. New reproductive technologies, likewise, pose challenges for existing concepts of 'parent' and 'family' (Kass 1985); the potential of human cloning forces us to assess more deeply the concept of genetic identity; and the abilities of computers make us rethink concepts of 'intelligence'. All these innovations challenge us to rethink what it means to be human, just as Copernicus, Darwin and Freud did in earlier centuries. Paradoxically, perhaps, in solving some problems, science and technology can introduce new problems about values that they cannot solve. Yet these consequences are a part of a complete consideration of science and its context in society.
5. A Retrospective and Prospective View
What does all this information mean to a scientist hoping to increase our reservoir of reliable and useful knowledge? How does one proceed, both in professional and public spheres? How does one analyze values in specific cases? As in science, the effective investigator has a toolbox of methods--here, a repertoire of questions. Relevant questions include:
Who are the stakeholders? What are their interests? Are they involved in the decision-making?
What are the foreseeable consequences (possibly remote or hidden)? What are the alternatives? Is the worst-case scenario acceptable?
What intentions or motives guide the choice?
What are the benefits? What are the costs?
Who benefits? Who risks or pays the costs? (Who is upstream, choosing benefits? Who is downstream, experiencing the consequences?) Would you be willing to accept any consequence of this action falling on yourself?
What would be the outcome if everyone acted this way?
Scientists may need to remember that public discussion of values and ethics requires justification, just as much as in any scientific argument. Sound ethical conclusions are based on general principles--not on one person's "feelings", lifestyle or ideological values. Moral claims must be publicly endorsable. Ethical principles, in turn, are based on careful reasoning, specific evidence and commonly shared emotions. The willingness to experience the consequences of one's actions and the ability to universalize a decision (noted above) are two common ways to "test" whether principles are ethical. A good touchstone for justifying an ethical value (as it is in science) is a good critic: reasons must be strong enough, and draw on principles general enough, to convince someone with a skeptical or opposing perspective. Ethics, no less than science, aims at objectivity.
REFERENCES
Allchin, D.: 1994, "James Hutton and Phlogiston", Annals of Science.
Allchin, D.: 1998, "Values in Science and in Science Education". In B.J. Fraser and K.G. Tobin (eds.), International Handbook of Science Education, Kluwer Academic Publishers, Dordrecht, 2:1083-1092.
Broad, W. & Wade, N.: 1982, Betrayers of the Truth, Simon and Schuster, New York.
Caplan, A. (ed.): 1992, When Medicine Went Mad, Humana Press, Totowa, New Jersey.
Des Jardins, J.R.: 1993, Environmental Ethics, Wadsworth, Belmont, California.
Fee, E.: 1979, "Nineteenth-Century Craniology: The Study of the Female Skull", Bulletin of the History of Medicine 53, 415-33.
Giere, R.: 1988, Explaining Science, University of Chicago Press, Chicago.
Gould, S.J.: 1977, "Racism and Recapitulation". In Ever Since Darwin, W.W. Norton, New York, pp. 214-21.
Gould, S.J.: 1981, The Mismeasure of Man, W.W. Norton, New York.
Gould, S.J.: 1987, Time's Arrow, Time's Cycle, Harvard University Press, Cambridge, Massachusetts.
Haraway, D.: 1989, Primate Visions, Routledge, New York.
Hardin, G.: 1968, "The Tragedy of the Commons", Science 162, 1243-48.
Harding, S.: 1991, Whose Science? Whose Knowledge?, Cornell University Press, Ithaca, New York.
Jones, J.M.: 1981, Bad Blood, Free Press, New York.
Kass, L.: 1985, Toward a More Natural Science, Free Press, New York.
Kass-Simon, G. & Farnes, P. (eds.): 1990, Women of Science: Righting the Record, Indiana University Press, Bloomington.
Lewontin, R.C., Rose, S. & Kamin, L.J.: 1984, Not in Our Genes, Pantheon Books, New York.
Longino, H.: 1990, Science as Social Knowledge: Values and Objectivity in Scientific Inquiry, Princeton University Press, Princeton.
Manning, K.: forthcoming, "Gender, Race and Science", Topical Essays Project, History of Science Society, Seattle.
Martin, B.: 1991, Scientific Knowledge in Controversy: The Social Dynamics of the Fluoridation Debate, State University of New York Press, Albany.
Merton, R.: 1973, "The Normative Structure of Science". In The Sociology of Science, University of Chicago Press, Chicago, pp. 267-78.
Newton, L.H. & Dillingham, C.K.: 1997, Watersheds 2, Wadsworth, Belmont, California.
Orlans, B.: 1993, In the Name of Science: Issues in Responsible Animal Experimentation, Oxford University Press, Oxford.
Proctor, R.: 1991, Value-Free Science?: Purity and Power in Modern Knowledge, Harvard University Press, Cambridge, MA.
Rescher, N.: 1969, "The Allocation of Exotic Medical Lifesaving Therapy", Ethics 79, 173-86.
Rossiter, M.: 1982, Women Scientists in America: Struggles and Strategies to 1940, Johns Hopkins University Press, Baltimore.
Sagoff, M.: 1992, "Technological Risk: A Budget of Distinctions". In D.E. Cooper and J.A. Palmer (eds.), The Environment in Question, Routledge, New York, pp. 194-211.
Schiebinger, L.: 1990, "The Anatomy of Difference: Race and Gender in Eighteenth Century Science", Eighteenth-Century Studies 23, 387-406.
Shapin, S.: 1979, "The Politics of Observation: Cerebral Anatomy and Social Interests in the Edinburgh Phrenology Disputes". Sociological Review Monographs 27, 139-78.
Shapin, S. & Schaffer, S.: 1985, Leviathan and the Air Pump, Princeton University Press, Princeton.
Sheldon, M., Whitely, W.P., Folker, B., Hafner, A.W. & Gaylin, W.: 1989, "Nazi Data: Dissociation from Evil. Commentary", Hastings Center Report 19(4), 16-18.
Shrader-Frechette, K.S.: 1991, Risk and Rationality, University of California Press, Berkeley.
Smith-Rosenberg, C. & Rosenberg, C.: 1973, "The Female Animal: Medical and Biological Views of Woman and Her Role in Nineteenth-Century America", Journal of American History 60, 332-56.
Takacs, D.: 1996, The Idea of Biodiversity: Philosophies of Paradise, Johns Hopkins University Press, Baltimore.
Wimsatt, W.C.: 1981, "Robustness, Reliability and Overdetermination". In M. Brewer and B. Collins (eds.), Scientific Inquiry and the Social Sciences, Jossey-Bass, San Francisco, pp.124-63.
Ziman, J.: 1967, Public Knowledge, Cambridge University Press, Cambridge.
*This text is based on Douglas Allchin, "Values in Science and in Science Education," in International Handbook of Science Education, B.J. Fraser and K.G. Tobin (eds.), 2:1083-1092, Kluwer Academic Publishers (1998).