Science and Uncertainty

Science and Regulation

The Precautionary Principle and the Limits of Science

M. MacGarvin
University of Aberdeen, Department of Zoology, Culterty Field Station, Newburgh, Ellon, AB41 0AA, United Kingdom.

ABSTRACT: In this paper I set out to do three things. First I outline a case example where severe limits in our understanding have required a shift towards a precautionary policy. The example is international marine pollution policy in the North East Atlantic, including the North Sea and the Baltic Sea. The limits to our understanding discussed here are those associated with biological monitoring where this is intended to ensure the health of the marine environment. I argue that our very limited understanding of marine ecology prevents us from determining the effects of contamination upon the marine ecosystem.

I then take a step back to draw some general principles about the precautionary principle and its application to environmental contamination issues. I comment on the discussion of whether the precautionary principle is a scientific or political concept, trying along the way to unravel some of the very human aspects that are tangled up in this debate. I then argue that it is important to understand that incorrect and incautious conclusions arise from an overlapping hierarchy of three categories: scientific ignorance, scientific uncertainty and corrupted science.

Finally I make some suggestions about the implementation of the precautionary principle and, within this, the role of environmental science. I suggest that the response varies depending on whether one is dealing with corruption, uncertainty or ignorance. I conclude that we must find processes that only allow natural substances to pass beyond our total control into the environment, and that they should do so at only a small fraction of natural, local, flux levels. If we are to achieve this for the global community it will require a revolution in manufacturing processes, for which we will need to take our inspiration from the remarkably problem-free chemical processes in living organisms. Monitoring will then have a secondary role, of recording the reduction and maintenance of contaminants to low levels, removing from it the predictive burden that it is unable to bear.

INTRODUCTION

Until about a decade ago it was thought that the North East Atlantic was so vast that it would be little affected by human activities. However this belief was shaken by the discovery that contamination was by no means rapidly diluted and dispersed from inshore waters and coastal seas such as the North Sea and the Baltic (Stebbing, 1992). This led to the emergence of the precautionary principle, which first appeared internationally in the First Ministerial Conference on the North Sea, held at Bremen in 1984, and which was used at the Second (1987) and Third (1990) Conferences to justify cuts in contaminants such as synthetic chemicals, heavy metals, and nutrients, and the cessation of practices such as ocean incineration, because the burden of suspicion against them made it prudent to limit or prevent such activities.

Despite the reference to precaution in these fora, it has been implemented at a fairly superficial level. It is assumed, in principle, that we are capable of judging levels of marine contamination that have no significant effects on the ecosystem. The problem is thought to be the degree of uncertainty surrounding safe levels of certain substances, and this is tackled by 'giving the environment the benefit of the doubt' and by identifying areas for further research that are expected to provide results in the short term. A typical outcome has been the setting of reduction targets for contaminant inputs, usually of 50 or 70%, over a ten-year period (Anon, 1990).
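As a rough arithmetic sketch of what such targets imply in practice - my own illustration, not part of the conference texts, and assuming for simplicity that inputs are cut at a constant year-on-year rate - a 50% reduction over ten years corresponds to a cut of roughly 6.7% each year, and a 70% reduction to roughly 11.3%:

```python
def annual_rate(total_reduction, years):
    """Constant annual reduction rate that compounds to the given
    total reduction over the given number of years."""
    remaining = 1.0 - total_reduction          # fraction of input left at the end
    return 1.0 - remaining ** (1.0 / years)    # per-year fractional cut

# The 50% and 70% ten-year targets cited above:
print(round(annual_rate(0.50, 10), 3))  # → 0.067
print(round(annual_rate(0.70, 10), 3))  # → 0.113
```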

A HOUSE BUILT ON SAND: THE PROBLEMS WITH BIOLOGICAL MONITORING

At one time, when the main interest in marine contamination was to ensure that food from the sea was safe for human consumption, it was possible to do this without any detailed knowledge of the ecosystem. But when the goal shifts to protection of the ecosystem itself we are faced either with monitoring all species - clearly impossible - or with searching for and selecting certain species that act as linchpins in the foodweb.

But determining the existence of any such linchpin species is extremely challenging. It requires a detailed knowledge of the interactions of many different species at different stages of their life cycle, about which we are largely ignorant. The implementation of biological monitoring programmes by bodies such as ICES (International Council for the Exploration of the Sea) and PARCOM (Paris Commission) has therefore lagged well behind those for the measurement of the levels of contaminants and the determination of 'safe' levels of contamination in marine foodstuffs for human consumption (McIntyre & Pearce, 1980; ICES Advisory Committee on Marine Pollution, 1985; MacGarvin & Johnston, 1988; Stebbing, Dethlefsen & Thurberg, 1990; Hoogweg, Ducrotoy & Wettering, 1991; MacGarvin & Johnston, 1993; MacGarvin, 1994).

There are at least four areas that severely challenge our ability to predict the effect of our actions in marine ecosystems: confounding effects, limited knowledge of marine ecology, wider uncertainties in theoretical ecology, and factors such as chaotic fluctuations whose significance we may not be able to resolve until many decades of data have been collected.

CONFOUNDING EFFECTS

Modern ecological methods emphasise the importance of rigorous experimental and statistical designs that allow different hypotheses to be isolated and tested (Peters, 1991). High natural variability in areas such as the North Sea makes it extremely difficult to determine human effects (Gunkel, 1994), and to make matters worse many different human activities take place. But research by different groups almost invariably concentrates on single issues, such as fishing or contamination, without attempting to untangle the interactions. In addition, research is confounded because the gathering of basic information during this century has been coupled with unprecedented and increasing human effects. As a result we can have no clear idea of how the natural ecosystem would function, nor do we have true controls with which to compare areas affected by human activities.

THE COMPLEXITIES OF MARINE ECOLOGY

Toxicologists assume that it is possible to monitor just a few species, using these as indicators for the whole ecosystem, and that the choice of such species presents few difficulties. Until the early 1980s the principle might have found partial support in research on temperate rocky intertidal areas, whose population dynamics were then regarded as the best understood of any marine habitat (Underwood & Denley, 1984). This research indicated the existence of just a few 'keystone' species, where changes in their population levels resulted in a cascade of effects throughout the food-web. Alteration of the population levels of most other species, it was thought, had little effect (Dayton, 1984).

However keystone species need not be particularly common or obvious; instead their discovery requires a detailed programme of population manipulation and exclusion. As a result the number of established cases remains relatively small, albeit with a global distribution (Paine, 1971; Dayton et al., 1974; Menge, 1976; Lubchenco & Menge, 1978). The original work of Paine (1966) in an intertidal area in Washington State remains the best known. Here the exclusion of the starfish Pisaster ochraceus resulted in a dramatic sequence of changes, over the space of two years, from a diverse community to one dominated by the mussel Mytilus californianus. But even this work does not mean that a species such as Pisaster can be used as a universal indicator - later work showed that it is, in the words of Paine (1980), 'just another species' at another location because of the absence of M. californianus.

In soft-bottomed habitats the manipulation experiments necessary to explore community regulation are far more difficult, and there are very few candidates for keystone species. Meanwhile the difficulty of experimental manipulation in open water is an important reason why ecologists such as McGowan & Walker (1979) and Dayton (1984) were forced to concede that little progress had been made in resolving the 'paradox of the plankton' (that vast numbers of species apparently find ways of coexisting in this simplest possible of habitats) pointed out over 30 years ago by Hutchinson (1961).

And while the current emphasis on physical processes in planktonic ecology gives another dimension to the analysis (e.g. Mann & Lazier, 1991), it is not clear that this guarantees a resolution of the paradox.

In reality the few species used for biological monitoring are not keystone species. Instead they are selected because they conveniently absorb contaminants (mussels), are quick and easy to test (the oyster bioassay) or need in any case to be monitored for human health (commercial fish species). While the exclusion of keystone species from such programmes is understandable, it means that monitoring programmes intended to protect marine habitats do not have a firm scientific foundation.

Moreover, the assumption that most non-keystone species play minor roles in ecosystems and, by implication, can be ignored by biological monitoring programmes must also be questioned. Species may be innocuous because they are kept rare by predators, parasites or disease, and can increase dramatically when introduced into a new habitat free of such constraints, as a host of terrestrial, freshwater and marine examples testify. It is possible that contaminants (or other human activities) could also adversely affect these dynamics, resulting in outbreaks of species which then disrupt the ecosystem. More complex effects could occur involving a web of species. In either case work based on individual species may give no hint of human involvement.

Finally, this concept of keystone species was nested within a wider model of rocky shore community structure that predicted that physical survival would be the most important factor on exposed coasts, predator control would be the most important force in sheltered areas, with competition between species on the same trophic level playing the most important role at sites with intermediate physical exposure (Menge & Sutherland, 1976). But this synoptic model has also been found wanting. A growing unease that its conclusions were simplistic was given focus by Underwood et al. (1983) and Underwood & Denley (1984). Their work on intertidal barnacles indicated that the number of adults was determined by the number of larvae that survived their plankton phase and settled onto the rocks - which varied hugely from one year to the next. Perhaps the most remarkable feature is that many workers before them had been aware of such huge annual variation in rocky shore recruitment, but had simply seen it as a nuisance to be filtered out of the results, rather than as the key feature. The result of this and other work (Gaines & Roughgarden, 1985; Lewin, 1986; Roughgarden, 1989) is that the ecology of rocky shores, once considered a showcase for marine ecology - indeed all ecology - has gone back to reconsider basic principles (Underwood & Denley, 1984).

THEORETICAL ECOLOGY

The reassessment of rocky shore ecology has been just a small part of a general stock-taking by theoretical ecologists three decades after the subject changed direction in the 1950s to take a more rigorous scientific and mathematical approach. The 1980s saw an increasing sense of frustration at the apparent inability to produce a grand unifying theory. Heated debates occurred, at first over the relative importance of competition or natural enemies in the regulation of populations, and then increasingly over flaws in experimental design, statistical analysis, and the formulation and testing of hypotheses in general (Strong et al., 1984; Roughgarden et al., 1989; Peters, 1991). Underlying this has been a more profound mood change in theoretical ecology, a feeling that the new methods have not delivered the goods expected of them 30 years ago. Major journals now carry editorials asking why ecology, unlike biochemistry or the physical sciences, has not solved the problems it set itself (Lawton, 1991). Other influential ecologists such as Roughgarden (1989) and Kareiva (1989) have concluded that the science lacks solid foundations, and that for the time being ecologists should give up any idea of forming grand unifying theories, and concentrate instead on far more narrowly defined studies.

The fact that theoretical ecologists, working in far easier fields than marine ecology, are now asking such searching questions of their methods highlights how unreasonable it is to expect that we can predict the effect of human actions upon marine ecosystems with any accuracy.

CHAOS

The possibility of chaotic population fluctuations is yet another unwelcome complication for applied scientists. Assessing the significance of chaotic population fluctuations will be formidable, because it will require new data, carefully gathered, extending over perhaps hundreds of generations. No research programme, it seems, however vast, can do anything to speed the gathering of such information.

The uncertainties are highlighted in an analysis by Godfray & Blyth (1990) of the population fluctuations of copepods. The data were gathered by the Continuous Plankton Recorder (CPR) in the seas around the British Isles over more than 40 years - one of the most comprehensive data sets available. Yet sophisticated mathematical decoding techniques were unable to determine whether the fluctuations in copepod numbers during this period were due to random events, or to a simple (but unknown) cause that resulted in chaotic fluctuations. The run of data is simply too short for such techniques. The implication is that one might have to gather data for hundreds of years before being able (possibly) to determine whether a factor such as increased nutrient concentrations has an effect on an ecosystem!
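The core idea behind such decoding techniques can be sketched with a toy example - my own illustration, not the method of Godfray & Blyth: distinguishing deterministic chaos from randomness amounts to asking whether the future is predictable from the past. In the artificially easy case below, a simple nearest-neighbour forecast succeeds on a one-dimensional chaotic map but fails on genuinely random data; real plankton records, with unknown dimensionality, observational noise and short runs, offer no such clean contrast.

```python
import random

def logistic_series(n, x0=0.2, r=4.0):
    """Deterministic but chaotic series from the logistic map."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

def forecast_error(series, train_frac=0.5):
    """Mean one-step forecast error of a nearest-neighbour predictor:
    each test value is 'predicted' by the successor of its closest
    match in the training half.  Low error suggests determinism."""
    split = int(len(series) * train_frac)
    errors = []
    for i in range(split, len(series) - 1):
        # nearest neighbour in the training half (its successor must
        # also lie in the training half, hence split - 1)
        j = min(range(split - 1), key=lambda k: abs(series[k] - series[i]))
        errors.append(abs(series[j + 1] - series[i + 1]))
    return sum(errors) / len(errors)

random.seed(1)
chaotic = logistic_series(400)
noisy = [random.random() for _ in range(400)]
print(forecast_error(chaotic) < 0.05 < forecast_error(noisy))  # → True
```

The contrast only emerges because 400 points of a one-dimensional map is a generous data set; shorten the series or add noise and the two cases become statistically indistinguishable, which is the predicament of the CPR record.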

Any one of these aspects sets severe limits on what we can expect to predict about the effect of contaminants in marine environments. Taken together they remove the scientific justification for basing pollution policy on the attempt to find 'safe' levels of contamination. Yet even the most recent assessments, such as the forthcoming Quality Status Report of the North Sea (Anon, 1993), while in places moving towards an acknowledgment of these problems, still respond in terms of the need for more research on key groups of organisms, making the assumption that conclusive results can be obtained, and that this can be done in a relatively short time.

THE PRECAUTIONARY PRINCIPLE: ESTABLISHING THE NATURE OF THE BEAST

So it is clear that there is still considerable scope for error in the NE Atlantic, an area that is accepted as having taken the lead in establishing the precautionary principle. Bearing these arguments in mind I now shift from the specific to the general, and from problems to solutions. But before one can work out how to implement the precautionary principle it is obviously important to have a good understanding of the nature of the beast. In this section I touch on the definition of the precautionary principle, move on to the debate over whether the principle is scientific or political in nature, and conclude by trying to establish a clearer idea of the various sources of incautious decisions.

DEFINING THE PRECAUTIONARY PRINCIPLE

Uncertainties of the type just described for the NE Atlantic have led to the recognition of a need for precaution, which is defined in Chambers Dictionary (1983 edition) as 'a caution or care beforehand: a measure taken beforehand' - in this case, before demonstration of a (potential) problem. As the precautionary principle applies to more than environmental contamination, for example to fisheries and habitat protection, this seems a satisfactory general definition.

A specific definition, geared mainly to contamination rather than physical disturbance, comes from various NE Atlantic fora. The Helsinki Convention (HELCOM) for the Baltic defines it as the need 'to take preventative measures when there is reason to assume that substances or energy introduced, directly or indirectly, into the marine environment may create hazards to human health, harm living resources and marine ecosystems, damaging amenities or interfere with other legitimate uses of the sea even when there is no conclusive evidence of a causal relationship between inputs and their alleged effects' (HELCOM, 1992).

IS THE PRECAUTIONARY PRINCIPLE A SCIENTIFIC OR POLITICAL CONCEPT?

There has been a small but vigorous debate, principally in the pages of the Marine Pollution Bulletin and the New Scientist, about whether the precautionary principle is a scientific or political concept (Grey, 1990a, b; Johnston & Simmonds, 1990, 1991; Mayer & Wynne, 1993; Milne, 1993). I must confess that I find the argument itself rather dull: what is more interesting are the presumptions that lead the participants to make the points that they do.

The heart of the matter

Strictly speaking one might argue that science is neutral, a matter of observable fact. This may be true, but if so it applies only to pure science. Once science is applied, it gains other aspects.

Regulatory bodies turn to scientists for advice, for example on safe levels of contamination in the marine environment. There are uncertainties here, and a level of judgement. If individuals state that their conclusion is based on a scientific judgement, then they must accept that the precautionary principle applies to science. If they believe that they are going beyond the science in an effort to be helpful to policymakers, then it is not science.

Personally, I would consider precaution primarily a political principle or, more accurately, a policy or managerial tool for dealing with uncertainty. But, as noted above, I find the debate about whether the precautionary principle is scientific or not a rather sterile one. What is perhaps more interesting are the assumptions and starting points of the participants in the debate.

Background assumptions

Some scientists have a deep-rooted unease about the precautionary principle. To them it seems in some way wishy-washy, pandering to uninformed opinion, or running against a scientific method of dealing with environmental concerns. The feeling might be summed up along the lines of 'Trying to find a scientific solution may not be perfect, but it's the best we have: abandon a rational approach and who knows where irrational tides will take us'.

The mistake here, I think, is assuming that a rational course is synonymous with the scientific approach adopted in environmental monitoring. It is also 'good science' to refuse to speculate beyond observed facts - an approach that can be accommodated by the precautionary principle. Indeed in other fields of science making claims without systematically eliminating the alternative hypotheses - which in effect is what marine scientists trying to set safe contamination levels are doing - is regarded as unsound.

For this one need look no further than the ecological issues outlined in the first section of this paper. Were one to present a paper on marine community dynamics to a theoretical journal such as Ecology or American Naturalist, claiming that changing nutrient levels in the North Sea had no significant effect on plankton community structure, yet failing to show how other factors were eliminated or which species were the key components, rapid rejection could be expected. Yet such claims or assumptions are made as a matter of course by the working groups associated with the international regulatory framework within the NE Atlantic - a far more crucial arena, so far as the environment is concerned. Something seems badly wrong with our sense of priorities!

Another aspect is that words often come with a surprising amount of baggage, which may result in misunderstanding amongst the participants of a debate. When it is stated that the precautionary principle is a political rather than a scientific concept, this should carry no pejorative content. But some people do not view the politician's influence on environmental policy in a positive light, and interpret the precautionary principle accordingly. Perhaps a better formulation is that the precautionary principle is primarily a (rational) policy or managerial concept.

A second example of problems with words may arise from the debate between scientists and environmentalists over the science vs. politics issue, with the environmentalists asserting that this is a scientific principle. One aspect possibly colouring this debate is that in the past a sometimes successful tactic of those who sought to retain the old status quo within various regulatory bodies was to stigmatise the environmentalists' basic objections about the burden of proof as unscientific and unquantifiable, and therefore to be excluded from consideration. This may make environmentalists sensitive about the label 'non-scientific', particularly if they believe that such practices still have power. The message for environmentalists is that rational arguments are not necessarily scientific, and for all of us that it helps if one understands the preconceptions of one's debating partner.

A HIERARCHY OF INCAUTIOUS DECISIONS

Perhaps rather more informative than the science vs. politics debate is to examine the various ways in which mistakes come to be made. It seems that one can divide these into three categories: ignorance, uncertainty, and corrupted science. These tend to shade into one another, and environmental problems tend to progress through these three categories.

Ignorance

Mistakes arising from ignorance (used here in a strictly technical sense) represent the deepest and most crucial aspect of the need for the precautionary principle - although this aspect currently receives little appreciation.

Science has to proceed by making certain assumptions, and early assumptions become deeply embedded in the theory. There is nothing wrong in this per se; indeed it is the only way to carry out science. But it does mean that adjustments at this level demand a paradigm shift, and that this often comes about only after a major and inarguable failure in prevailing explanations.

Bryan Wynne (Wynne, 1992) has provided one example of this. As a result of the Chernobyl disaster the soils of Cumbria, England, were contaminated with radioactive caesium. The levels were predicted to fall rapidly, but in fact persisted for several years. Upon investigation it turned out that the parameters buried in the models were based on experiments carried out, for different reasons, in the 1960s on clay-based soils. The affected soils in Cumbria were peat-based, which have a different binding capacity for caesium - hence the unexpected effect.

There are many other examples. The issue is not that the scientists involved are incompetent; they may well have been making what appeared to be the soundest of judgements. Who would have believed that DDT would have had the effects that it did, bearing in mind the toxicity testing that went before? Who would have believed that substances as apparently inert as CFCs - indeed they were used for that very reason - would turn out to have such devastating effects in the upper atmosphere?

Ignorance provides a severe challenge to policymakers. Many of the pressing environmental pollution problems that we now face were created out of ignorance. The issue is how we prevent analogous situations, about which we currently know nothing, from arising in the future.

Uncertainty

This is the aspect that people usually have in mind when they refer to the precautionary principle - for example, in the new North Sea Quality Status Report's discussion of potential problems from contaminants that require further research (Anon, 1993). It usually emerges only when clear signs of a problem become apparent, sparking off a research programme to identify the cause.

The nature of the debate that then ensues has been outlined in some detail in the first section of this paper. Superficially, at least, the problem is the practical one of establishing just how much toxicity testing has to be done. Views range from those who see it as a matter of relatively minor adaptations to existing monitoring programmes, through to those, including myself, who argue that to employ this method safely one would have to test a vast number of chemical and biological species, alone and in combination - impractical in terms of effort and cost. At a more detailed level, as we also saw in the first section, issues of uncertainty tend to be tangled up with those of ignorance.

Corrupted science

Again, this term is used in a technical sense, describing a situation where a scientific stance is driven by criteria set outside the field. At its most extreme, scientific caution is overruled by commercial or political concerns. One example is the doubts that existed among technical experts over the safety of pressurised water reactors such as Three Mile Island, which are widely believed to have been overridden by the pressing commercial need to see a return on investment.

At a more mundane level research funding may be dominated and driven by what a government department is prepared to fund, either through political considerations or through inertia. At an individual level it may prove understandably difficult for influential researchers to change long held beliefs even though the evidence no longer supports their conclusions.

Problems of corrupted science tend to occur as part of the end game of environmental politics, when the technical arguments have already been lost.


Paper presented at the Precautionary Principle Conference, Institute of Environmental Studies, University of New South Wales, 20-21 September.
