6. Ranking Puget Sound Indicators

Terminology and Concepts

Ecosystem assessment indicator

Technically robust and rigorous metric used by scientists and managers to understand ecosystem structure and function

Improving indicator

Indicator that is increasing faster in the short-term but slower in the long-term than an index that captures aggregate changes in multiple indicators

Lagging indicator

Indicator that is increasing slower in the short- and long-term than an index that captures aggregate changes in multiple indicators

Leading indicator

Indicator that is increasing faster in the short- and long-term than an index that captures aggregate changes in multiple indicators

Other considerations

Indicator evaluation criteria that make an indicator more useful, but without which an indicator still remains scientifically informative

Ranking scheme

Approach used to weight indicator evaluation criteria

Slipping indicator

Indicator that is increasing faster in the long-term but slower in the short-term than an index that captures aggregate changes in multiple indicators

Vital sign indicator

Scientifically meaningful, but simple, metric that can generally inform the public and policy makers about the state of the ecosystem
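
As one way to make the four trend terms concrete, the short sketch below classifies an indicator by comparing its short- and long-term trends with those of an aggregate index. The trend values and the `classify` helper are hypothetical illustrations, not part of the assessment framework itself.

```python
# Hypothetical sketch of the four trend terms defined above. An indicator's
# short- and long-term trends are compared with those of an aggregate index;
# the trend values are made-up rates of change, purely for illustration.
def classify(ind_short, ind_long, idx_short, idx_long):
    faster_short = ind_short > idx_short  # faster than the index in the short term?
    faster_long = ind_long > idx_long     # faster than the index in the long term?
    if faster_short and faster_long:
        return "leading"
    if not faster_short and not faster_long:
        return "lagging"
    if faster_short:
        return "improving"  # faster short-term, slower long-term
    return "slipping"       # faster long-term, slower short-term

# Example: an indicator rising 2%/yr recently but only 0.5%/yr over the long
# term, against an index rising 1%/yr on both time scales -> "improving"
print(classify(ind_short=2.0, ind_long=0.5, idx_short=1.0, idx_long=1.0))
```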

The matrix of ecosystem indicators and indicator evaluation criteria provides the basis for ranking indicators. However, ranking indicators requires careful consideration of the relative importance of evaluation criteria. The importance of the criteria will certainly vary depending on the context within which the indicators are used and the people using them. Thus, ranking requires that managers and scientists work together to weight criteria. Failure to weight criteria is, of course, a decision to weight all criteria equally.

As an example of how our matrix could be used to rank indicators, we compare two food web indicators, ratfish/flatfish and jellyfish, using different weighting schemes. We provide these examples simply as an illustration, not to advocate one weighting scheme versus another.

One could begin by scoring each indicator 1.0 for a criterion when there is peer-reviewed evidence that it meets that criterion. When the evidence that an indicator meets a criterion is non-peer-reviewed or ambiguous, it receives a score of 0.5. When it does not meet a criterion, it receives a score of 0.

Equal weights: In this first scheme, we weight all criteria equally. In this case, ratfish/flatfish receives a score of 10.5, while jellyfish scores 10 (out of a possible 19).
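
A minimal sketch of this scoring rule and the equal-weights ranking is given below. The criterion names and evidence levels are hypothetical placeholders rather than the actual 19-criterion matrix, so the totals do not match those in the text.

```python
# Hypothetical sketch of the scoring rule and the equal-weights scheme. The
# criteria and evidence levels below are illustrative placeholders; the real
# matrix has 19 criteria and the resulting totals differ from those in the text.
SCORE = {"peer": 1.0, "other": 0.5, "none": 0.0}  # peer-reviewed / non-peer or ambiguous / unmet

evidence = {
    "ratfish/flatfish": {
        "theoretically sound": "peer",
        "relevant to management": "other",
        "historical data available": "other",
        "broad spatial coverage": "other",
        "continuous time series": "other",
        "variation understood": "peer",
    },
    "jellyfish": {
        "theoretically sound": "peer",
        "relevant to management": "peer",
        "historical data available": "none",
        "broad spatial coverage": "other",
        "continuous time series": "none",
        "variation understood": "peer",
    },
}

def rank(evidence, weights=None):
    """Total weighted score per indicator; unlisted criteria get weight 1.0."""
    weights = weights or {}
    return {
        indicator: sum(weights.get(c, 1.0) * SCORE[level] for c, level in criteria.items())
        for indicator, criteria in evidence.items()
    }

print(rank(evidence))  # equal weights: {'ratfish/flatfish': 4.0, 'jellyfish': 3.5}
```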

New monitoring programs: Imagine, however, a case in which the availability of historical data is less important (e.g., when considering a new monitoring program). In this instance, one might wish to ignore data considerations such as “historical data available”, “broad spatial coverage”, “continuous time series”, and “variation understood”. In this scheme, the ranking of the indicators reverses, with jellyfish scoring 9.5 and ratfish/flatfish scoring 8.5 (out of 15).
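
Under the same hypothetical scores, ignoring the data criteria amounts to giving them zero weight in the `rank` helper sketched above, which is enough to reverse the illustrative ranking:

```python
# Hypothetical continuation of the sketch above: for a new monitoring program,
# the data criteria are given zero weight so they drop out of each total.
data_criteria = [
    "historical data available",
    "broad spatial coverage",
    "continuous time series",
    "variation understood",
]
print(rank(evidence, weights={c: 0.0 for c in data_criteria}))
# -> {'ratfish/flatfish': 1.5, 'jellyfish': 2.0}: the ranking reverses
```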

Discounting the importance of peer review: Our initial scoring discounts criteria that are not supported by peer-reviewed evidence. It is conceivable that in some settings practitioners might wish to weight non-peer-reviewed and peer-reviewed evidence equally. In this case, because much of the evidence supporting the data criteria for ratfish/flatfish is not in the peer-reviewed literature, the score for this indicator would increase to 14.5 (out of 19).
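
In the same hypothetical sketch, weighting non-peer-reviewed and peer-reviewed evidence equally is a one-line change to the scoring rule, and it chiefly benefits the indicator whose supporting data evidence is unpublished:

```python
# Hypothetical continuation: count non-peer-reviewed evidence the same as
# peer-reviewed evidence, then re-rank with equal criterion weights.
SCORE["other"] = 1.0
print(rank(evidence))
# -> {'ratfish/flatfish': 6.0, 'jellyfish': 4.0}: ratfish/flatfish, whose data
#    evidence is largely unpublished, gains the most
```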

Whatever ranking scheme is used, our matrix can serve as a useful starting place for sorting through large numbers of indicators. By carefully ranking indicators in a manner consistent with specific management and policy needs, and choosing to focus on high-ranked indicators for each attribute, a winnowing of indicators naturally takes place.