Section 4. Evaluation of Potential Indicators for Puget Sound

1. Indicator selection and organization

We began our evaluation of indicators by compiling a list of available indicators. To build on previous efforts, we selected indicators from three sources: a 2008 report titled “Environmental Indicators for the Puget Sound Partnership: A Regional Effort to Select Provisional Indicators (Phase 1)”; the PSP Action Agenda; and the 2009 PSP Technical Memoranda, “Identification of Ecosystem Components and Their Indicators and Targets” and “Ecosystem Status and Trends” (Puget Sound Partnership 2008a, O’Neill et al. 2008, Puget Sound Partnership 2009b, Puget Sound Partnership 2009c). In addition, a small number of indicators identified through a review of the regional literature (e.g., Wiley and Palmer 2008, DeGasperi et al. 2009) were included on the list of available indicators.

The authors of the “Environmental Indicators for the Puget Sound Partnership” report reviewed over 100 documents to create a list of more than 650 indicators that had been proposed or used in Puget Sound and the Georgia Basin (O’Neill et al. 2008). Using a set of screening criteria, they reduced the list to approximately 250 indicators rated “good” or “potential.” A further set of indicators was judged to be of “possible future” value but was not considered for use in that evaluation because no existing data were available; these indicators were nevertheless included in our evaluation. Finally, a small group of indicators identified but not evaluated in the 2008 work was also included in the PSSU process.

The PSP Action Agenda listed a subset of environmental indicators, which had been selected based on a review by the PSP Science Panel (Puget Sound Partnership 2008a). This list of 102 indicators was included in our evaluation process to ensure completeness.

In 2009, the PSP began a separate indicator selection process specifically guided by the Open Standards for the Practice of Conservation (Conservation Measures Partnership 2007, Puget Sound Partnership 2009b) which included the development of Focal Components and Key Attributes through a series of workshops. As summarized in the 2009 Technical Memorandum, “Identification of Ecosystem Components and Their Indicators and Targets,” the process resulted in the identification of over 160 indicators, including many associated with the Built Environment, Working Marine Industries, Working Resource Lands and Industries, Nature Oriented Recreation, and Aesthetics, Scenic Resources, and Existence Values (Puget Sound Partnership 2009b). These indicators were included in our evaluation, unless they had been previously evaluated and found to be theoretically unsound (O’Neill et al. 2008).

In a parallel effort, the PSP Technical Memorandum, “Ecosystem Status and Trends,” reported on a set of 43 indicators (Puget Sound Partnership 2009e). A subset of these indicators was used in the 2009 State of the Sound report; all were included for consideration.

Finally, with specific regard to indicators of Water Quantity, the literature identifies well over 150 unique indicators that can be used to track various aspects of the hydrologic flow regime (see Olden and Poff 2003). Rather than evaluating each of these indicators individually, we undertook a literature review to identify issues of potential concern in the Puget Sound region (see Section 5.5) and used its results to focus the choice of Water Quantity indicators for further evaluation.

The entire set of indicators was combined and redundant indicators removed, yielding a composite list of over 250 preliminary indicators for evaluation. The indicators were then organized according to the Key Attributes of our framework (see Figure 2 in Section 3). Our initial organization was based solely on expert opinion and recommendations. The process identified several indicators that could be appropriately categorized under more than one Key Attribute. However, the evaluation process allowed for the reorganization or reassignment of indicators based on the results of the review of the literature.
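The compilation step described above (merging candidate lists from several sources and removing redundant entries) can be sketched as follows. This is an illustrative sketch only; the function and indicator names are hypothetical, the actual merge relied on expert judgment rather than simple name matching.

```python
def compile_indicators(*source_lists):
    """Merge indicator lists from multiple sources, dropping duplicates.

    Duplicate detection here is a simple normalized-name comparison,
    which stands in for the expert review used in the actual process.
    """
    seen = set()
    merged = []
    for source in source_lists:
        for name in source:
            key = name.strip().lower()
            if key not in seen:
                seen.add(key)
                merged.append(name)
    return merged

# Hypothetical example inputs (not the actual source lists):
phase1 = ["Dissolved oxygen", "Chinook salmon abundance"]
action_agenda = ["dissolved oxygen", "Eelgrass area"]
combined = compile_indicators(phase1, action_agenda)
print(combined)  # ['Dissolved oxygen', 'Chinook salmon abundance', 'Eelgrass area']
```

The first occurrence of each indicator is kept, so the order of the source lists determines which spelling survives the merge.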

Once organized, each individual indicator was evaluated against a set of evaluation criteria, as described below. Importantly, the aim of this process was to support the science-policy processes of the PSP by evaluating the degree to which indicators meet these criteria.

2. Indicator Evaluation Criteria

There are nearly as many guidelines and criteria for developing and selecting individual indicators as there are indicators. Our summary of criteria for relevant and reliable indicators builds on the recommendations in the indicator report to the PSP (O’Neill et al. 2008) and on numerous other sources (Kurtz et al. 2001, Landres et al. 1988, Niemeijer and de Groot 2008, Rice and Rochet 2005, Harwell et al. 1999, Doren et al. 2009, O’Connor and Dewling 1986, Noss 1990, Jackson et al. 2000, Rice 2003, Jorgensen et al. 2005, Jennings 2005). These criteria apply to indicators of ecosystem state, the focus of this chapter. However, the approach and criteria we develop here are immediately transferable to the rigorous evaluation of driver and pressure indicators as well.

We divide indicator criteria into three categories: primary considerations, data considerations, and other considerations. Primary considerations are essential criteria that should be fulfilled by an indicator in order for it to provide scientifically useful information about the status of the ecosystem in relation to PSP goals. Data considerations relate to the actual measurement of the indicator. Data considerations criteria are listed separately to highlight ecosystem indicators that meet all or most of the primary considerations, but for which data are currently unavailable. Other considerations criteria may be important but not essential for indicator performance.

Other considerations are meant to incorporate non-scientific information into the indicator evaluation process. Ecosystem indicators should do more than simply document the decline or recovery of ecosystem health; they must also provide information that is meaningful to resource managers and policy makers (Orians and Policansky 2009). Because indicators serve as the primary vehicle for communicating ecosystem status to stakeholders, resource managers, and policymakers, they may be critical to the policy success of EBM efforts, where policy success can be measured by the relevance of laws, regulations, and governance institutions to ecosystem goals. Importantly, policy success does not necessarily produce effective management, since it is possible to be successful at implementing poor policy. Nonetheless, advances in public policy and improvements in management outcomes are most likely if indicators carry significant ecological information and resonate with the public.

It should be noted that not all of the criteria listed need be weighted equally, nor must an indicator meet all of the criteria to be valuable or useful for a specific application. We suggest that scientifically credible indicators meet the “primary considerations” outlined below, and that further selection and evaluation be based on local needs and guided by the data and other considerations. A discussion of potential ranking is in Section 5.6.

The criteria we used are as follows:

Primary considerations

  1. Theoretically-sound: Scientific, peer-reviewed findings should demonstrate that indicators can act as reliable surrogates for ecosystem attribute(s).
  2. Relevant to management concerns: Indicators should provide information related to specific management goals and strategies.
  3. Responds predictably and is sufficiently sensitive to changes in a specific ecosystem attribute(s): Indicators should respond unambiguously to variation in the ecosystem attribute(s) they are intended to measure, in a theoretically- or empirically-expected direction.
  4. Responds predictably and is sufficiently sensitive to changes in a specific management action(s) or pressure(s): Management actions or other human-induced pressures should cause detectable changes in the indicators, in a theoretically- or empirically-expected direction, and it should be possible to distinguish the effects of other factors on the response.
  5. Linkable to scientifically-defined reference points and progress targets: It should be possible to link indicator values to quantitative or qualitative reference points and target reference points, which imply positive progress toward ecosystem goals.
  6. Complements existing indicators: This criterion is applicable in the selection of a suite of indicators, performed after the evaluation of individual indicators in a post-hoc analysis. Sets of indicators should be selected to avoid redundancy, increase the complementarity of the information provided, and ensure coverage of Key Attributes.

Data considerations

  1. Concrete: Indicators should be directly measurable.
  2. Historical data or information available: Indicators should be supported by existing data to facilitate current status evaluation (relative to historic levels) and interpretation of future trends.
  3. Operationally simple: The methods for sampling, measuring, processing, and analyzing the indicator data should be technically feasible.
  4. Numerical: Quantitative measurements are preferred over qualitative, categorical measurements, which in turn are preferred over expert opinions and professional judgments.
  5. Broad spatial coverage: Ideally, data for each indicator should be available in all PSP Action Areas.
  6. Continuous time series: Indicators should have been sampled on multiple occasions, preferably without substantial time-gaps between sampling.
  7. Spatial and temporal variation understood: Diel, seasonal, annual, and decadal variability in the indicators should ideally be understood, as should spatial heterogeneity/patchiness in indicator values.
  8. High signal-to-noise ratio: It should be possible to estimate measurement and process uncertainty associated with each indicator, and to ensure that variability in indicator values does not prevent detection of significant changes.
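The signal-to-noise criterion (item 8 above) can be illustrated with a minimal Monte Carlo sketch: given a known trend in an ecosystem attribute and a given level of observation noise, how often does a simple slope estimate recover the direction of change? This is an assumption-laden toy example, not part of the PSSU evaluation; a real analysis would use a formal significance test rather than the sign of the fitted slope.

```python
import random
import statistics

def detection_power(trend_per_year, noise_sd, years, n_sims=2000, seed=1):
    """Fraction of simulated noisy time series in which an ordinary
    least-squares slope recovers the (positive) direction of a known trend.
    Illustrative sketch only: sign recovery is a crude proxy for detection.
    """
    rng = random.Random(seed)
    t = list(range(years))
    t_mean = sum(t) / years
    s_tt = sum((ti - t_mean) ** 2 for ti in t)
    detected = 0
    for _ in range(n_sims):
        # Simulate a linear trend plus Gaussian measurement noise.
        y = [trend_per_year * ti + rng.gauss(0, noise_sd) for ti in t]
        y_mean = statistics.fmean(y)
        slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / s_tt
        if slope > 0:
            detected += 1
    return detected / n_sims

# Hypothetical comparison: the same trend is easier to detect when noise is low.
p_low_noise = detection_power(1.0, noise_sd=0.5, years=10)
p_high_noise = detection_power(1.0, noise_sd=10.0, years=10)
```

Under these illustrative settings, detection probability drops as noise grows, which is exactly why variability in indicator values must not swamp the changes one hopes to detect.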

Other considerations

  1. Understood by the public and policymakers: Indicators should be simple to interpret, easy to communicate, and public understanding should be consistent with technical definitions.
  2. History of reporting: Indicators already perceived by the public and policymakers as reliable and meaningful should be preferred over novel indicators.
  3. Cost-effective: Sampling, measuring, processing, and analyzing the indicator data should make effective use of limited financial resources.
  4. Anticipatory or leading indicator: A subset of indicators should signal changes in ecosystem attributes before they occur, and ideally with sufficient lead-time to allow for a management response.
  5. Regionally/nationally/internationally compatible: Indicators should be comparable to those used in other geographic locations, in order to contextualize ecosystem status and changes in status.

3. Indicator Evaluation Process

After constructing the framework, explicitly defining the evaluation criteria, and selecting and organizing the individual indicators, we evaluated each indicator individually. Our intent was to assess each indicator against each evaluation criterion by reviewing peer-reviewed publications and reports. We chose this benchmark because it is consistent with the criterion of peer review used by other chapters of the Puget Sound Science Update, and it is relatively easy to apply in a consistent fashion. However, we recognize the value of non-peer-reviewed documents as well as the opinion of expert panels; consequently, where we found such documentation, we included it, while noting that it is not peer-reviewed. The result is a matrix of indicators and criteria that contains specific references and notes in each cell, summarizing the literature support for each indicator against the criteria. We reiterate here that our goal is to review and evaluate indicators that could inform the policy-science process underway in the Puget Sound Partnership; we do not recommend a final indicator portfolio.

Some specific points on the evaluation process:

  1. The intent of including references was to provide sufficient evidence that the indicator met (or failed to meet) each of the specific evaluation criteria. Based on the references, an independent evaluator should be able to understand the important points of the process.
  2. As is the standard for the entire PSSU, we required references to be peer-reviewed publications or reports. Internal agency documents were included when it was clear that there had been an explicit peer-review process.
  3. There was a preference for literature based on studies conducted in the Puget Sound region.
  4. The evaluation notes were meant to be of sufficient detail to allow an independent evaluator to understand the basis for the conclusion, when it was not otherwise obvious from the references.
  5. Each indicator was evaluated against the specific Key Attribute it was meant to describe. If, however, the detailed evaluation indicated that the indicator better described a different Key Attribute, the reviewer was free to reassign the indicator.
  6. In some instances no references were found relating an indicator to a specific criterion. These cells were left blank.
  7. Some of the Data Considerations were evaluated by a simple yes/no response when the conclusion was obvious (e.g., concrete, historical data, operationally simple, numerical, spatial coverage, continuous).
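One way to represent the indicator-by-criterion matrix described above is sketched below. The indicator, criterion, and citation names are hypothetical placeholders; the sketch simply shows how cells can hold references and notes, how obvious Data Considerations reduce to yes/no values, and how cells with no supporting literature stay blank.

```python
# Hypothetical sketch of the evaluation matrix (all names illustrative).
evaluation_matrix = {
    "Dissolved oxygen": {
        # Criteria with literature support hold references plus notes:
        "theoretically_sound": {
            "references": ["Author et al. 2005"],  # placeholder citation
            "notes": "Peer-reviewed support as a surrogate for hypoxia.",
        },
        # Data Considerations with obvious conclusions are simple yes/no:
        "concrete": True,
        "historical_data": True,
        "operationally_simple": True,
        # Criteria with no references found are left blank (absent keys).
    },
}

def cell(indicator, criterion):
    """Return the evaluation cell for an indicator/criterion pair,
    or None for a blank cell (no literature found)."""
    return evaluation_matrix.get(indicator, {}).get(criterion)

print(cell("Dissolved oxygen", "concrete"))        # True
print(cell("Dissolved oxygen", "cost_effective"))  # None (blank cell)
```

Keeping blank cells distinguishable from negative findings matters: a blank cell means no evidence was found, not that the indicator failed the criterion.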

Certain criteria proved to be problematic during the evaluation. These included:

  1. Relevant to Management Concerns. It was not always obvious to a reviewer if a particular indicator was relevant to management concerns. Management concerns were not always clearly documented or lacked specificity. Often, PSP background documents were referenced based on the presumption that they accurately reflected management concerns.
  2. Understood by Public and Policy Makers. There is a lack of literature documenting the degree to which citizens or their representatives understand the meaning or intent of specific ecosystem indicators (or ecological concepts). The evaluation of an indicator under this criterion is often presumptive and may vary depending on the reviewer.
  3. Cost Effective. The value of the information from an indicator was difficult to determine. Cost effectiveness may be measured by the value of decisions made based on the new information from the indicator. This is difficult because not only are decision scenarios complex and difficult to evaluate on a cost basis, but it is also difficult to predict the range of potential decisions that could be made based on the new information. Further, cost effectiveness may be measured by the opportunity cost of choosing one indicator over another. Assuming that the suite of indicators (and information) is limited, the value of choosing one indicator over another is not only related to the new information gained, but also the cost of the information lost by not collecting data for other indicators.
  4. Complements Existing Indicators. It was necessary to have a complete suite of indicators in order to evaluate the complementarity and/or redundancy of each of the indicators. As mentioned above, this criterion should be applied in a post-hoc analysis.
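The post-hoc complementarity analysis mentioned in item 4 could, for example, flag candidate redundancies by checking whether two indicators' time series carry essentially the same information. The sketch below uses pairwise Pearson correlation as one simple proxy for redundancy; the data, threshold, and function names are illustrative assumptions, not the PSSU method.

```python
import math
from itertools import combinations

def pearson(x, y):
    """Pearson correlation coefficient (plain stdlib implementation)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def flag_redundant(series_by_indicator, threshold=0.9):
    """Return indicator pairs whose series are highly correlated,
    as candidates for redundancy in a post-hoc suite analysis."""
    return [
        (a, b)
        for a, b in combinations(series_by_indicator, 2)
        if abs(pearson(series_by_indicator[a], series_by_indicator[b])) > threshold
    ]

# Hypothetical time series for three indicators:
series = {"A": [1, 2, 3, 4, 5], "B": [2, 4, 6, 8, 10], "C": [5, 1, 4, 2, 3]}
print(flag_redundant(series))  # [('A', 'B')]
```

A check like this presupposes comparable time series for every indicator in the suite, which is precisely why the criterion can only be applied after the full suite is assembled.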

Key point: Indicators should be evaluated using widely accepted and transparent criteria. This chapter used criteria derived from the vast literature on ecosystem indicators, which were divided into three groups: 1) Primary considerations are essential criteria that should be fulfilled by an indicator; 2) Data considerations relate to the actual measurement of the indicator; 3) Other considerations criteria may be important but not essential for indicator performance.


Next Step: Evaluations were focused on the presence or absence of peer-reviewed evidence that an indicator met each criterion. Thus, we did not evaluate the rigor of the evidence. An important next step will be to carefully review the evidence and distinguish between weak and strong evidence.


Conservation Measures Partnership. 2007. Open Standards for the Practice of Conservation, Version 2.0.

DeGasperi, C.L., et al. 2009. Linking Hydrological Alteration to Biological Impairment in Urbanizing Streams of the Puget Lowland, Washington, USA. Journal of the American Water Resources Association 45(2):512-533.

Doren, R.F., et al. 2009. Ecological indicators for system-wide assessment of the greater everglades ecosystem restoration program. Ecological Indicators 9:S2-S16.

Harwell, M.A., et al. 1999. A framework for an ecosystem integrity report card. Bioscience 49(7):543-556.

Jackson, L.E., J. Kurtz, and W.S. Fisher. 2000. Evaluation guidelines for ecological indicators. EPA/620/R-99/005. U.S. Environmental Protection Agency, Office of Research and Development, Research Triangle Park, NC. 107 p.

Jennings, S. 2005. Indicators to support an ecosystem approach to fisheries. Fish and Fisheries 6(3):212-232.

Jorgensen, S.E., R. Costanza, and F.L. Xu. 2005. Handbook of ecological indicators for assessment of ecosystem health. Boca Raton (FL): CRC Press.

Kurtz, J.C., L.E. Jackson, and W.S. Fisher. 2001. Strategies for evaluating indicators based on guidelines from the Environmental Protection Agency’s Office of Research and Development. Ecological Indicators 1(1): 49-60.

Landres, P.B., J. Verner, and J.W. Thomas. 1988. Ecological uses of vertebrate indicator species: a critique. Conservation Biology 2(4):316-328.

Niemeijer, D. and R.S. de Groot. 2008. A conceptual framework for selecting environmental indicator sets. Ecological Indicators 8(1):14-25.

Noss, R.F. 1990. Indicators for monitoring biodiversity: a hierarchical approach. Conservation Biology 4(4):355-364.

O'Connor, J.S. and R.T. Dewling. 1986. Indices of marine degradation: their utility. Environmental Management 10(3):335-343.

Olden, J.D. and N.L. Poff. 2003. Redundancy and the choice of hydrologic indices for characterizing streamflow regimes. River Research and Applications 19(2):101-121.

O'Neill, S.M., C.F. Bravo, and T.K. Collier. 2008. Environmental Indicators for the Puget Sound Partnership: A Regional Effort to Select Provisional Indicators (Phase 1). Summary Report. National Oceanic and Atmospheric Administration. Seattle (WA).

Orians, G.H. and D. Policansky. 2009. Scientific Bases of Macroenvironmental Indicators. Annual Review of Environment and Resources 34:375-404.

Puget Sound Partnership. 2008a. Puget Sound Action Agenda, Protecting and Restoring the Puget Sound Ecosystem by 2020. Olympia (WA).

Puget Sound Partnership. 2009b. Identification of Ecosystem Components and Their Indicators and Targets. Puget Sound Partnership.

Puget Sound Partnership. 2009c. Ecosystem Status and Trends. Puget Sound Partnership.

Puget Sound Partnership. 2009e. Ecosystem Status & Trends, in A 2009 Supplement to State of the Sound Reporting. Puget Sound Partnership.

Rice, J.C. and M.-J. Rochet. 2005. A framework for selecting a suite of indicators for fisheries management. ICES Journal of Marine Science 62(3):516-527.

Wiley, M.W. and R.N. Palmer. 2008. Estimating the impacts and uncertainty of climate change on a municipal water supply system. Journal of Water Resources Planning and Management-ASCE 134(3):239-246.