Each year, Federal and State government agencies spend in excess of $1 billion to monitor the quality of water. These programs are conducted to assess status and trends in water quality, to identify and rank existing and emerging problems, to design and implement resource-management programs, and to determine compliance with regulatory programs. Although the data that are collected by government agencies are useful to the individual organizations that sponsor the programs, future data users within the same organization and data users outside the collecting agency typically find it difficult to use existing data with confidence. The reasons for this situation are many:

A related problem for those providing data is that many regulatory and nonregulatory programs specify the methodology to be used in analyzing water samples. Although this provides each monitoring program with a measure of internal comparability, there has been virtually no methodologic consistency among programs. Therefore, data providers must respond to requests for different methods for determining the same constituent, often within the same measurement range. This program-specific approach to water-quality monitoring inappropriately and inefficiently increases the demands on limited resources while reducing the utility of the available water-quality information.

The Intergovernmental Task Force on Monitoring Water Quality (ITFM) has as its principal objective the development of an integrated, voluntary, nationwide strategy for water-quality monitoring. Implicit in this objective is the collection of comparable data of known quality. This appendix presents the approach of the ITFM Data Collection Methods Task Group (Task Group) to the collection of samples and the analysis of environmental data in a manner that produces comparable data and permits the merger of data from many sources into definable data sets to address the needs of the user community. In this appendix, sampling, sample handling, field and laboratory methods, and the data qualifiers that are used to describe these activities are considered, and an institutional framework to encourage evaluation and implementation of the component principles of data comparability is proposed. Data comparability is defined by the ITFM as the characteristics that allow information from many sources to be of definable or equivalent quality so that it can be used to address program objectives not necessarily related to those for which the data were collected.

Achieving data comparability and communicating the characteristics of the data that permit assessment of comparability (utility) by a secondary user are the key technical issues to be addressed. The issues involved in achieving data comparability to maximize data utilization are consistent with operating in a well-defined quality system. Methods and procedures need to be fully described, validated, and performed by competent practitioners, and performance needs to be evaluated against a reference. These requirements are equally applicable to field and laboratory data and physical, chemical, and biological measures. However, the extent to which they can be applied varies significantly and is discussed in the following sections.

Prelaboratory Practices

Samples must represent, as closely as possible, the water-quality characteristic or biological community that is being evaluated. In the last few years, there has been a renewed recognition that prelaboratory sample-collection methods can result in dramatically different concentrations of analytes and other water components being delivered to the analyst. Although this is neither new nor contrary to most people's intuition, the problems associated with prelaboratory techniques, such as sampling, sample processing, preservation, containers, and shipping conditions, can be expected to increase. Laboratory equipment and techniques continue to be developed that push the detection limits of target analytes well below concentrations that can result from contamination introduced in prelaboratory processes. For example, particulates from the sampling environment can result in elevated concentrations of dissolved trace metals, and different filtration pressures and volumes can result in dramatically different constituent concentrations measured from the same sample by using the same pore-size filter. Thus, more skill will be required to perform the prelaboratory work as demands increase to measure lower and lower constituent concentrations more precisely.

For many analytes, the equivalency of prelaboratory techniques must be demonstrated if the results reported by different groups are to be compared. There is no longer any question that the individual sample collector must continually demonstrate competence in prelaboratory techniques if the resulting analyses are to be internally reliable or comparable with the results of other groups. For chemical measurements, analytes of this nature generally include constituents measured in concentrations of less than 10 micrograms per liter. Demonstrating comparability of prelaboratory techniques when collecting biological and other samples for analyses requires most of the same considerations as laboratory methods. Integral components of the sample-collection process include written procedures, training, documentation that defines conditions under which the techniques are equivalent, and validation data from field tests of the techniques involved. Because of the vast number of different conditions under which samples must be collected and the comparably large number of natural and contaminated water, sediment, and habitat matrices, it is probable that many prelaboratory techniques will need to be used side by side in the matrix to be measured to establish comparability. Side-by-side comparisons are costly, especially in the field. To limit duplication of on-site comparisons, it is recommended by the ITFM that Federal and cooperating agencies consider maintaining a common on-line computerized listing of comparable prelaboratory methods and associated validation data. One approach to developing a list of comparable field methods is to start with documented side-by-side comparisons of comparable techniques. The initial list can be supplemented by any agency that acquires comparison data on additional procedures.
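One way such a side-by-side comparison might be evaluated is sketched below: results from two prelaboratory techniques applied to split samples are compared with a paired t statistic, and the techniques are treated as comparable when no statistically significant difference is detected. This is an illustrative sketch only, not an ITFM procedure; the sampler data, the analyte, and the fixed critical value are hypothetical assumptions.

```python
from statistics import mean, stdev

def paired_comparability(results_a, results_b, t_crit=2.262):
    """Paired t-test on split samples collected side by side with two
    prelaboratory techniques.  The default t_crit is the two-sided 95%
    critical value for 9 degrees of freedom (10 paired samples); it
    must be adjusted for other sample counts.  Returns True when no
    statistically significant difference is detected."""
    diffs = [a - b for a, b in zip(results_a, results_b)]
    n = len(diffs)
    t = mean(diffs) / (stdev(diffs) / n ** 0.5)
    return abs(t) <= t_crit

# Hypothetical dissolved-copper results (micrograms per liter) from
# two sampling techniques applied to the same ten split samples.
a = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0, 3.9, 4.2]
b = [4.0, 4.0, 4.2, 3.9, 4.3, 3.9, 4.0, 4.1, 3.8, 4.1]
print(paired_comparability(a, b))
```

In practice, a formal equivalence test that bounds the allowable difference is often preferable to a simple significance test, because failing to detect a difference is not, by itself, proof of equivalence.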

The documentation of a prelaboratory technique would include the same elements that are typically included in the description of a laboratory method; results of sample analysis would demonstrate that in the matrix of interest and for the type of sample (for example, flow-weighted surface water, ground water, and so forth), the sample is not contaminated or reduced in analyte concentration within a specified limit or is representative of the biological community being evaluated. There also would be a qualitative estimate of the skill level required to perform the technique. For those techniques that require greater skill, more-frequent quality-control samples would be recommended.

In summary, prelaboratory methods become more important as the number of secondary-data users increases, as data from varying habitats are compared, and as method detection levels decrease. The incorporation of an index of equivalent prelaboratory methods into a national electronic data base, which would include access to QA information that demonstrates the applicability of each method, is recommended. Also proposed is the maintenance of a list of accessible sites for prelaboratory methods verification, such as springs, wells, and large lakes at which constant, known concentrations of analytes or aquatic and semiaquatic communities exist. Such sites would be utilized by two or more water-quality-data-collecting entities to evaluate the comparability of prelaboratory data-collection techniques.

Laboratory Practices

There are two general ways of approaching the acquisition of comparable chemical, biological, and physical data. One way is for everyone to use identical analytical procedures. This is the current practice within many of the national water-quality-monitoring programs. Under this rigidly prescriptive approach, laboratories and data-collecting entities must maintain competence in a large number of prescribed methods (one for each of the monitoring programs) that, in some cases (more frequently for chemical analyses), produce almost identical data. Unfortunately, when these data are stored in a multiuser data base, the original data-quality objectives and data characteristics usually are lost. Such an approach is neither practical nor cost effective.

The alternative approach is to specify the data-quality requirements for a program and to permit the data-collecting entity or laboratory to select the method that best meets its specifications. This is called a performance-based methods system (PBMS). A PBMS is defined as a system that permits the use of any appropriate sampling and analytical measurement method that demonstrates the ability to meet established performance criteria and complies with specified data-quality objectives. The Task Group has recommended the use of PBMS as a mechanism to assure data comparability. Performance criteria, such as precision, bias, sensitivity, specificity, and detection limit, must be designated, and a sample collection or sample-analysis method-validation process, documented. The implementation of a PBMS with corresponding required data qualifiers entered into a multiuser data base will allow divergent data from numerous environmental programs to be used for many purposes. Eventually, a PBMS should apply to all measurement systems. However, initial application is proposed only for chemical and physical laboratory methods. Implementing a PBMS will be a principal activity of the Methods and Data Comparability Board.
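In practice, a PBMS reduces to comparing a method's demonstrated performance against a program's data-quality requirements. The sketch below illustrates that comparison; the criteria names, threshold values, and trace-metal figures are hypothetical assumptions for illustration, not values drawn from the ITFM recommendations.

```python
def meets_pbms_criteria(demonstrated, required):
    """Compare a method's demonstrated performance against required
    performance criteria; return (acceptable, list_of_failures)."""
    failures = []
    # Precision: relative standard deviation must not exceed the limit.
    if demonstrated["rsd_percent"] > required["max_rsd_percent"]:
        failures.append("precision")
    # Bias: spike recovery must fall within the acceptance window.
    low, high = required["recovery_percent_range"]
    if not (low <= demonstrated["recovery_percent"] <= high):
        failures.append("bias")
    # Sensitivity: detection limit must be low enough for the program.
    if demonstrated["detection_limit"] > required["max_detection_limit"]:
        failures.append("detection limit")
    return (not failures, failures)

# Hypothetical criteria for a trace-metal analysis (micrograms per liter)
required = {
    "max_rsd_percent": 10.0,
    "recovery_percent_range": (85.0, 115.0),
    "max_detection_limit": 0.5,
}
demonstrated = {"rsd_percent": 6.2, "recovery_percent": 97.0,
                "detection_limit": 0.3}
print(meets_pbms_criteria(demonstrated, required))  # (True, [])
```

Under a PBMS, any method passing such a check could be accepted regardless of which laboratory or technique produced it, which is what allows data from divergent programs to be merged.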

For a PBMS to work, the following basic concepts must be defined and targeted:

Similar checklists (or procedures) need to be developed to address the unique features of prelaboratory methods and eventually biological and other systems, which are equally important.

Defining the performance criteria of a method to meet data-quality objectives is the first step in initiating a PBMS. Statistically based quality-control criteria for replicate measurements and calibrations should be established as a measure of required precision. Bias limits are determined by analyzing spiked samples, standard reference materials, and performance-evaluation samples. Method detection limits over a significant period of time are required to determine the application of a method to monitoring needs or regulatory requirements. The performance range of a method also should be determined. The method must not generate background or interferences that will give false qualitative or quantitative information. If a method is considered to be applicable for multimedia, then documented evidence should be available to support this use. The Task Group strongly recommends using methods that have been published in peer-reviewed or equivalent literature and that meet or exceed the performance criteria of reference methods for the analytes of interest. (Many of these principles, approaches, and needs are equally applicable regardless of their use in generating chemical, physical, or biological data.)
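The performance measures described above can be computed from routine quality-control data. The sketch below shows common formulations: relative standard deviation of replicates for precision, percent recovery of a spike for bias, and a detection limit estimated from replicate low-level analyses. The seven-replicate data, the analyte, and the Student's t value are illustrative assumptions, not ITFM specifications.

```python
from statistics import mean, stdev

# One-sided 99% Student's t for 6 degrees of freedom, as used in the
# common seven-replicate detection-limit procedure (an assumption
# here; programs may specify other replicate counts and t values).
T99_6DF = 3.143

def precision_rsd(replicates):
    """Relative standard deviation (percent) of replicate measurements."""
    return 100.0 * stdev(replicates) / mean(replicates)

def bias_recovery(measured, background, spike):
    """Percent recovery of a known spike added above the background."""
    return 100.0 * (measured - background) / spike

def detection_limit(low_level_replicates, t=T99_6DF):
    """Detection limit estimated as t times the standard deviation of
    replicate analyses of a low-concentration sample."""
    return t * stdev(low_level_replicates)

# Hypothetical nitrate results in milligrams per liter
reps = [2.02, 1.98, 2.05, 1.97, 2.01, 2.00, 2.03]
print(round(precision_rsd(reps), 2))           # percent RSD
print(round(bias_recovery(4.9, 2.0, 3.0), 1))  # percent spike recovery
print(round(detection_limit(reps), 3))         # estimated detection limit
```

Tracking these statistics over a significant period, rather than from a single batch, is what allows the detection limit and performance range of a method to be stated with confidence.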

Achieving these goals in all media requires training, the availability of matrix-specific performance-evaluation materials, the implementation of a laboratory-accreditation process, and the systematic audit of activities. The current stock of standard chemical and biological reference materials and performance evaluation samples is limited or, in some cases, nonexistent and needs to be developed or expanded to cover a wider range of constituents and media.

The training requirements to execute a PBMS and to reach some level of national comparability are extensive because of the diversity of water-quality-monitoring programs and data requirements. A "National Curriculum" needs to be established and should include formal and informal components.

The Task Group recognizes the need for laboratory accreditation with periodic review of activities as an important element in the PBMS. The concept of a national accreditation program was recently approved by a Federal interagency committee, the Committee on National Accreditation of Environmental Laboratories, and discussed at the National Environmental Laboratory Accreditation Conference in February 1995. Factors to be included in such a program should be based on International Organization for Standardization (ISO) Guide 25 and address organization and management; quality-system audit and review; personnel; physical accommodations and work environment; equipment, reference materials, and reference collections; measurement traceability and calibration; calibration and test methods; handling of calibration and test samples; records; certificates and reports; subcontracting of calibration and testing; outside support and supplies; and complaints.

The programmatic elements and resources required to achieve data comparability by using a PBMS are presented in table 1. In this presentation, sampling and laboratory-related processes are included, as are physical, chemical, and biological disciplines.

Table 1. Resources needed to support a plan for achieving data comparability

In summary, implementation of a PBMS will be consistent with the production of data of known quality based on scientific procedures and judgments rather than on methods and procedures that have been mandated by regulatory programs. A PBMS will provide the incentive to develop innovative and better methods that are cost effective. This will allow greater flexibility by the water-quality-monitoring community that is consistent with total quality management.

Data Qualifiers

The Task Group has recommended a minimum set of water-quality-data qualifiers that must reside with the sampling and analytical information. These data qualifiers should be reevaluated and updated as the standardization of information continues. They are as follows:

Methods and Data Comparability Board


The Data Collection Methods Task Group has recommended that the Interagency Advisory Committee on Water Data (IACWD), under which the ITFM functions, establish a Methods and Data Comparability Board (MDCB). With the concurrence of participating agencies, the MDCB will coordinate those water-quality-monitoring protocols, methods, and practices being carried out by government agencies to improve the efficiency and effectiveness of these efforts and to improve the comparability of the resulting data. It also will reconcile inconsistencies among agencies in current practices and encourage governmentwide coordination to conduct the most economical and scientifically defensible approach to water-quality monitoring.

The MDCB will provide a framework and a forum for common approaches to data collection in all appropriate water-quality-monitoring programs. Action will be taken to improve the scientific validity of water-quality data; to establish common approaches to collecting water-quality-monitoring information; to provide a forum for advancing state-of-the-art technology in water-quality methods and practices; to assist all levels of government in carrying out monitoring in a coordinated, mutually enforceable manner; and to recommend initiatives that lead to data comparability between agencies.

To accomplish its objectives, the Board will establish priorities and function in the following areas.

Organizational Framework

From an organizational perspective, the following principles will be operative:

Quality Assurance and Methods Comparability

The MDCB will actively develop interagency approaches to ensure data comparability. Important specific activities will include establishing minimum data-quality criteria, conducting intercomparison exercises (testing comparability of methods), using performance and reference samples, validating methods, characterizing reference sites, and problem solving among agencies.


To assure data quality and comparability, the MDCB will investigate accreditation of laboratories and certification of employees.

Guides and Training

A critical aspect of assuring data quality and comparability is the availability of suitable training materials and guides. The MDCB will investigate publishing criteria for validating a method; publishing guides, materials, and standards; issuing specifications for operating under a PBMS; and issuing training curricula to meet the needs of users.


The authors wish to thank the members of the Task Group for their efforts in developing the recommendations set forth in this paper. In particular, Wayne Webb of the U.S. Geological Survey (USGS) evaluated the significance of prelaboratory practices on measurement data; Ann Strong of the U.S. Army Corps of Engineers developed the position on performance-based methods; Orterio Villa of the U.S. Environmental Protection Agency (USEPA) suggested that an interagency board be formed to assure data comparability in water-quality-monitoring programs; Tom Bainbridge of Wisconsin drafted the Terms of Reference (Appendix H) for the MDCB; and Bernard Malo of the USGS was instrumental in assembling the positions presented in this paper. Herb Brass of the USEPA and Russell Sherer of the South Carolina Department of Health and Environmental Control cochaired the Task Group. This technical appendix is based on a paper presented at the Water Environment Federation Analytical Specialty Conference in Santa Clara, California, August 8-11, 1993.
