Each year, Federal and State government agencies spend in excess of $1 billion to monitor water quality. The programs are conducted to assess status and trends in water quality, to identify and rank existing and emerging problems, to design and implement resource-management programs, and to determine compliance with regulatory programs. Although the data collected by government agencies are useful to the individual organizations that sponsor the programs, future data users within the same organization and data users outside the collecting agency typically find it difficult to use existing data with confidence. The reasons for this situation are many:
A related problem for those providing data is that many regulatory and nonregulatory programs specify the methodology to be used in analyzing water samples. Although this requirement provides each monitoring program with a measure of internal comparability, there has been virtually no methodological consistency among programs. Therefore, data providers must respond to requests for different methods for determining the same constituent, often within the same measurement range. This program-specific approach to water-quality monitoring inappropriately and inefficiently increases the demands on limited resources while reducing the utility of the water-quality information that is available.
The Intergovernmental Task Force on Monitoring Water Quality (ITFM) has as its principal objective the development of an integrated, voluntary nationwide strategy for water-quality monitoring. Implicit in this strategy is the collection of comparable data of known quality. This appendix presents the approach of the ITFM Data Collection Methods Task Group (Task Group) to the collection of samples and the analysis of environmental data in a manner that produces comparable data and permits the merger of data from many sources into definable data sets that address the needs of the user community. In this appendix, sampling, sample handling, field and laboratory methods, and the data qualifiers that are used to describe these activities are considered, and an institutional framework to encourage evaluation and implementation of the component principles of data comparability is proposed. Data comparability is defined by the ITFM as the characteristics that allow information from many sources to be of definable or equivalent quality so that it can be used to address program objectives not necessarily related to those for which the data were collected.
Achieving data comparability and communicating the characteristics of the data that permit assessment of comparability (utility) by a secondary user are the key technical issues to be addressed. The issues involved in achieving data comparability to maximize data utilization are consistent with operating within a well-defined quality system. Methods and procedures need to be fully described, validated, and performed by competent practitioners, and performance needs to be evaluated against a reference. These requirements are equally applicable to field and laboratory data and to physical, chemical, and biological measures; however, the extent to which they can be applied varies significantly and is discussed in the following sections.
For many analytes, the equivalency of prelaboratory techniques must be demonstrated if the results reported by different groups are to be compared. There is no longer any question that the individual sample collector must continually demonstrate competence in prelaboratory techniques if the resulting analyses are to be internally reliable or comparable with the results of other groups. For chemical measurements, the analytes in question generally are constituents measured at concentrations of less than 10 micrograms per liter. Demonstrating the comparability of prelaboratory techniques when collecting biological and other samples for analysis requires most of the same considerations as laboratory methods do. Integral components of the sample-collection process include written procedures, training, documentation that defines the conditions under which the techniques are equivalent, and validation data from field tests of the techniques involved. Because of the vast number of conditions under which samples must be collected and the comparably large number of natural and contaminated water, sediment, and habitat matrices, many prelaboratory techniques probably will need to be used side by side in the matrix to be measured to establish comparability. Side-by-side comparisons are costly, especially in the field. To limit duplication of on-site comparisons, the ITFM recommends that Federal and cooperating agencies consider maintaining a common on-line computerized listing of comparable prelaboratory methods and associated validation data. One approach to developing such a list is to start with documented side-by-side comparisons of comparable techniques; the initial list then can be supplemented by any agency that acquires comparison data on additional procedures.
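As a rough illustration of the kind of evidence a side-by-side field comparison yields, a paired statistical test on split or co-located samples can screen for a systematic difference between two sample-collection techniques. The sketch below is an assumption about how such a screen might be computed, not an ITFM-specified procedure; the data, names, and acceptance threshold are hypothetical.

```python
import math
import statistics

def paired_t(results_a, results_b):
    """Paired t statistic for the same samples measured by two techniques."""
    diffs = [a - b for a, b in zip(results_a, results_b)]
    n = len(diffs)
    return statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))

# Hypothetical side-by-side results (micrograms per liter) from two techniques
technique_a = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9]
technique_b = [4.0, 3.9, 4.8, 4.5, 4.6, 3.8]

t = paired_t(technique_a, technique_b)
T_CRITICAL = 2.571  # two-tailed Student's t, 5 degrees of freedom, alpha = 0.05
print("comparable" if abs(t) < T_CRITICAL else "not comparable")
```

A program would normally supplement such a screen with blanks, spikes, and matrix-specific validation data before entering the techniques in a listing of comparable methods.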
The documentation of a prelaboratory technique would include the same elements that are typically included in the description of a laboratory method; results of sample analysis would demonstrate that, in the matrix of interest and for the type of sample (for example, flow-weighted surface water, ground water, and so forth), the sample is neither contaminated nor reduced in analyte concentration beyond a specified limit, or is representative of the biological community being evaluated. The documentation also would include a qualitative estimate of the skill level required to perform the technique; for techniques that require greater skill, more-frequent quality-control samples would be recommended.
In summary, prelaboratory methods become more important as the number of secondary data users increases, as data from varying habitats are compared, and as method detection levels decrease. The ITFM recommends constructing an index of equivalent prelaboratory methods in a national electronic data base that would include access to the quality-assurance information demonstrating the applicability of each method. Also proposed is the maintenance of a list of accessible sites for verifying prelaboratory methods, such as springs, wells, and large lakes at which constant, known concentrations of analytes or aquatic and semiaquatic communities exist. Two or more water-quality-data-collecting entities would use such sites to evaluate the comparability of prelaboratory data-collection techniques.
The alternative approach is to specify the data-quality requirements for a program and to permit the data-collecting entity or laboratory to select the method that best meets those specifications. This is called a performance-based methods system (PBMS). A PBMS is defined as a system that permits the use of any appropriate sampling and analytical measurement method that demonstrates the ability to meet established performance criteria and complies with specified data-quality objectives. The Task Group has recommended the use of a PBMS as a mechanism to ensure data comparability. Performance criteria, such as precision, bias, sensitivity, specificity, and detection limit, must be designated, and a validation process for sample-collection and sample-analysis methods must be documented. The implementation of a PBMS, with the corresponding required data qualifiers entered into a multiuser data base, will allow divergent data from numerous environmental programs to be used for many purposes. Eventually, a PBMS should apply to all measurement systems; however, initial application is proposed only for chemical and physical laboratory methods. Implementing a PBMS will be a principal activity of the Methods and Data Comparability Board.
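The core of a PBMS can be sketched as a simple acceptance check: the program states its data-quality objectives, and any method whose documented performance satisfies them is acceptable regardless of which technique it uses. The field names and numeric values below are illustrative assumptions, not prescribed criteria.

```python
# Minimal sketch of a PBMS acceptance check (illustrative field names).

def meets_pbms_criteria(performance, objectives):
    """Return True if a method's documented performance meets the program's
    data-quality objectives for precision, bias, and detection limit."""
    return (
        performance["precision_rsd_pct"] <= objectives["max_precision_rsd_pct"]
        and abs(performance["bias_pct"]) <= objectives["max_abs_bias_pct"]
        and performance["detection_limit_ug_l"] <= objectives["required_detection_limit_ug_l"]
    )

# A program's hypothetical data-quality objectives for one analyte
objectives = {
    "max_precision_rsd_pct": 15.0,
    "max_abs_bias_pct": 10.0,
    "required_detection_limit_ug_l": 1.0,
}

# Two candidate methods with documented validation results
method_x = {"precision_rsd_pct": 8.2, "bias_pct": -4.1, "detection_limit_ug_l": 0.5}
method_y = {"precision_rsd_pct": 9.0, "bias_pct": 3.3, "detection_limit_ug_l": 2.0}

print(meets_pbms_criteria(method_x, objectives))  # True: all objectives met
print(meets_pbms_criteria(method_y, objectives))  # False: detection limit too high
```

The design point is that the objectives, not the method identity, drive acceptance; two laboratories using different validated methods that both pass the check produce comparable data for the program's purposes.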
For a PBMS to work, the following basic concepts must be defined and targeted:
Similar checklists (or procedures) need to be developed to address the unique features of prelaboratory methods and eventually biological and other systems, which are equally important.
Defining the performance criteria of a method to meet data-quality objectives is the first step in initiating a PBMS. Statistically based quality-control criteria for replicate measurements and calibrations should be established as a measure of required precision. Bias limits are determined by analyzing spiked samples, standard reference materials, and performance-evaluation samples. Method detection limits, determined over a significant period of time, are required to establish whether a method is suitable for particular monitoring needs or regulatory requirements. The performance range of a method also should be determined. The method must not generate background or interferences that give false qualitative or quantitative information. If a method is considered to be applicable to multiple media, then documented evidence should be available to support this use. The Task Group strongly recommends using methods that have been published in peer-reviewed or equivalent literature and that meet or exceed the performance criteria of reference methods for the analytes of interest. (Many of these principles, approaches, and needs are equally applicable regardless of whether they are used in generating chemical, physical, or biological data.)
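The three criteria named above can be quantified with standard conventions: relative percent difference of duplicate pairs for precision, percent recovery of spiked samples for bias, and a detection limit computed from replicate low-level measurements (the EPA-style MDL uses the one-tailed 99-percent Student's t times the standard deviation of the replicates). The sketch below assumes those conventions; all data values are hypothetical.

```python
import statistics

# One-tailed 99-percent Student's t by replicate count (n - 1 degrees of freedom);
# 3.143 is the value for seven replicates used in EPA-style MDL studies.
T_99_ONE_TAILED = {7: 3.143}

def method_detection_limit(replicates):
    """MDL = t(n-1, 0.99) * standard deviation of low-level spiked replicates."""
    t = T_99_ONE_TAILED[len(replicates)]
    return t * statistics.stdev(replicates)

def percent_recovery(measured_spiked, measured_unspiked, spike_amount):
    """Bias estimate from a spiked sample: 100 * (spiked - unspiked) / spike."""
    return 100.0 * (measured_spiked - measured_unspiked) / spike_amount

def relative_percent_difference(x1, x2):
    """Precision estimate from a duplicate pair."""
    return 100.0 * abs(x1 - x2) / ((x1 + x2) / 2.0)

# Seven replicate analyses of a low-level spike (micrograms per liter)
reps = [2.1, 1.9, 2.3, 2.0, 2.2, 1.8, 2.1]
print(round(method_detection_limit(reps), 2))            # detection limit
print(round(percent_recovery(12.4, 2.1, 10.0), 1))       # bias check
print(round(relative_percent_difference(5.2, 4.8), 1))   # duplicate precision
```

Under a PBMS, results such as these, accumulated over time and across matrices, are the documented evidence against which a program's performance criteria are judged.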
Achieving these goals in all media requires training, the availability of matrix-specific performance-evaluation materials, the implementation of a laboratory-accreditation process, and the systematic audit of activities. The current stock of standard chemical and biological reference materials and performance evaluation samples is limited or, in some cases, nonexistent and needs to be developed or expanded to cover a wider range of constituents and media.
The training requirements to execute a PBMS and to reach some level of national comparability are extensive because of the diversity of water-quality-monitoring programs and data requirements. A "National Curriculum" needs to be established and should include formal and informal components.
The Task Group recognizes the need for laboratory accreditation, with periodic review of activities, as an important element of a PBMS. The concept of a national accreditation program was recently approved by a Federal interagency committee, the Committee on National Accreditation of Environmental Laboratories, and discussed at the National Environmental Laboratory Accreditation Conference in February 1995. Factors to be included in such a program should be based on International Organization for Standardization (ISO) Guide 25 and address organization and management; quality-system audit and review; personnel; physical accommodations and work environment; equipment, reference materials, and reference collections; measurement traceability and calibration; calibration and test methods; handling of calibration and test samples; records; certificates and reports; subcontracting of calibration and testing; outside support and supplies; and complaints.
The programmatic elements and resources required to achieve data comparability by using a PBMS are presented in table 1. In this presentation, sampling and laboratory-related processes are included, as are physical, chemical, and biological disciplines.
The Methods and Data Comparability Board (MDCB) will provide a framework and a forum for common approaches to data collection in all appropriate water-quality-monitoring programs. Action will be taken to improve the scientific validity of water-quality data; to establish common approaches to collecting water-quality-monitoring information; to provide a forum for advancing the state of the art in water-quality methods and practices; to assist all levels of government in carrying out monitoring in a coordinated, mutually reinforcing manner; and to recommend initiatives that lead to data comparability among agencies.
To accomplish its objectives, the Board will establish priorities and function in the following areas.