Successful chemical assessment programs allow the science to speak for itself

Everyone who is committed to producing better, more timely chemical assessments would agree that objectivity is one of the hallmarks of a sound assessment program. Objectivity is particularly important when it comes to evaluating the studies and results that will form the foundation of a chemical assessment.

To ensure a strong foundation, these studies and results must be judged on their merits through the application of consistent criteria regarding quality, relevance, and reliability. This approach for evaluating research is one of ACC’s core principles for improving federal chemical hazard and risk assessment programs, such as the U.S. Environmental Protection Agency’s (EPA) Integrated Risk Information System (IRIS).

A proposal was floated at a recent EPA IRIS stakeholder meeting that would move assessment programs away from this core principle. The scheme would brand a study as biased, and downgrade it, if industry scientists conducted the investigation or if the private sector funded it. ACC strongly opposes such an approach because it would be a major step backward for producing high-quality assessments.

It has been, and should remain, a fundamental principle of science that findings must be evaluated independent of who conducted or funded the study, and independent of the affiliation, gender, religion, or political beliefs of the investigators.

If a particular group funded a study, whether an industry organization, a public interest group, a foundation, or a granting agency, this may call for a close examination of the science to ensure the results are indeed reliable and credible. However, downgrading a study based solely on its funding source would inappropriately politicize science and would restrict the use of relevant, high-quality data without objective scientific justification.

All science is funded by some entity, and all sources of funding are potentially biasing. Questions can arise about the credibility of research by scientists funded by government agencies or non-profit organizations, not just the private sector. The best approach to evaluating quality, reliability, and credibility of a study needs to be transparent and applicable to all sources of funding and across all affiliations of investigators.

In fact, a recent George Mason University survey of risk assessment experts found that a great majority agree that the same criteria should be used to evaluate the quality and reliability of all studies, regardless of whether they originate in academia, government, industry, or contract laboratories.

The scientific community and regulatory agencies must continue to reject any notion that source of funding or affiliation of investigators should be grounds for downgrading studies and potentially limiting their use in chemical assessment. Quite simply, applying transparent and objective procedures to all studies will allow the science to speak for itself.

Consistent with our principles, ACC urges federal agencies to develop, define, and apply clear criteria that can be used consistently and objectively to judge the quality (including internal validity and external validity) and relevance of scientific data.

For Further Reading

For a complete discussion of these points, see the open access article published in Environmental Health Perspectives (EHP) by Barrow and Conrad entitled “Assessing the Reliability and Credibility of Industry Science and Scientists.”

Additionally, a more refined proposal for how competing interests should be handled is presented in Conrad and Becker’s 2011 EHP open access publication, “Enhancing Credibility of Chemical Safety Studies: Emerging Consensus on Key Assessment Criteria,” and correspondence related to this publication.
