All research data streams (and their associated research inferences and outcomes) are subject to the traditional gatekeepers of scientific quality, such as research mentoring, peer review, and the self-correcting nature of science. However, the scientific community, and the public it serves, are increasingly being presented with troubling indications that these gatekeepers are failing to ensure that research data are reliable and reproducible and lead to improvements in health outcomes [7,8,9,10,11,12]. These and other reports illustrate the frequency and high cost of irreproducible research, questionable research practices, and research waste [13,14,15,16,17]. As a result, funding, publishing, and quality assurance organizations are exploring ways to ensure the quality of the data they fund, publish, and support. The National Institutes of Health (NIH) has expanded guidelines to enhance rigor and reproducibility (https://www.nih.gov/research-training/rigor-reproducibility) in the scientific research it funds [15, 18]. Scientific journals have established specific policies designed to encourage the submission of research reports that are reproducible, robust, and transparent, and the American Society for Quality (ASQ) has recommended the establishment of a national quality standard for biomedical research in drug development.
Funding, publishing, and quality assurance organizations all have an obvious incentive to engage in efforts to improve scientific research outcomes. However, individual scientists are directly responsible for the generation, quality, integrity, and security of experimental data, in addition to the ongoing mentoring of research trainees. Their perspective and participation are required to develop effective strategies that will advance translational medicine and improve research outcomes. Individual scientists must be accountable for the quality of their work and the integrity of their data by providing credible assurances that data are robust, reliable, and transparent, so that their contributions effectively support the entire research enterprise.
Scientists conducting veterinary clinical trials frequently work in non-regulated research settings where flexibility, innovation, and creativity are highly valued because these characteristics facilitate learning, self-correction, redirection, and serendipity. By comparison, veterinary clinical trials conducted within regulated research programs are partially constrained by regulatory requirements established to ensure patient safety and maintain data integrity. Despite these differences, the scientist-driven development of a common approach to basic data quality that spans the non-regulated and regulated veterinary clinical trial spectrum would be an effective strategy for demonstrating data quality and enhancing research reliability.