- Open Access
Quality assurance and best research practices for non-regulated veterinary clinical studies
© The Author(s). 2017
- Received: 20 June 2017
- Accepted: 2 August 2017
- Published: 16 August 2017
Veterinary clinical trials generate data that advance the transfer of knowledge from clinical research to clinical practice in human and veterinary settings. The translational success of non-regulated and regulated veterinary clinical studies depends upon the reliability and reproducibility of the data generated. Clinician-scientists who conduct veterinary clinical studies would benefit from a commitment to research quality assurance and best practices throughout all non-regulated and regulated research environments. Good Clinical Practice (GCP) guidance documents from the FDA provide principles and procedures designed to safeguard data integrity, reliability and reproducibility. While these documents may be excessive for clinical studies not intended for regulatory oversight, it is important to remember that research builds on research. Thus, the quality and accuracy of all data and inference generated throughout the research enterprise remain vulnerable to the impact of potentially unreliable data generated by the lowest-performing contributors. The purpose of this first of a series of statement papers is to outline and reference specific quality control and quality assurance procedures that should, at least in part, be incorporated into all veterinary clinical studies.
Veterinary clinical studies are designed to determine whether a medical intervention (e.g. device, treatment, approach) is safe and effective when used in client-owned animals. While such studies are typically performed in veterinary patients with spontaneous disease (as opposed to experimentally induced models), occasionally studies are performed in healthy client-owned dogs (e.g. for disease prevention). Recently, in addition to benefiting animal health, an increasing number of clinical trials in veterinary patients are being undertaken to evaluate a novel therapeutic or device prior to initiation of human clinical trials (known as comparative and/or translational trials). Such studies are based on the premise that spontaneous disease in veterinary patients more closely recapitulates similar human diseases and that, therefore, data generated from these trials will be more informative than data generated using induced models of disease. Ultimately, the overarching goal of these efforts is to generate data that advance the transfer of knowledge from clinical research to clinical practice in human and veterinary settings. Veterinary clinical trials are also an important component of the ‘one health’ research continuum, where complex health interactions among animals, humans and the environment are recognized as key targets for affecting and advancing global health.
Non-regulated and regulated veterinary clinical studies lead to medical advances that improve the health of animals, people and the environment; however, the success of translational and comparative medicine depends on the ability of scientists to ensure that the data generated are reliable and reproducible. Research builds upon research; therefore, when expectations related to data quality and study conduct are not standardized, the quality and accuracy of all data and inference generated throughout the research enterprise remain vulnerable to the impact of potentially unreliable data generated by the lowest-performing contributors.
To mitigate this risk, clinician-scientists who conduct non-regulated and regulated veterinary clinical studies would benefit from a commitment to research quality assurance and best practices throughout all non-regulated and regulated research environments.
The Food and Drug Administration (FDA) provides internationally harmonized guidance on the ‘design and conduct of all clinical studies of veterinary products in the target species’ (FDA Guidance for Industry: Good Clinical Practice (GCP), VICH GL9, No. 85). This guidance document is intended to ensure that veterinary clinical studies incorporate the recommended best practices that support the integrity of the study data and the protection of people, animals, the food supply and the environment.
The GCP guidelines established for veterinary clinical studies are not the same as the GCP guidelines for clinical trials performed in humans [4, 5]. Nevertheless, both standards include quality assurance (QA) principles and procedures designed to safeguard data integrity, reliability and reproducibility. Adherence to the respective GCP standard is mandated for research conducted in animals or humans when the clinical data are intended to be submitted to regulatory authorities (e.g. the FDA) for approval. These GCP guidelines provide clear expectations, a consistent approach for conducting ethical, responsible and reliable scientific research, and an established route to compliance. Adherence to them reinforces appropriate research behaviors, supports research quality, and facilitates the development and sustainability of research environments where research and data management best practices are routine. Other federal (and state) regulations and guidelines may also be applicable to veterinary, human and translational science, including Good Manufacturing Practice (GMP), Good Laboratory Practice (GLP) and others within the Code of Federal Regulations (21 CFR).
Many foundational (basic and discovery), translational, and one-health research investigations are conducted in non-regulated research environments because the studies are exploratory and the data are not intended to be submitted to regulatory authorities. As a result, foundational research is often conducted in an ad hoc manner that is highly dependent on individual investigators and is rarely subject to external monitoring before entering the publication review process. This inconsistent approach is problematic because research in human and veterinary medicine is not unidirectional. All data along the research spectrum (non-regulated through regulated, discovery through translational) have the potential to advance, divert, or limit scientific progress at all stages. If the quality of the data generated in foundational studies is questionable, and if these data remain unchallenged and uncorrected, progress at all stages will be impeded, sometimes for years to come.
All research data streams (and their associated research inference and outcomes) are subject to the traditional gatekeepers of scientific quality such as research mentoring, peer review, and the self-correcting nature of science. However, the scientific community, and the public it serves, are increasingly being presented with troubling indications that these gatekeepers are failing to ensure that research data are reliable, reproducible and lead to improvements in health outcomes [7–12]. These and other reports illustrate the frequency and high cost of irreproducible research, questionable research practices and research waste [13–17]. As a result, funding, publishing and quality assurance organizations are exploring ways to ensure the quality of the data they fund, publish and support. The National Institutes of Health (NIH) has expanded guidelines to enhance rigor and reproducibility (https://www.nih.gov/research-training/rigor-reproducibility) in the scientific research it funds [15, 18]. Scientific journals have established specific policies designed to encourage the submission of research reports that are reproducible, robust, and transparent, and the American Society for Quality (ASQ) has recommended the establishment of a national quality standard for biomedical research in drug development.
Funding, publishing and QA associations all have an obvious incentive to engage in efforts to improve scientific research outcomes. However, individual scientists are directly responsible for the generation, quality, integrity and security of experimental data, in addition to the ongoing mentoring of research trainees. Their perspective and participation are required to develop effective strategies that will advance translational medicine and improve research outcomes. Individual scientists must be accountable for the quality of their work and the integrity of their data by providing credible assurances that data are robust, reliable and transparent, so that their contributions effectively support the entire research enterprise.
Scientists conducting veterinary clinical trials frequently work in non-regulated research settings where flexibility, innovation and creativity are highly valued because these characteristics facilitate learning, self-correction, redirection, and serendipity. In comparison, veterinary clinical trials conducted within regulated research programs are partially constrained to meet regulatory requirements established to ensure patient safety and maintain data integrity. In spite of these differences, the scientist-driven development of a common approach to basic data quality that spans the non-regulated and regulated veterinary clinical trial spectrum would be an effective strategy for demonstrating data quality and enhancing research reliability.
Data quality across the non-regulated to regulated research spectrum could be maintained through the establishment of a national quality standard with requirements that should (if voluntary) or must (if regulatory) be met. The American Society for Quality (ASQ) advocates for the establishment, implementation, and maintenance of a quality management system for biomedical research in drug development that is based on international quality assurance (QA) standards. Others also recommend adapting existing International Organization for Standardization (ISO) standards for research laboratories. While this approach sets a justifiably high bar for research conduct, it may be difficult for individual research scientists to meet the requirements while working within typical non-regulated research environments, where QA support and expertise are historically rare.
Table 1 Resources for integrating quality assurance best practices into non-regulated research

- TDR Handbook: Quality Practices in Basic Biomedical Research (QPBR): ‘Provide institutions and researchers with the necessary tools for the implementation and monitoring of quality practices in their research, thus promoting the credibility and acceptability of their work. The handbook highlights non-regulatory practices that can be easily institutionalized with very little extra expense.’
- RQA: Quality in Research Guidelines for Working in Non-regulated Research: ‘to facilitate the stepwise and straightforward development of a value-adding Quality System into any research institute.’
- Michelson Prize & Grants Research Quality Assurance Toolkit: A basic QA toolkit designed to facilitate best practices in research and data management. Tools and templates that facilitate the development of effective records (personnel, equipment, methods, supplies/reagents, and data) throughout the data life cycle are provided.
- RQA Quality Systems Workbook: ‘to provide tools and a practical approach to develop a Quality System that works for the user.’
- ASQ TRI-2012: Best Quality Practices for Biomedical Research in Drug Development: ‘This technical report identifies important quality management system elements for non-regulated biomedical research in drug development in order to ensure credibility of biomedical research results.’
- Quality assurance mechanisms for the unregulated research environment.
- Quality: an old solution to new discovery dilemmas?
- Implementation of Good Research Practices for the early phase, non-regulated drug discovery research environment.
- A novel audit model for assessing quality in non-regulated research.
Unfortunately, these sound principles and practices have not been adopted in most academic environments because research trainees (and the faculty who mentor them) are not specifically educated in QA principles and procedures. QA support is rare within most non-regulated research settings, and funding agencies do not typically require adherence to QA best practices. In spite of these challenges, scientists should consider implementing research QA as a timely and reasonable strategy for demonstrating the quality of their data and the credibility of their research. In addition, there may be a competitive funding advantage for those who adopt and demonstrate research QA best practices.
The resources listed in Table 1 were developed to maintain research flexibility and support (and monitor) data quality and reconstruction. They encourage basic research scientists to integrate QA activities and commit to good documentation practices within their research environment in order to ensure the consistent management (policies, processes and procedures) of personnel, equipment, methods (procedures), reagents, supplies, and data. Some resources (References [25, 27]) include downloadable and adaptable documentation tools and templates for use within specific research settings.
An individual, a team (for a specific research project), a program, a college or a private hospital could use the resources identified in Table 1 to design a quality management system or to integrate specific quality assurance best practices. Research mentors who wish to establish a full research Quality Management System (QMS) could do so by using these resources to design a sustainable integration and implementation approach. Alternatively, clinician-scientists could begin by selecting specific best practices that would directly mitigate potential risks within their research projects. In a multi-institutional clinical study, for instance, investigators may choose to create standard operating procedures (SOPs) for routine tasks to ensure that all scientists at all sites are performing critical procedures consistently. For a routine procedure such as blood sampling, the SOP might cover documentation of patient data, when the blood was collected, who collected it, and uniform procedures for storage and shipment to a central laboratory. Others may choose to establish a consistent approach to laboratory notebooks and data management so that data are secure, archived, retrievable and do not change over time. Investigators who depend upon electronic data capture may choose to focus on data security, verification and accurate transfer across users and networks.

Equipment management is critical for the generation of reliable data in all research settings; therefore, a reasonable first step for improving data reliability would be to develop consistent procedures for maintaining and managing critical research equipment. Uncertain reagent quality and characterization is a known confounder contributing to research irreproducibility. Therefore, scientists using critical reagents should implement good documentation practices for the receipt, characterization and use of research reagents and supplies.
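A commitment to unchanging, verifiable archived data can be supported with very lightweight tooling. The sketch below is illustrative only (file names and directory layout are hypothetical): it records a SHA-256 checksum for every file in an archived data directory, and can later report any file that is missing or has changed.

```python
import hashlib
from pathlib import Path


def sha256sum(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir: Path) -> dict:
    """Record a checksum for every file under an archived data directory."""
    return {
        str(p.relative_to(data_dir)): sha256sum(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }


def verify_manifest(data_dir: Path, manifest: dict) -> list:
    """Return the names of archived files that are missing or have changed."""
    problems = []
    for name, expected in manifest.items():
        p = data_dir / name
        if not p.is_file() or sha256sum(p) != expected:
            problems.append(name)
    return problems
```

The manifest itself can be stored (e.g. as a JSON file) alongside the archive and re-verified on a schedule, providing a simple, documented check that electronic records have not changed over time.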
Using this voluntary and risk-based approach, a research QA implementation plan will be scientist-driven and sustainable over time. Investigators that describe the research QA measures they have adopted will be providing credible evidence within their manuscripts, grant applications, and study reports that their data have been generated and maintained under conditions that support data rigor and reliability. In addition, the integration of these practices will create a laboratory culture where research QA is routine, providing important opportunities for expanding QA expertise among current and future research scientists.
If possible, clinical investigators should consider integrating a research QMS using the resources listed in Table 1. In addition, clinical investigators should explore opportunities to seek help and advice from the QA professionals who support regulated research within their institutions.
Research Documentation Checklist
Project Management: Ensure that research objectives, approach, timeline and budget are planned, communicated and understood.
1. Project plan (roles and responsibilities, objectives, timeline)
2. Research review plan
3. Research publication plan
Personnel Records: Ensure that research records can be traced to competent and appropriate personnel
1. Job descriptions, resumes or CVs
2. Signature & initials identification log
3. Training and ongoing competency (procedures, policies, methods, equipment) records
Critical Equipment Records: Ensure that research records can be traced to well managed and fully operational equipment
1. Equipment inventory log (unique identification)
2. Equipment use, maintenance, verification and calibration records
3. Standard Operating Procedures (SOPs) for use, care and management of equipment
4. Computer system records confirming that systems used to capture, process, generate and report data are secure, working as expected and fit for their intended purpose
Method/Procedure Records: Ensure that research data can be traced to methods or procedures that are well described, working as expected, and fit for their intended purpose
1. SOPs for routine research methods
2. Method validation records
3. On-going quality control records
Standard Operating Procedures: Ensure that procedures are performed consistently, revised as needed and maintained as historical records
1. Routine procedures: research methods, equipment use, personnel training, data and research management (lab notebooks, research review, reagents and supplies, data (paper and electronic) collection, use and security)
2. Document management (creation, revision, archiving)
3. SOP linkages to associated recording forms
Research Records (paper/electronic): Ensure that research data and work (who, what, where, when, how) can be fully reconstructed
1. Reagent inventory, reagent characterization, verification and preparation records (receipt, verification, storage, expiration and disposition), supply records
2. Facilities data (temperature, water/air quality, emergency preparedness) if quality critical
3. Unique identification records for research subjects and samples.
4. Sample handling and storage procedures
5. Reconstructable records (accurate, legible, contemporaneous, original, attributable, unchanging, readily retrievable, secure)
6. Error management procedures (detecting, recording, managing errors, outliers and non-conforming data)
Once scientists have successfully integrated a QMS or a targeted quality commitment, they must monitor their program to ensure compliance and to capture opportunities for continuous improvement within their research management program. Approaches to monitoring quality activities may include self-, team- or peer-assessment, the establishment of QA metrics, or the integration of other types of research audit exercises. Finally, scientists should communicate their approach to research quality within their grant proposals and research reports so that other scientists can evaluate the rigor of their research and data management program.
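As a simple illustration of a QA metric, record completeness can be computed automatically. The sketch below assumes records are held as dictionaries (e.g. rows read from a CSV); the required field names are hypothetical examples, not a prescribed standard.

```python
# Hypothetical required fields for a blood-sampling record.
REQUIRED_FIELDS = ["animal_id", "collection_date", "collected_by", "sample_type"]


def completeness(records: list, required=REQUIRED_FIELDS) -> float:
    """Fraction of records in which every required field is present and non-empty."""
    if not records:
        return 1.0
    complete = sum(
        1 for r in records
        if all(str(r.get(f, "")).strip() for f in required)
    )
    return complete / len(records)


def flag_incomplete(records: list, required=REQUIRED_FIELDS) -> list:
    """Return (index, missing_fields) pairs for records that fail the check."""
    flagged = []
    for i, r in enumerate(records):
        missing = [f for f in required if not str(r.get(f, "")).strip()]
        if missing:
            flagged.append((i, missing))
    return flagged
```

Tracking a metric like this over time, and reviewing the flagged records during self- or peer-assessment, gives the team a concrete, documented target for corrective action.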
Basic outline that can be used to build a clinical study
1. Study Personnel
4. Study Design
a. Study Type
b. Study Overview
c. Treatment Groups
d. Randomization Procedures
e. Blinding Procedures
5. Intervention and Placebo Details
6. Population Studied
a. Institutional Protocol Review
b. Informed Owner Consent
c. Animal Identification
d. Inclusion Criteria
e. Exclusion Criteria
f. Removal and Rescue Criteria
7. Outcome Measures
a. Veterinary Outcome Measures
b. Owner Outcome Measures
c. Patient Biologic Measures
8. Adverse Events
9. Statistical Analysis
10. Data Collection, Security and Independent Review
11. Protocol Deviations and Changes
Translational or comparative science on behalf of animal, human and environmental health is not linear or unidirectional. Outcomes along the entire research spectrum require constant scrutiny as new data inform ongoing work in regulated and non-regulated research efforts. This mutual dependency means that scientists must commit to a shared vision of research rigor and research conduct to minimize the risk of inconsistent data quality and irreproducible research outcomes. A voluntary commitment to research best practices designed to support data integrity and reliability provides scientists with a roadmap for conducting quality research, and the opportunity to support the work of their peers by ensuring a consistent and reliable data stream throughout the research continuum. Clinician-scientists have the most to gain by voluntarily establishing the characteristics of this data stream so that they can define sustainable approaches, demonstrate the quality of their research and warrant continued funding for the important work they do.
This work was supported by the CTSA One Health Alliance.
CL, BL and MC conceived the design of the manuscript. RD drafted the manuscript. All authors read and approved the final manuscript.
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
- Rubio DM, et al. Defining translational research: implications for training. Acad Med. 2010;85:470–5.
- Travis DA, et al. One medicine one science: a framework for exploring challenges at the intersection of animals, humans, and the environment. Ann N Y Acad Sci. 2014;1334:26–44.
- Food and Drug Administration. Guidance for Industry: Good Clinical Practice, VICH GL9. 2000. Available at: http://www.fda.gov/downloads/AnimalVeterinary/GuidanceComplianceEnforcement/GuidanceforIndustry/ucm052417.pdf (Accessed 13 July 2016).
- Clinical Trials and Human Subject Protection - Regulations. Available at: http://www.fda.gov/ScienceResearch/SpecialTopics/RunningClinicalTrials/ucm155713.htm#FDARegulations (Accessed 13 July 2016).
- Guidance for Industry: E6 Good Clinical Practice: Consolidated Guidance. 1996. Available at: https://www.fda.gov/downloads/Drugs/.../Guidances/ucm073122.pdf.
- CFR - Code of Federal Regulations Title 21. Available at: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/cfrsearch.cfm (Accessed 13 July 2016).
- Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712.
- Gratzer W. Trouble at the lab. Econ. 2013;302:774–5.
- Bohannon J. Who’s afraid of peer review? Science. 2013;342:60–5.
- Alberts B, et al. Self-correction in science at work. Science. 2015;348:1420–2.
- Begley CG, Ioannidis JPA. Reproducibility in science: improving the standard for basic and preclinical research. Circ Res. 2015;116:116–26.
- Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435:737–8.
- Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13:e1002165.
- Baker M. How quality control could save your science. Nature. 2016;529:456–8.
- Begley CG, Buchan AM, Dirnagl U. Robust research: institutions must do their part for reproducibility. Nature. 2015;525:25–7.
- John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012;23:524–32.
- Steneck NH. Fostering professionalism and integrity in research. https://www.stthomas.edu/media/hollorancenter/pdf/Steneck.pdf (Accessed 20 Sept 2016).
- Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505:612–3.
- McNutt M. Journals unite for reproducibility. Science. 2014;346:679.
- Trotter AM, Calabrese R, Palm U, Krumenaker A. ASQ TRI-2012: best quality practices for biomedical research in drug development. ASQ. 2012. http://asqprinceton.org/wordpress/archives/1300# (Accessed 20 Sept 2016).
- Robins MM, Scarll SJ, Key PE. Quality assurance in research laboratories. Accred Qual Assur. 2006;11:214–23.
- Volsen SG, Kent JM, Masson M. Quality: an old solution to new discovery dilemmas? Drug Discov Today. 2004;9:903–5.
- Riedl DH, Dunn MK. Quality assurance mechanisms for the unregulated research environment. Trends Biotechnol. 2013;31:552–4.
- Vermaercke P. Sense and nonsense of quality assurance in an R&D environment. Accred Qual Assur. 2000;5:11–5.
- TDR Handbook: Quality Practices in Basic Biomedical Research (QPBR). Special Programme for Research and Training in Tropical Diseases (TDR). WHO; 2006. http://www.who.int/tdr/publications/training-guideline-publications/handbook-quality-practices-biomedical-research/en/ (Accessed 20 Sept 2016).
- Quality in Research: Guidelines for Working in Non-regulated Research. RQA; 2014. http://www.therqa.com/publications/booklets/quality-research/ (Accessed 20 Sept 2016).
- Michelson Prize & Grants Quality Assurance Toolkit. Available at: http://www.michelsonprizeandgrants.org/resources/qa-toolkit (Accessed 20 Sept 2016).
- Vasilevsky NA, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148.
- CONSORT: Transparent Reporting of Trials. http://www.consort-statement.org/.
- NC3Rs: National Centre for the Replacement, Refinement and Reduction of Animals in Research, ARRIVE guidelines. https://www.nc3rs.org.uk/arrive-guidelines.
- Veterinary Record author guidelines: http://veterinaryrecord.bmj.com/pages/authors/ and the EQUATOR Network: http://www.equator-network.org.
- Nuzzo R. Statistical errors. Nature. 2014;506:150–2.
- Hutton JL. Number needed to treat and number needed to harm are not the best way to report and assess the results of randomized clinical trials. Br J Haematol. 2009;146:27–30.
- Quality Systems Workbook. Research Quality Association Ltd; 2013. http://www.therqa.com/publications/quality-systems-workbook/ (Accessed 20 Sept 2016).
- Volsen SG, Masson MM. A novel audit model for assessing quality in non-regulated research. Qual Assur J. 2009;12:57–63.