The Commercialization of Clinical Trials: An Examination of Resulting Ethical Issues
Clinical trials aim to advance scientific understanding and to develop treatments for various diseases and illnesses. To understand how these conditions relate to the human body, researchers use human participants in experiments. The use of human participants creates a scenario where researchers must balance their scientific goals with the necessity to protect the rights and welfare of participants. Unfortunately, the history of scientific research is littered with tragic cases where scientists failed to protect subjects and instead exposed them to undue harm in the name of scientific research. These cases have led to several national and international codes of clinical research, as well as a system of regulation in the United States. Despite regulation, there are still cases where clinical research exists in an ethical gray area. In this paper, I will discuss controversial aspects of clinical research that relate to research design, publication of results, and participant recruitment. I propose that these issues must be considered in the context of the current state of clinical research, in which research has shifted away from academic centers to pharmaceutical companies and other commercial entities. The commercialization of clinical trials places a new demand on researchers, the demand for profit, and creates new ethical issues.
Regulation of Clinical Trials
In the United States, regulation of clinical trials began with the creation of the Yellow Fever Board by Walter Reed in 1900. (1) Reed created the board in response to the research of the Italian bacteriologist Giuseppe Sanarelli. Sanarelli claimed to have discovered the virus that caused yellow fever and purposefully infected five human participants in order to prove his point. However, Sanarelli did not notify his subjects of the risk involved in this experiment. Reed's Yellow Fever Board served to inform participants of the potential for harm in his own study of yellow fever. To do this, Reed created a written contract for participants that explained the potential risks of being involved in the study and offered financial reimbursement for involvement in the study. Reed's board set the precedent for components of regulation that exist today, particularly informed consent. (1) In all research involving human subjects, informed consent forms are given to potential participants before the start of the experiment. Informed consent forms notify participants of the risks and benefits involved in the study, similar to Reed's contract. They also communicate the overall research procedures and the reasons for the research.
Informed consent forms became widely accepted in the United States by World War II, and most research in that period included the signature of the participant. However, at this point there were no guidelines on what information should be included on informed consent forms and no rules mandating their use. (1) As a result of this lack of regulation, grossly unethical experiments using human subjects occurred in the United States in the mid-20th century. In particular, two scandals involving medical research were highly publicized: the Tuskegee Syphilis Study and the Willowbrook Hepatitis Study.
The goal of the Tuskegee Study was to increase knowledge of the long-term effects of syphilis. To study this, researchers recruited poor black sharecroppers with syphilis as participants. However, the researchers did not tell the participants they had syphilis and did not administer treatment even though effective treatment existed. Participants in the Tuskegee Study were not informed of the goals of the study or the risks involved; they did not give informed consent. As a result of the 40-year-long study, subjects were unjustly exposed to the long-term effects of syphilis, including death. In the Willowbrook Hepatitis Study, children in a state school for the mentally disabled were deliberately infected with hepatitis in order to test a new vaccine. Parents did give consent for their children to be involved in the study, but under the impression that involvement in the study was the only way to gain admission to the school. Researchers in the Willowbrook Study unfairly manipulated parents into providing consent on behalf of their children. In both studies, researchers risked the welfare of their participants in order to gain scientific understanding. In particular, researchers took advantage of vulnerable populations, minorities and children, in order to further their scientific interests.
In response to these cruel experiments, the United States created two measures to regulate clinical trials: The Belmont Report and Institutional Review Boards (IRBs). In 1979, the United States published The Belmont Report as a guideline for ethical research. The report focuses on issues that were relevant to the Tuskegee and Willowbrook studies, particularly creating a favorable risk-benefit ratio for participants and not taking advantage of vulnerable populations. The guidelines also emphasize values like respect for participants, beneficence, justice, and informed consent in all research involving human participants. (2) While The Belmont Report serves as a general guideline for researchers, IRBs were created as a system of legal regulation. IRBs mandate the independent review of a research proposal by a third party. IRBs are committees of scientists and nonscientists who are familiar with the ethical issues of research and have the expertise to determine whether research is ethical. IRBs function to approve, monitor, and review all research involving humans. Currently, all research involving human subjects must be approved by an IRB before the experiment can begin. Elliott (2008) points out that, like The Belmont Report, IRBs were primarily designed to review risk-benefit ratios and informed-consent forms. (3)
The general guidelines proposed in The Belmont Report and enforced by IRBs focus on informed consent, respect for enrolled subjects, favorable risk-benefit ratios, and independent review. Emanuel et al. (2000) argue that there are additional components to ethical research, including scientific validity and social or scientific value. (4) To reach this conclusion, the authors studied major codes and declarations similar to The Belmont Report. Emanuel et al. arrived at seven requirements for ethical research: (1) social or scientific value, (2) scientific validity, (3) fair subject selection, (4) favorable risk-benefit ratio, (5) independent review, (6) informed consent, and (7) respect for enrolled subjects. The implications and justifications of the requirements are summarized in Table 1.
Emanuel et al. propose that these requirements are universal and necessary in all clinical research, in order to guarantee that the rights and welfare of participants are respected. As a result, these requirements are useful for determining whether practices in clinical research are ethical or unethical. However, to fully understand current ethical issues one must consider the context of clinical research, in particular its commercialization.
(1) Social or scientific value: Research must enhance health or scientific knowledge in order to prevent the exploitation of subjects and the waste of resources.
(2) Scientific validity: Research must use rigorous methodology to produce reliable and valid data. This likewise prevents the exploitation of subjects and the waste of resources.
(3) Fair subject selection: Subjects should not be selected on the basis of privilege or vulnerability, in order to ensure justice for participants.
(4) Favorable risk-benefit ratio: Risks must be minimized and benefits enhanced in order to prevent exploitation and unnecessary harm to participants.
(5) Independent review: Unaffiliated individuals (e.g., IRBs) must review the research and approve, amend, or terminate it if necessary, in order to minimize conflicts of interest and create accountability.
(6) Informed consent: Individuals should be informed about the research and provide voluntary consent, in order to respect subject autonomy.
(7) Respect for enrolled subjects: Participants should have their privacy protected, the opportunity to withdraw from the experiment, and their well-being monitored, in order to respect subject autonomy and welfare.
Table 1. Requirements of ethical research and their implications. (Emanuel et al., 2000)
The Commercialization of Clinical Research
The pharmaceutical industry is one of the most profitable industries in the United States, and it finances most clinical research on prescription drugs. In contrast to federal funding, the drug industry's share of clinical research funding has been growing. From 1980 to 2000, the drug industry's investment in biomedical research grew from about 32% to 62%, while the government's investment fell. (5) While drug companies are responsible for much of the innovation in medicine, their role and influence in running clinical trials should be carefully examined.
Pharmaceutical companies develop drugs in their own laboratories and have traditionally sponsored academic health centers to carry out clinical testing. Drug companies partner with academic health centers for three main reasons: academic health centers can provide a pool of potential participants, academic researchers to design the trials, and publications in prestigious academic journals that can help market drugs. (5) As a result of these benefits, in the past twenty years most drug studies by pharmaceutical companies have been affiliated with academic researchers and medical schools.
Academic entities differ from pharmaceutical companies in their primary responsibilities. The main responsibilities of academic physicians and medical schools have been scientific research, patient care, and education. In comparison, the main responsibility of the drug industry is to generate profit for shareholders. During clinical trials, the objective of drug companies is to receive Food and Drug Administration (FDA) approval and to put the resulting treatment on sale. To increase the likelihood of positive results during clinical trials and, as a result, FDA approval, drug companies can provide researchers and academic centers with financial incentives. Various forms of incentives have been identified, including gifts, direct profit through shares or share options, and non-trial research funding. (6) These gifts can create conflicts of interest for the academic researcher, who may be influenced to act in the commercial interests of industry instead of the interests of patients and scientific research. Financial gifts go not only to individual researchers but also to entire academic centers. Montaner et al. explain that, "often academic health centers rely on non-specific industry funding to pursue their overall objectives. Industry-sponsored basic research programs and endowed chairs are not uncommon in today's academic environment..." (6)
Such conflicts of interest are widespread among individual researchers and academic centers: roughly one quarter of academic staff and two thirds of academic centers have financial relationships with industry. (7) Unfortunately, no one knows the total amount of money that drug companies give to academia, but Angell estimates that the total comes to tens of billions of dollars, based on the annual reports of the top nine US drug companies. (8) Although it is unclear how much impact financial incentives have on researchers and institutions, conflicts of interest should be exposed or regulated in some manner.
The Belmont Report and IRBs were created before the scientific community considered conflicts of interest a major issue. As a result, these forms of regulation do not include any guidelines related to conflicts of interest. Instead, conflicts of interest are managed by other sources, namely academic journals and institutions. Most academic journals and institutions require that researchers report ties to industry in publications and research. Montaner et al. argue that further definition is needed of how and where conflicts of interest are reported. The authors cite a study published in 2000, which surveyed the 89 academic institutions with the highest National Institutes of Health (NIH) funding. The survey found wide variation in conflict-of-interest policies and a lack of specificity about what types of relationships with industry were not allowed. (10) Additionally, the study found that only 19% of the conflict-of-interest policies placed limitations on the financial interests that researchers could have in industry. Further definition and regulation of conflicts of interest are needed in the policies of academic institutions and academic publications.
In the last twenty years, the link between academic health centers and pharmaceutical companies has weakened. Increasingly, drug companies are able to run clinical trials independently of academic centers. In the past, drug companies did not have the expertise to run clinical trials and had to rely on academic researchers. Now, more and more drug companies hire in-house clinical researchers to design and interpret drug trials. (9) As one can imagine, clinical trials designed by in-house researchers are prone to the same issues as industry-sponsored research in academia. Unlike academics, in-house researchers are paid directly by pharmaceutical companies, so their job security depends on the company. As a result, in-house researchers may be more inclined to think in the commercial interests of drug companies. In-house researchers do not have affiliations with academic centers, and thus have no substantial responsibility to publish research or care for patients. Thus, they may be less likely to act in the interests of scientific research or participants when conducting clinical trials. With in-house researchers, pharmaceutical companies gain more control over the evaluation of their own products.
The link between academia and drug companies has also weakened due to the expansion of commercial entities related to clinical trials. In particular, pharmaceutical companies are frustrated with the speed of research in academic centers. The speed of clinical trials is an important financial factor for drug companies. Each day of delay in getting approval from the FDA costs pharmaceutical companies $1.3 million on average and shortens the amount of time the company holds the patent on the drug it intends to test. (9) Since academic researchers have multiple responsibilities, including patient care, teaching, and research, the rate of trials can be slowed. Additionally, drug companies must sign contracts with academic researchers and gain approval from the institution's IRB. This process can be delayed by the slow rate of bureaucracy in academia. (9) Linking with other companies during clinical trials allows drug companies to evade some of the complications associated with academic centers.
The slow rate of trials in academia has led pharmaceutical companies to increasingly partner with contract research organizations (CROs) during clinical trials. The use of CROs by drug companies has increased considerably in the past decade. In 1993, only 28% of industry-sponsored clinical trials hired CROs, while in 2007, 64% of trials hired CROs. (10) In essence, CROs are businesses that recruit participants and run clinical trials for their clients. Compared to academic centers, CROs are able to complete clinical trials faster, as running clinical trials is their only responsibility. Critics argue that this aspect of CROs can potentially generate ethical issues. Angell proposes that CROs must accede to drug companies' demands and work in their commercial interests because drug companies are their only clients. (11) Thus ethical guidelines and scientific interests may become a second priority to commercial interests. Angell's position is supported by a handful of publicized controversies relating to CROs. In one incident in 2006, the CRO SFBC International allowed patients to remain in a trial despite having active tuberculosis; as a result, nine other participants later tested positive for tuberculosis. In another incident in 2005, Bloomberg News reported that a clinical trial test center in Miami had poor living conditions for participants and minimal oversight by researchers.
Elliott notes that it is telling that an investigative newspaper, not regulators, discovered the case in Miami. (3) Currently, the regulation of CROs by governmental agencies is limited. Elliott points out that the Office for Human Research Protections, a governmental office that provides ethical oversight, does not have jurisdiction over privately sponsored studies. Additionally, the FDA randomly inspects only about 1% of clinical trials for potential ethical issues. Ethical issues generated by CROs are new to governmental agencies and bioethics. The IRB was not designed to monitor quality-of-life issues during clinical trials, which have been shown to be a recurrent problem in CRO-run clinical trials. Additional forms of regulation are needed in response to the problems generated by CROs. Schuman recommends requiring certification of researchers or testing sites to ensure that researchers are competent and that the quality of testing sites is acceptable. (12) Schuman also notes that monitoring CROs can be more difficult because many of their trial sites are international. CROs choose to conduct trials in places like Eastern Europe, Russia, India, and other parts of Asia, where costs are lower and there is less governmental oversight. (12) Research in developing countries is another important ethical issue in clinical research that should be examined by scientists and government authorities. Clearly, international research requires added regulation by both the United States and countries abroad.
In addition to CROs, another commercial entity in clinical research is the for-profit IRB. Customarily, IRBs have been non-profit organizations associated with academic institutions. However, work overload among academic IRBs has led to the use of for-profit IRBs by pharmaceutical companies. (13) For-profit IRBs exist independently of academic institutions and function as businesses. On these grounds, it can be argued that for-profit IRBs are prone to the same financial pressures as CROs. The only clients of for-profit IRBs are pharmaceutical companies, so for-profit IRBs may find themselves functioning in the commercial interests of their sponsoring companies. For-profit IRBs can jeopardize the quality of IRB review and skirt the Emanuel et al. requirement for independent review. An IRB influenced by the commercial interests of drug companies may not be able to provide a truly independent assessment of research design and potential ethical issues.
Issues related to the commercialization of clinical research, including conflicts of interest in academia, in-house researchers, CROs, and for-profit IRBs, converge to give pharmaceutical companies unprecedented control over how clinical trials are approved and run. Increased control can allow drug companies to run clinical trials in a manner consistent with their commercial interests, regardless of scientific and ethical interests. In particular, the commercialization of clinical research can lead to the manipulation of experimental design, publication of results, and participants in clinical trials.
Manipulation of Experimental Design
The interests of drug companies can play an important role in how clinical research is conducted and reported. Studies have shown that industry-sponsored research is more likely to have an outcome that favors the sponsor than studies with governmental or non-profit sponsors. In an examination of this trend, one study found that trials supported by pharmaceutical companies were about three times more likely to report in favor of the new therapy. (7)
This discrepancy may be explained by the various methods industry-sponsored research can use to manipulate the results of a clinical trial. Bodenheimer describes one such method. (10) He explains that, "If a drug is tested in a healthier population (younger with fewer disease conditions) than the population that will actually receive the drug, a trial may find that the drug relieves symptoms and creates fewer side effects than will actually be the case." To support his claim he cites a study that found that only 2.1% of participants in trials of nonsteroidal anti-inflammatory drugs were 65 years or older, despite the fact that these drugs are more commonly used by the elderly, who also experience a higher incidence of side effects. (14) A scientifically rigorous study would select participants representative of the population who will receive the drug. Doing otherwise, as in this case, demonstrates how commercial interests can take precedence over scientific validity.
Additional methods used by industry-sponsored research relate to randomized controlled trials (RCTs). RCTs are designed to give researchers an objective measure of the efficacy of a drug. In RCTs, participants are randomly divided into experimental and control arms. The experimental arm receives the new drug, while the control arm receives the standard drug or, if no standard treatment exists, a placebo. RCTs are fundamentally based on uncertainty: in a scientifically rigorous RCT, the experimenter does not know which treatment will be more effective. To remove the experimenter's bias, some clinical trials are double-blind, meaning neither the experimenters nor the participants know which treatment each participant is receiving. Thus, double-blind studies remove the confounding variable of the experimenters' and participants' expectations and biases. However, double-blind studies are often not possible in clinical trials. In many cases participants need to know what treatment they are receiving in order to plan the rest of their drug regimen, and researchers must know the treatment in order to respond to side effects and other health concerns.
When industry-sponsored research uses RCTs, experimenters should be unbiased toward the new and old treatments. However, this may be unlikely due to conflicts of interest within academia and within drug companies themselves. Industry-sponsored researchers are prone to believe the new drug is more efficacious from the beginning of the study. As a result of these biases, clinical trials can be manipulated in a manner that compromises scientific validity. Backstone, an economist working in the pharmaceutical industry, comments that "the design choices in planning of RCTs, such as which comparators to use, which endpoints to evaluate and which sample size to adopt, are effectively investment appraisal decisions." (15) Thus, the design of RCTs can be based on commercial interests rather than scientific validity.
Bodenheimer provides one example of how RCTs can be manipulated. (10) He reports that, in order to make new drugs appear efficacious, industry-sponsored research compares the new drug with a lowered dose of the competing product. Bodenheimer describes a study which discovered that trials of nonsteroidal anti-inflammatory drugs always found the new drug to be superior or equal to the competing product. (16) However, in 48% of those trials the dose of the new drug was higher than that of the competing drug. In a scientifically rigorous experiment, both drugs would be given at equivalent dosages; in this case, a scientifically rigorous method would have led to poor results for pharmaceutical companies. Schweitzer et al. provide a second method related to RCTs. The authors comment that comparing a new drug to a placebo "tends to increase the likelihood that the study's drug will be statistically significant, compared to testing against an existing drug." (17) As a result, drug companies prefer to design studies that compare the new drug to a placebo. From the patient's perspective, comparison to a placebo is not particularly useful; a comparison to existing treatments is much more informative, because it allows the patient and physician to see how the efficacy of the two drugs compares. In fact, it is most useful to compare the new drug to multiple existing treatments, for the same reason. However, a third technique industry-sponsored research may use is comparing a new drug to only one existing treatment instead of multiple alternatives. For drug companies, comparison to multiple treatments raises the fear that their product will prove inferior to existing treatments, and as a result it is not the preferred research design. (17)
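The statistical mechanism behind the preference for placebo comparisons can be illustrated with a back-of-the-envelope power calculation. The Python sketch below uses the standard normal approximation for a two-arm trial with entirely hypothetical numbers (a new drug that improves a symptom score by 5 points over placebo but only 1 point over an existing treatment, with a standard deviation of 10); the function name and figures are illustrative assumptions, not drawn from any cited study.

```python
import math

def power(effect_diff, sd, n_per_arm, alpha_z=1.959963984540054):
    """Approximate power of a two-arm trial (two-sided 5% test),
    using the normal approximation for a difference in means."""
    se = sd * math.sqrt(2.0 / n_per_arm)          # standard error of the difference
    z = effect_diff / se - alpha_z                # distance past the critical value
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical scenario: 100 participants per arm, sd = 10.
print(power(5.0, 10.0, 100))  # new drug vs placebo: power ~0.94
print(power(1.0, 10.0, 100))  # new drug vs active comparator: power ~0.11
```

With the same sample size, the placebo-controlled design detects a significant effect roughly 94% of the time, while the head-to-head comparison does so only about 11% of the time, which illustrates why placebo comparisons are the commercially safer design choice.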
Despite the potential for manipulation in industry-sponsored research, RCTs are still considered the 'gold standard' in clinical research, primarily because the FDA approves drugs on the basis of successful RCTs. This standard is useful because it allows the FDA to receive rigorous evidence of the effects of a drug at the end of clinical trials. At the same time, the relevance of RCTs to industry-sponsored research can be questioned. The purpose of RCTs is to provide objective data but, as Schweitzer et al. and Bodenheimer conclude, researchers can manipulate RCT design to produce favorable results.
From a bioethical perspective, manipulation of experimental design is important because it relates to the Emanuel et al. requirement for scientific validity. Emanuel et al. argue that research that is designed or conducted poorly is unethical because it "exposes subjects to risks or inconvenience to no purpose." (4) On these grounds, some industry-sponsored clinical trials can be considered unethical. Additionally, manipulation of experimental design by drug companies violates the Emanuel et al. requirement for social and scientific value. Drugs evaluated under manipulated conditions have limited medical and social value, considering that such drugs are not guaranteed to have a positive effect. Manipulation of the design does not serve a scientific, social, or medical purpose. Instead, it serves the commercial interests of pharmaceutical companies.
To prevent manipulation, further regulation of clinical trials is needed. One potential form of regulation is to create a higher standard of ethical research for IRB approval. IRBs should be aware of the potential for manipulation in clinical trial design and look for signs of such behavior. Freemantle et al. offer an alternative approach of increased monitoring of clinical trials. (18) The authors suggest the expanded use of Data Safety and Monitoring Committees (DSMCs) as a potential solution to these issues. Like IRBs, DSMCs are independent advisory boards that provide a third-party perspective on the experiment. However, DSMCs are focused on monitoring during the course of the experiment, while IRBs are focused on approving research proposals. Freemantle et al. explain that DSMCs could balance the commercial interests of industry-sponsored research and advocate for the safety of participants. Currently, DSMCs are recommended in clinical trials but not required. (18) The expanded use of DSMCs and higher standards for IRBs could help guarantee that industry-sponsored research is not manipulated in a way that artificially generates positive results.
Manipulation of Publication
In addition to manipulating experimental procedures, industry-sponsored research can manipulate how and when results are published in academic journals or other sources. Pharmaceutical companies do not need journal publications to receive FDA approval. However, journal articles are important for marketing purposes. Physicians often rely on the results of clinical trials published in journal articles to determine which drugs to prescribe to patients. Positive results in journal articles help persuade physicians to prescribe the drug in question and, in turn, generate revenue.
To facilitate this, Angell notes that "positive results are often repeatedly published in slightly different forms, in order to mislead doctors." (12) Additionally, drug companies may use ghostwriters to promote positive results. Ghostwriters are professional medical authors who are hired by drug companies to write an article but are not named as authors. One study published in 1998 found that 11% of publications included contributions by anonymous ghostwriters. (19) Bodenheimer explains, "ghost writers typically receive a packet of materials from which they write the article; they may be instructed to insert a key paragraph favorable to the company's product." (10) Ghostwriters can also help publicize or even exaggerate positive results without attention to the true nature of the data.
In order to protect commercial interests, in some cases pharmaceutical companies attempt to delay or prevent the publication of negative results. In a survey of life-science faculty members with industry sponsorship, 27% reported delays of six months or more in the publication of results. (10) In some cases, withholding negative results has led to lawsuits and congressional hearings. Angell describes an example in which GlaxoSmithKline withheld data suggesting that paroxetine, a treatment for depression, was ineffective and could be harmful to children and teenagers. (12) In a released internal document from GlaxoSmithKline, a company official explained that the company decided to withhold the negative results because "It would be commercially unacceptable to include a statement that efficacy had not been demonstrated, as this would undermine the role of paroxetine." In this example, GlaxoSmithKline protected its financial interests at the expense of the health and welfare of the patients prescribed the drug.
It is important to remember that in science and medicine, a neutral or negative trial is as valuable as a positive one. In the GlaxoSmithKline example, publication of the negative result would likely have affected how paroxetine was prescribed to teenagers and children. For researchers and physicians, negative results are informative in shaping how drugs are prescribed and how future experiments are designed. At the same time, it is true that negative results are less commonly published in science in general. Freemantle argues that commercial influences may make this pattern more common. (17) In support of this proposal, Angell finds that the suppression of negative results is pervasive in industry-sponsored clinical research. Angell cites a study of 74 clinical trials of antidepressants, which found that 37 of 38 positive studies were published, while 33 of 36 negative studies were either not published or published in a way that misleadingly conveyed a positive outcome. (12)
In clinical research, it is essential to release negative data and to collect data in a manner that is as objective as possible. Financial pressures that result from the commercialization of clinical research may compromise these values. As previously mentioned, most journals require that conflicts of interest and ties to industry be reported. Reporting conflicts of interest helps minimize the potential damage and influence of unsubstantiated industry-sponsored research. However, these rules do not prevent physicians from being influenced by misleading publications and thereby feeding into the commercial interests of drug companies.
Manipulation of Subjects
The commercialization of clinical research has also directly affected how participants are recruited for trials. In the early 1970s, before the publication of The Belmont Report and the implementation of IRBs, most drugs in the first phase of clinical trials were tested on prisoners. (20) However, the development of trial regulations led pharmaceutical companies to find a new population of subjects. To attract potential participants, drug companies have turned to compensation.
Payment of participants is a controversial area in all research involving humans. In general, it is thought that payment should be high enough to compensate for time lost during trials, but not so high that it causes potential participants to enroll in a study against their better judgment. (3) Despite this general understanding, the pay rate for clinical trials has been increasing due to the larger financial pressures on pharmaceutical companies. As previously discussed, speed matters to drug companies during trials. In particular, speed in recruiting subjects is critical to the pace of the overall experiment. As a result, high rates of compensation serve as a means for drug companies to quickly assemble a participant pool. Abadie estimates that in the 1990s participants received $100 a day for volunteering in a clinical trial. Now participants can receive $1,200 for three to four days in a less intensive study, or up to $5,000 in a longer, more intensive study. (20) Elliott proposes that high rates of pay create “shadow economies” in North American cities with research hubs. (3) These economies are composed of individuals who participate in clinical trials primarily for the financial compensation and who frequently enroll in trials in place of a more conventional job.
In an ethnography of clinical trial participants in Philadelphia, Abadie notes that most of the individuals he met volunteered for the financial compensation, not out of altruism. (20) This community of people, whom he calls “professional guinea pigs,” mostly volunteers for Phase I trials of drug development. Phase I trials are the first wave of clinical trials and function to test the effects of a drug in a small group of healthy people. Because this participant group is healthy, there is no health-related reason to volunteer for a Phase I trial. As a result, participants may view their participation in clinical trials as a job, albeit one with very real risks and dangers. In several cases participants have been harmed in Phase I trials, and in a few isolated incidents participants have died as a result of the treatment received. However, most pharmaceutical companies and even academic institutions do not offer treatment or compensation for injuries accrued during clinical trials. (3) Clearly, more guidelines are needed to create accountability and to protect subjects from a medical and legal perspective.
As a result of this potential for harm, Elliott argues that this approach to subject recruitment exploits a research underclass of poor people. (3) As in the Tuskegee Study and the Willowbrook Hepatitis Study, a vulnerable population is used during clinical trials. Yet, unlike in those studies, subjects volunteer to participate in these experiments and have given informed consent. In this context, informed consent may not be the only requirement for ethical research. Elliott argues that this approach can still be considered exploitation because poor volunteers are unlikely to have full-time employment and, as a result, health insurance. So, essentially, we allow the burden of testing new drugs to fall on poor people who are less likely to have access to those same drugs. Using poor individuals as test subjects clearly violates the Emanuel et al. requirement for fair subject selection and creates new ethical issues regarding informed consent. Participants in Phase I trials may give informed consent, but they should still be protected by researchers and governmental organizations.
It is important to understand that the cornerstones of regulation in clinical trials, the Belmont Report and IRBs, were created in a different context from the present circumstances of clinical research. These regulatory measures were created in response to specific incidents in which researchers compromised the rights and welfare of participants in the name of scientific research. In the current state of commercialized clinical research, however, a new set of pressures can influence researchers. Industry-sponsored research is more likely to be shaped by commercial interests, which can in turn undermine the rights and welfare of participants. This new set of issues requires new forms of regulation, monitoring, and accountability that can counterbalance the commercial interests of businesses. Proponents of the status quo may argue that increased regulation would stifle innovation in the drug industry and would, as a result, negatively affect society. However, this paper has shown that letting commercial entities evaluate their own products can be dangerous to participants in clinical trials, to science, and even to society. We must carefully examine the role commercial entities play in clinical research, for better and for worse, because private industry will likely remain the main player in drug development for the foreseeable future.
(1) Emanuel, E., Crouch, R., Arras, J., Moreno, J., and Grady, C., eds. Ethical and Regulatory Aspects of Clinical Research: Readings and Commentary. Baltimore: Johns Hopkins University Press, 2003.
(2) “The Belmont Report” National Institutes of Health. http://ohsr.od.nih.gov/guidelines/belmont.html. 6 Dec 2010.
(3) Elliott, C. Exploiting a Research Underclass in Phase 1 Clinical Trials. NEJM. 358 (2008): 2316-2317.
(4) Emanuel, E., Wendler, D., and Grady, C. What Makes Clinical Research Ethical? JAMA. 283 (2000): 2701-2711.
(5) Moses, H., Martin, JB. Academic Relationships with Industry: A New Model for Biomedical Research. JAMA. 285 (2001): 933-935
(6) Elliott, C. Guinea-pigging. The New Yorker. 7 Jan 2008.
(7) Montaner, J., O’Shaughnessy, M., and Schechter, M. Industry-sponsored Research: a Double Edged Sword. The Lancet. 358 (2001)
(8) Schott, G. The Financing of Drug Trials by Pharmaceutical Companies and Its Consequences. Dtsch Arztebl Int. 107 (2010): 279-85.
(9) Angell, M. Drug Companies & Doctors: A Story of Corruption. The New York Review of Books. 15 Jan 2009.
(10) Bodenheimer, T., Uneasy Alliance: Clinical Investigators and the Pharmaceutical Industry. NEJM. 342 (2000)
(11) Shuchman, M. Commercializing Clinical Trials - Risks and Benefits of the CRO Boom. NEJM. 357 (2007)
(12) Angell, M. Industry-Sponsored Clinical Research: A Broken System. JAMA. 300 (2008)
(13) Lemmens, T., and Freedman, B. Conflict of Interest and Commercial Review Boards. Milbank Quarterly. 78 (2000): 547-84.
(14) Rochon PA., Berger, PB., Gordon, M. The Evolution of Clinical Trials: Inclusion and Representation. CMAJ. 159 (1998): 1373-4.
(15) Backhouse, M.E., An Investment Appraisal Approach to Clinical Research. Health Econ. 7 (1998): 605-619
(16) Rochon, PA., Gurwitz, JH., and Simms, RW. A Study of Manufacturer-Supported Trials of Nonsteroidal Anti-inflammatory Drugs in the Treatment of Arthritis. Arch Intern Med. 154 (1994): 157-63.
(17) Schweitzer, S., Pharmaceutical Economics and Policy. New York City: Oxford University Press, 2007.
(18) Freemantle, N. & Stocken, D. The Commercialization of Clinical Research: Who Pays the Piper Calls the Tune. Family Practice. 21 (2004): 335-336
(19) Flanagin, A., Carey, LA., Fontanarosa, PB., et al. Prevalence of Articles with Honorary Authors and Ghost Writers in Peer-reviewed Medical Journals. JAMA. 280 (1998): 222-4.
(20) Abadie, R. Professional Guinea Pig: Big Pharma and the Risky World of Human Subjects. Durham, NC: Duke University Press, 2010.