Human experimentation: The good, the bad, and the ugly


Ever since the earliest medical practitioners treated the first patients, a tension has existed between potentially beneficial innovation and unintentional harm. For many centuries, doctors relied on their own experience or intuition to determine what was best for those they treated. It was not until the 17th century that Francis Bacon introduced the scientific method, which consisted of systematic observation and the testing of hypotheses. In clinical science, this provided an objective means of determining which treatments were in the best interest of patients. Since then, society has greatly benefited from remarkable medical advances based on what is essentially human experimentation, much of it noble, but some of it tragic, misguided, and even demonic.

The most notorious human research abuses were those perpetrated by the Nazi regime during the Holocaust. Of the 1,500 sets of twins forced to participate in Josef Mengele’s infamous twin experiments at the Auschwitz concentration camp, only 200 survived. Many of these investigations were genetic experiments intended to prove the superiority of the Aryan race. Little useful scientific information was gained from these inhumane and evil studies.

However, totalitarianism is not a prerequisite for the mistreatment of human subjects. The American research community has its own checkered past. Perhaps the best-known abuse is the Tuskegee syphilis study, conducted between 1932 and 1972 by the U.S. Public Health Service. Four hundred impoverished African American men infected with syphilis, who were never fully informed about their disease, were closely followed in order to record the natural history of this deadly and debilitating illness. These patients were not treated with penicillin even after the drug became available in 1947. As a result, more than one-third of the subjects died of their disease, many of their wives contracted syphilis, and numerous children were needlessly born with congenital syphilis.

On the other end of the ethical scale are a number of noble researchers scattered throughout history who insisted on experimenting on themselves before subjecting others to their treatments or procedures. A prime example is a courageous and creative German surgical intern, Werner Forssmann, who paved the way for heart surgery through self-experimentation. Even into the 20th century, it was taboo for a physician to touch the living heart, so much of its physiology and pathophysiology remained shrouded in mystery. In 1929, Dr. Forssmann performed a cut-down on his own antecubital vein, advanced a ureteral catheter into the right side of his heart, and then descended a flight of stairs to confirm its position by x-ray. Later experiments, also performed on himself, resulted in the first cardiac angiograms. Although heavily criticized by his superiors and the German medical establishment, Dr. Forssmann, an obscure urologist and general surgeon at the time, was eventually rewarded when he shared the Nobel Prize in 1956.

From the very beginning of surgery as a clinical science, surgeons have sat on the precipice between beneficial innovation and unintentional harm to their patients. Because of the very nature of what they do, it has rarely been possible for them to experiment on themselves before testing their ideas on others. Every operation ever devised, occasionally with, but often without, prior animal experimentation, has had its initial human guinea pigs. In fact, surgeons have generally been given freer rein to try new and untested procedures or to modify older accepted ones. They have had greater license than their counterparts who innovate with drugs and medical devices and who are thus more tightly regulated by agencies such as the Food and Drug Administration.

In the best of circumstances, surgical patients are fully informed as to the potential consequences of a novel operation, both good and bad, and the results are carefully recorded to determine the benefit/harm ratio of the procedure. Ideally, though this is often not possible, the new approach is compared with a proven alternative therapy in a carefully designed trial. Unfortunately, such careful analysis has not always been done.

A glaring example of surgical human experimentation gone wrong is the frontal lobotomy story. In the early part of the 20th century, mental institutions in this country and throughout the world were filled with desperate patients for whom few therapeutic alternatives were available. Many of these patients were incapable of giving meaningful informed consent. In 1935, the frontal lobotomy was introduced by António Egas Moniz, a Portuguese neurologist, who later shared a highly controversial Nobel Prize for his discovery. In 1946, an American neuropsychiatrist, Walter Freeman, modified the procedure so that it could be performed by psychiatrists with an ice pick–like instrument via a transorbital approach. A neurosurgeon performing a craniotomy, general anesthesia, and an operating room were no longer necessary, and this simpler operation proliferated rapidly despite its increasingly well-known and devastating side effects of loss of personality, decreased cognition, and even death. Only after more than 40,000 procedures had been done in the United States did mounting criticism eventually lead to a ban on most lobotomies.
