Americans love technology, especially new medical technology. The cliché ‘America has the best healthcare in the world’ really means that American healthcare deploys advanced technology more extensively than any other healthcare delivery system in the developed world. Our federal and state government programs in biomedical research, our universities and affiliated medical schools, and our vast medical/industrial complex, including our pharmaceutical and device manufacturing companies, produce thousands of new technologies each year. The National Institutes of Health (NIH) is the primary agency of the US Government responsible for biomedical and health-related research. As of 2003, the NIH was responsible for 28%—about $26.4 billion—of total annual biomedical research funding in the US, with most of the rest coming from industry.
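A quick back-of-the-envelope calculation (my own arithmetic, not a figure from the article) makes the scale of this enterprise explicit: if the NIH’s $26.4 billion represents 28% of the total, then total annual US biomedical research funding in 2003 was roughly

\[ \frac{\$26.4\ \text{billion}}{0.28} \approx \$94\ \text{billion per year}. \]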
This supremacy in the development, dissemination, and clinical application of technology is also the reason our per capita expenditure on healthcare exceeds that of other ‘first-world’ countries by a factor of two or more. For those with rare afflictions, or with conditions requiring sophisticated, high-tech diagnostic or therapeutic interventions (provided they are wealthy or enjoy good health insurance coverage), America is the place to be.
However, does this bounty of technology mean Americans enjoy better health—or medical care—than the rest of the world? The fact is that we are not getting our money’s worth. While our per capita spending exceeds that of other developed nations, we rank well below other ‘first-world’ countries on measures of public health such as life expectancy, infant mortality, and maternal mortality.
At the heart of this dilemma is the distinctly American love affair with technology. Technology in health is defined broadly to include drugs, devices, procedures, and organized care such as chronic disease management systems. ‘New is better’ is powerful psychology for patients who want and need access to the ‘best.’ To be sure, the triumph of modern medicine is the adoption of the scientific method and the application of science to medical practice. Much medical technology truly is life-saving and brings enormous benefits to patients. One has only to consider antibiotics, insulin, blood transfusion, vaccination, heart valves, artificial joints, open heart surgery, cancer screening, and chronic disease management systems for heart failure and asthma, to name just a few.
Conversely, a great deal of technology is actually of little benefit, often duplicative, and sometimes even harmful. Examples are plentiful: ‘me too’ pharmaceuticals; drugs such as Vioxx, which, after US Food and Drug Administration (FDA) approval, initial introduction, widespread use, and exuberant marketing, was later withdrawn when safety issues came to light; devices such as defective heart valves and pacemaker lead systems; and surgical procedures such as carotid sinus denervation for asthma and lobotomy for schizophrenia.
Even technologies that have been shown to be safe and effective for specified indications can be inappropriately applied, over-used, misused (as in some ‘off-label’ use), or under-used. The studies of variation in medical practice by Wennberg and the Dartmouth group documented significant local and regional differences in the use of medical technology, concluding that much medical practice is arbitrary and lacks convincing supportive evidence.
The key to the success of any effort at healthcare reform will be to develop a competent and fair system for evaluating which technologies merit adoption, always balancing benefit against affordability.
Technology Assessment
There is a growing demand for ‘evidence-based medicine,’ i.e. the requirement for rigorous scientific evidence in clinical practice, particularly as innovation has proliferated and the medical armamentarium has become increasingly complex and expensive. There are a number of efforts, both public and private, to apply rigorous scientific analysis to the assessment of medical technology (see Table 1).
One such activity is the California Technology Assessment Forum (CTAF), a private, non-profit organization sponsored by The Blue Shield of California Foundation. The CTAF panel of experts consists of approximately 15 voting members, including medical and surgical sub-specialists, methodologists, ethicists, and consumer advocates, who meet three times a year in public session to evaluate four or five new medical technologies at each session. CTAF employs reviewers from the University of California, San Francisco medical faculty, who search the published literature relevant to the topic at hand and prepare a report on the extent to which the published data fulfill the five CTAF criteria (Table 2). After the Panel has considered the reviewer’s recommendation, heard testimony from experts, and held a full discussion, a vote is taken.
If the vote affirms that all five criteria are met, the technology is declared ‘approved’ for clinical use in the conditions specified in the studies cited. If the CTAF criteria are not met, the technology is said to remain ‘investigational.’ It is emphasized at each public meeting that the process is focused on safety and effectiveness and is not intended to make insurance coverage decisions; nor is cost considered in CTAF’s assessments.
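As a schematic illustration only, the outcome rule described above can be written as a short sketch. The criterion labels below are paraphrased placeholders of my own, since Table 2 is not reproduced here, and the function is not CTAF’s actual procedure.

# Illustrative sketch only: a technology is 'approved' only if all five criteria
# are judged to be met; otherwise it remains 'investigational'. Criterion names
# are paraphrased assumptions, not the wording of CTAF's Table 2.
from typing import Dict

def ctaf_outcome(criteria_met: Dict[str, bool]) -> str:
    """Return 'approved' if every criterion is met, else 'investigational'."""
    return "approved" if all(criteria_met.values()) else "investigational"

# Hypothetical example: one criterion judged not yet demonstrated.
print(ctaf_outcome({
    "regulatory_approval": True,
    "evidence_permits_conclusions": True,
    "improves_net_health_outcomes": True,
    "as_beneficial_as_established_alternatives": False,
    "improvement_attainable_outside_research_settings": True,
}))  # prints: investigational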
Table 3 lists a sample of topics considered over the past six years, highlighting those approved. The overall rate of technologies meeting CTAF criteria is just under 30%. Failure to meet the criteria may simply mean that the data are insufficient to draw conclusions; the technology may be promising but in need of further study, and is not necessarily unsafe or ineffective.
Healthcare Reform
There are three central elements in the current debate about healthcare reform:
- increasing access, i.e. providing some degree of insurance coverage to the estimated 61 million Americans who are uninsured (45 million) or underinsured (16 million);
- improving the quality of care, i.e. eliminating some of the medical errors leading to death and disability, and promoting ‘best practices’ shown to result in improved patient outcomes; and
- reducing the cost of care, i.e. ‘bending the cost curve.’
By far the most important of these elements is cost reduction. If we simply increase access to a system of care that is financially unsustainable, we will not have achieved meaningful reform. Moreover, gaining control of ever-rising healthcare costs by limiting the use of technologies that are not beneficial, or that may be harmful, will also improve the quality of care.
Healthcare is increasingly unaffordable to many Americans, not just the poor. Healthcare costs are hurting our competitive position in the global economy. Approximately 46% of healthcare expenditure is funded by Government, and healthcare-related entitlements are projected to place a heavy tax burden on future generations. Medicare is projected to become insolvent by 2017.
In 2007 we spent $2.2 trillion on healthcare, which is 16% of gross domestic product (GDP) and amounts to $7,420 for every man, woman, and child in the US. Since 1970, healthcare costs have risen at an average of 2.4 times the rate of the consumer price index (CPI). What is it that drives medical cost inflation? Health economists agree that far and away the single most important factor is new medical technology. To be sure, there are other influences as well: incentives to increase the volume of services to maximize income for doctors and hospitals; the insulation of both patients and doctors from the costs of care; the profit incentives for pharmaceutical and device manufacturers and their shareholders; and, not least, the public’s insatiable desire for any and all medical care from which they might conceivably benefit, whatever the cost, and especially if it is perceived as free. ‘Healthcare is a right, not a privilege’ sums up the sense of entitlement to the ever-increasing supply of medical and surgical services, as more and more of everyday life becomes subject to medical intervention.
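As a consistency check on the headline figures above ($2.2 trillion, 16% of GDP, $7,420 per person), using my own arithmetic and an assumed US population of roughly 300 million (a figure not given in the article):

\[ \text{GDP} \approx \frac{\$2.2\ \text{trillion}}{0.16} \approx \$13.8\ \text{trillion}, \qquad \frac{\$2.2\ \text{trillion}}{\sim 300\ \text{million people}} \approx \$7{,}400\ \text{per person}. \]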
With the advance of medical science, more and more of life has become ‘medicalized.’ For example, the inability to conceive was once described as being a ‘barren couple’ or ‘God’s will.’ Now, thanks to scientific advances, this condition can be treated as an ‘infertility problem,’ with sperm counts, ovulatory cycles, in vitro fertilization, and pharmaceutical manipulation of implantation and gestation. Other examples include the drug treatment of inattention in school children (attention-deficit disorder [ADD]), erectile dysfunction (ED), depression and anxiety, cosmetic surgery, laser-assisted in situ keratomileusis (LASIK) surgery to obviate the need for eyeglasses or contact lenses, bariatric surgery for obesity, and a variety of surgical interventions to improve athletic performance or prolong participation in sports. The point of these diverse examples is that, because of the ever-expanding applications of medical technology to everyday living, the demand for healthcare is insatiable, infinite, and, ultimately, unaffordable. How do we begin to pare back our healthcare expenditures?
Solutions
If technology is the major driver of rising healthcare costs, and if the central element of healthcare reform is cost containment, or ‘bending the cost curve’ as some have called it, then how we evaluate and use technology is one of the ways, and perhaps the most important way, in which we can achieve this goal.
We need to know ‘what works.’ In other words, how do we know which technologies work? This is an epistemological question, taking us into the realm of philosophy: epistemology is the branch of philosophy dealing with knowledge, i.e. ‘how do we know what we know?’ For medical technology, we need to examine the scientific evidence for safety and effectiveness, benefit in terms of patient-centered outcomes (survival, function, and quality of life), comparison with existing technologies, and whether the technology can be applied outside research environments.
The gold standard for ‘knowing what we know’ about ‘what works’ is the well-designed randomized controlled trial (RCT), appropriately powered statistically to allow inferences to be drawn. The CTAF process is a good example of this kind of technology assessment based on objective data published in the peer-reviewed medical literature.
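To make ‘appropriately powered’ concrete, the sketch below shows a standard normal-approximation sample-size calculation for a two-arm trial comparing proportions. The event rates (20% versus 30%), the significance level, and the power target are illustrative assumptions of my own, not figures from any CTAF review.

# Minimal sketch of a two-arm RCT sample-size calculation using the standard
# normal-approximation formula for comparing two proportions. The specific
# rates, alpha, and power below are illustrative assumptions only.
import math
from scipy.stats import norm

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate number of patients per arm to detect p1 versus p2 (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # about 1.96 for alpha = 0.05
    z_beta = norm.ppf(power)            # about 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical example: detecting an improvement in a good-outcome rate from 20% to 30%
print(n_per_arm(0.20, 0.30))  # roughly 290 patients per arm, about 580 in total

Trials much smaller than this cannot reliably distinguish an ineffective technology from one whose real benefit was simply missed, which is why adequately powered studies are central to any credible assessment process.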
Technology assessment is neither easy nor inexpensive. The alternative is the acceptance of technologies on the basis of ‘clinical experience,’ which at its best can identify beneficial technologies, but which is often quite wrong and, at worst, is subject to the malign influence of the provider’s (doctor’s or hospital’s) reimbursement needs, direct-to-public marketing, and the hopes and yearnings of the afflicted for any benefit, however unlikely and no matter the cost.
In our current system, where some technologies can cost vast sums and can be initiated by a stroke of the doctor’s pen, the idea that the doctor–patient interaction is inviolate is no longer tenable. As physicians we all value professional autonomy, but limiting the indiscriminate use of medical technology will require constraints: perhaps peer pressure, public disclosure, or some measure of coercion, such as denying payment for certain applications.
Public education is also crucial for cost control. It will require a cultural change to convince consumers that good medical care must be evidence-based. It is useful to reflect on three technologies that, after wide acceptance in medical practice, were later discredited when subjected to rigorous analysis (see opposite page). These three examples illustrate the power of evidence-based medicine to improve care while at the same time reducing healthcare expenditure.
For the public to embrace a culture in which medical practice is evidence-based, and to understand that this is a path to better and more affordable healthcare, will require adroit political and scientific leadership. This is admittedly a tall order, but polls showing that a large majority of Americans favor healthcare reform suggest that cultural change is possible. Comparative effectiveness studies, properly performed, that look at what works (safety and effectiveness), for whom and under what circumstances (appropriateness), and at what price (cost-effectiveness) are the path to genuine healthcare reform.
Healthcare legislation needs to emphasize cost containment, which, as I have argued, begins with acknowledging that technology assessment can lead us out of our upward spiral of healthcare costs by limiting the indiscriminate use of technology, the principal driver of medical cost inflation. This sounds like rationing, the dreaded ‘R-word,’ which it surely is. However, it is rationing in the sense of limiting costly technologies that are either not beneficial or not harmless, and those whose cost vastly exceeds their benefit. An example is the drug Tarceva, which costs about $3,500 a month and was approved by the FDA as a treatment for pancreatic cancer because it improved survival by 12 days.
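Put in conventional cost-effectiveness terms (my own arithmetic, using only the figures quoted above and leaving the duration of treatment, m months, as an unknown because it is not given), the Tarceva example works out to:

\[ \frac{\$3{,}500 \times m}{12/365\ \text{life-years gained}} \approx \$106{,}000 \times m\ \text{per life-year gained}, \]

so even a single month of therapy implies a cost on the order of $100,000 per life-year gained.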
The legislation and the resulting policies and structures needed to accomplish healthcare reform along the lines I am suggesting are beginning to take shape. As part of the American Recovery and Reinvestment Act (ARRA) of 2009, the so-called stimulus package, $1.1 billion has been allocated for comparative effectiveness research (CER). Also, much thought is being given to a national forum modeled on the National Institute for Health and Clinical Excellence (NICE), an agency of the British National Health Service (NHS) that has already produced reports on some 250 medical technologies and provides ‘advice’ on how they should be used in the NHS. NICE has existed for a decade, has been accepted by the British people, and has stimulated similar efforts in other countries to offer technology assessment and cost-effectiveness determinations by an agency that is independent, transparent, and evidence-based.
Everyone agrees that America must constrain healthcare costs, and America certainly has the technical expertise to create a national forum to evaluate new medical technologies competently and fairly. We simply cannot keep up our romance with technology, paying whatever providers and manufacturers demand for their services and products. The questions remain: Do we have the political will to limit ourselves? How much healthcare can we afford?