Evidence-Based Dermatology



Evidence-Based Dermatology: Introduction





Evidence-Based Medicine at a Glance





  • Evidence-based medicine (EBM) is the use of the best current evidence in making decisions about the care of individual patients.
  • EBM is predicated on asking clinical questions, finding the best evidence to answer the questions, critically appraising the evidence, applying the evidence to the treatment of specific patients, and saving the critically appraised evidence.
  • The EBM approach is most appropriate for frequently encountered conditions.
  • Results from well-designed clinical studies involving intact patients are at the pinnacle of the hierarchy of evidence used to practice EBM.
  • Recommendations about treatment, diagnosis, and avoidance of harm should take into account the validity, magnitude of effect, precision, and applicability of the evidence on which they are based.





 Brief History of Evidence-Based Medicine





 Until the 1940s there was no gold standard for determining whether a treatment was effective. The publication of the randomized controlled clinical trial (RCT) that demonstrated that streptomycin was effective in the treatment of pulmonary tuberculosis was a landmark event.1 Thereafter, the RCT quickly became the gold standard for determining whether a treatment is effective.






 Hundreds of RCT were conducted between the 1950s and the 1970s. However, their results were not cataloged or used systematically to inform medical decision-making. In 1972, Archie Cochrane, a British epidemiologist and physician, published his response to being asked to evaluate the effectiveness of the British National Health Service in delivering health care to the population of the United Kingdom. In his analysis, he concluded that medical science was poor at distinguishing interventions that were effective from those that were not and that physicians were not using available evidence from clinical research to inform their decision-making.2 See http://www.cochrane.org/about-us/history/archie-cochrane for more information on Archie Cochrane.






 Groups of like-minded epidemiologists and physicians responded to Archie Cochrane’s challenge by examining the methods by which medical decisions and conclusions were reached and proposed an alternative approach based on finding, appraising, and using available data from clinical research involving intact patients.3 In 1985, Sackett et al published Clinical Epidemiology: A Basic Science for Clinical Medicine, which detailed the rationale and techniques of this evidence-based approach.3 These authors and others reduced the rules of evidence to a small subset of principles that were easier to teach and understand, and reintroduced the concept in 1992.4 They named this technique evidence-based medicine (EBM). It was defined as the conscientious, explicit, and judicious use of the best current evidence in making decisions about the care of individual patients.4 Whereas making decisions about therapy has been the primary focus of EBM, its principles have been extended to diagnosis, prognosis, avoidance of the harmful effects of interventions, determination of cost-effectiveness, and economic analyses.






 The introduction of EBM was met with considerable hostility. It was perceived as cookbook medicine, old hat, too restrictive, and an insult to those already trying to practice good medical care. The definition was softened to include the integration of independent clinical expertise, best available external clinical evidence from systematic research, and the patient’s values and expectations.5






 The Cochrane Collaboration was formed, to some extent in response to Archie Cochrane’s challenge, to generate critical summaries, organized by specialty or subspecialty and updated periodically, of all relevant RCT.6 Created and maintained through the collaborative efforts of volunteers, the Cochrane Library is an impressive and useful compendium of systematic reviews, abstracts of systematic reviews, and the Cochrane Central Register of Controlled Trials, a database of over 620,700 controlled clinical trials. The Cochrane Library is the most complete and best-indexed database of RCT and controlled clinical trials and is the best and most efficient place to find evidence about therapy.






What Is “the Best Evidence?”





The acceptance of evidence-based medicine (EBM) in the specialty of dermatology has been slow and reluctant. The term and principles are understood by few and misunderstood by many. EBM is perceived as an attempt to cut costs, impose rigid standards of care, and restrict dermatologists’ freedom to exercise individual judgment. Practicing EBM in dermatology is hampered by the continued belief among dermatologists that clinical decisions can be guided by an understanding of the pathophysiology of disease, logic, trial and error, and nonsystematic observation.7,8 It is hampered also by a lack of sufficient data in many areas. As with EBM in general, therapy is often primarily emphasized; however, evidence-based approaches to diagnosis and avoidance or evaluation of harm are also important considerations.






Practicing EBM is predicated on finding and using the best evidence. Potential sources of evidence include knowledge regarding the etiology and pathophysiology of disease, logic, personal experience, the opinions of colleagues or experts, textbooks, articles published in journals, and systematic reviews. An important principle of EBM is that the quality (strength) of evidence is based on a hierarchy. The precise hierarchy of evidence depends on the type of question being asked (Table 2-1).9 This hierarchy consists of results of well-designed studies (especially if the studies have findings of similar magnitude and direction, and if there is statistical homogeneity among studies), results of case series, expert opinion, and personal experience, in descending order.6,8 The hierarchy was created to encourage the use of the evidence that is most likely to be accurate and useful in clinical decision-making. The ordering in this hierarchy has been widely discussed, actively debated, and sometimes hotly contested.10







Table 2-1 Grades of Evidence 






A systematic review is an overview that answers a specific clinical question; contains a thorough, unbiased search of the relevant literature; uses explicit criteria for assessing studies; and provides a structured presentation of the results. A systematic review that uses quantitative methods to summarize results is a meta-analysis.11,12 A meta-analysis provides an objective and quantitative summary of evidence that is amenable to statistical analysis.11 Meta-analysis is credited with allowing the recognition of important treatment effects by combining the results of small trials that individually lacked the power to demonstrate differences among treatments. For example, the benefits of intravenous streptokinase in treating acute myocardial infarction were recognized by means of a cumulative meta-analysis of smaller trials at least a decade before this treatment was recommended by experts and before it was demonstrated to be efficacious in large clinical trials.13,14 Meta-analysis has been criticized because of the discrepancies between the results of meta-analysis and those of large clinical trials.14–17 For example, results of a meta-analysis of 14 small studies of the use of calcium to treat preeclampsia showed a benefit to treatment, whereas a large trial failed to show a treatment effect.14 The frequency of such discrepancies ranges from 10% to 23%.14 Discrepancies can often be explained by differences in treatment protocols, heterogeneity of study populations, or changes that occur over time.14
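
To make the pooling step concrete, the following is a minimal sketch of a fixed-effect (inverse-variance) meta-analysis of log odds ratios from several small trials, the kind of calculation that underlies a cumulative meta-analysis. The trial counts are invented for illustration and do not come from the streptokinase or calcium studies cited above.

```python
# Minimal sketch: fixed-effect (inverse-variance) pooling of log odds ratios
# from several small trials. The counts below are illustrative only; they are
# not data from the streptokinase or calcium trials discussed in the text.
import math

# Each trial: (events_treated, n_treated, events_control, n_control)
trials = [
    (12, 100, 20, 100),
    (8,  80,  15, 80),
    (20, 150, 28, 150),
]

weights, weighted_logs = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c                      # non-events in each arm
    log_or = math.log((a * d) / (b * c))       # log odds ratio for this trial
    var = 1/a + 1/b + 1/c + 1/d                # its approximate variance
    w = 1 / var                                # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_or)

pooled_log_or = sum(weighted_logs) / sum(weights)
se = math.sqrt(1 / sum(weights))
ci_low = math.exp(pooled_log_or - 1.96 * se)
ci_high = math.exp(pooled_log_or + 1.96 * se)

print(f"Pooled OR = {math.exp(pooled_log_or):.2f} "
      f"(95% CI {ci_low:.2f} to {ci_high:.2f})")
```

Each small trial contributes in proportion to the precision of its estimate, which is how pooling can reveal a treatment effect that no individual trial had the power to demonstrate.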






Publication bias is an important concern regarding systematic reviews. It results when factors other than the quality of the study are allowed to influence its acceptability for publication. Several studies have shown that factors such as sample size, direction and statistical significance of findings, and investigators’ perceptions of whether the findings are “interesting” are related to the likelihood of publication.18,19






For example, in a study by Dickersin et al, the reasons given by investigators that results of completed studies were not published included “negative results” (28%), “lack of interest” (12%), and “sample size problems” (11%).18 Results of studies with small samples are less likely to be published, especially if they have negative results.18,19 This type of publication bias jeopardizes one of the main goals of meta-analysis (i.e., an increase in power through pooling of the results of small studies). Creation of study registers and advance publication of research designs have been proposed as ways to prevent publication bias.20,21 Publication bias can be detected by using a simple graphic test (funnel plot) or by several other statistical methods.22,23 In addition, for many diseases, the studies published are dominated by drug company-sponsored trials of new, expensive treatments. The need for studies to answer the clinical questions of most concern to practitioners is not addressed because sources of funding are inadequate.
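
As a concrete illustration of how funnel-plot asymmetry can be tested statistically, the sketch below applies a regression of the kind usually attributed to Egger and colleagues: the standardized effect is regressed on precision, and an intercept far from zero suggests asymmetry. The effect sizes and standard errors are invented for demonstration and are not drawn from any study cited here.

```python
# Illustrative sketch of a regression test for funnel-plot asymmetry
# (Egger-type test). Effect sizes (log odds ratios) and standard errors are
# invented for demonstration purposes.
from scipy import stats

log_or = [-0.51, -0.38, -0.62, -0.15, -0.70, -0.05]   # per-trial effects
se     = [ 0.12,  0.20,  0.35,  0.40,  0.55,  0.60]   # their standard errors

standardized = [e / s for e, s in zip(log_or, se)]     # effect / SE
precision    = [1 / s for s in se]                     # 1 / SE

# Regress standardized effect on precision; test whether the intercept
# differs from zero (requires SciPy >= 1.7 for intercept_stderr).
res = stats.linregress(precision, standardized)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(se) - 2)

print(f"Intercept = {res.intercept:.2f}, p = {p:.3f}; "
      "an intercept far from zero suggests funnel-plot asymmetry")
```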






Not all systematic reviews and meta-analyses are equal. A systematic review can be only as good as the clinical trials that it encompasses. The criteria for critically appraising systematic reviews and meta-analyses are shown in eTable 2-1.1. Detailed explanations of each criterion are available.11,24







eTable 2-1.1 Critical Appraisal of a Systematic Review 






The type of clinical study that constitutes best evidence is determined by the category of question being asked. Questions about therapy and prevention are best addressed by RCT.11,2426 Questions about diagnosis are best addressed by cohort studies.11,24,27,28 Cohort studies, case-control studies, and postmarketing surveillance studies best address questions about harm.11,24,29 RCT are a good source of evidence about the harmful effects of interventions for adverse events that occur frequently but not for rare adverse events. Case reports are often the first line of evidence regarding rare adverse events, and sometimes they are the only evidence. Methods for assessing the quality of each type of evidence are available.11,24






With regard to questions about therapy and prevention, the RCT has become the gold standard for determining treatment efficacy. Thousands of RCT have been conducted. Studies have demonstrated that failure to use randomization or to provide adequate concealment of allocation resulted in larger estimates of treatment effects, caused predominantly by a poorer prognosis in nonrandomly selected control groups than in randomly selected control groups.30 However, studies comparing randomized and nonrandomized clinical trials of the same interventions have reached disparate and controversial results.30–32 Some found that observational studies reported stronger treatment effects than RCT.30 Others found that the results of well-designed observational studies (with either a cohort or a case-control design) do not systematically overestimate the magnitude of the effects of treatment compared with RCT on the same topic.31,32 Examining the details of the controversy leads to the following limited conclusions. Trials using historical controls do yield larger estimates of treatment effects than do RCT. Large, inclusive, fully blinded RCT are likely to provide the best possible evidence about effectiveness.10,33,34






Although personal experience is an invaluable part of becoming a competent physician, the pitfalls of relying too heavily on personal experience have been widely documented.3,35,36 Nisbett and Ross extensively reviewed people’s ability to draw inferences from personal experience and described several of these pitfalls.37 These include the following:







  • Overemphasis on vivid anecdotal occurrences and underemphasis on significant statistically strong evidence
  • Bias in recognizing, remembering, and recalling evidence that supports preexisting knowledge structures (e.g., ideas about disease etiology and pathogenesis) and parallel failure to recognize, remember, and recall evidence that is more valid
  • Failure to accurately characterize population data because of ignorance of statistical principles, including sample size, sample selection bias, and regression to the mean (see the simulation sketch after this list)
  • Failure to distinguish statistical association from causality
  • Persistence of beliefs in spite of overwhelming contrary evidence
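
The regression-to-the-mean pitfall mentioned in the list above can be demonstrated with a short, entirely hypothetical simulation: patients "enrolled" because of an unusually severe baseline measurement appear to improve at follow-up even though no treatment is given, simply because part of the extreme baseline score was random fluctuation.

```python
# Hypothetical simulation of regression to the mean: patients selected for an
# unusually high baseline severity score tend to score lower at follow-up with
# no treatment at all.
import random

random.seed(0)
N = 10_000
true_severity = [random.gauss(50, 10) for _ in range(N)]        # stable disease level
baseline  = [t + random.gauss(0, 10) for t in true_severity]    # measurement 1 = truth + noise
follow_up = [t + random.gauss(0, 10) for t in true_severity]    # measurement 2, untreated

# "Enroll" only patients whose baseline score exceeds 70 (severe on that day).
enrolled = [(b, f) for b, f in zip(baseline, follow_up) if b > 70]
mean_baseline  = sum(b for b, _ in enrolled) / len(enrolled)
mean_follow_up = sum(f for _, f in enrolled) / len(enrolled)

print(f"Enrolled {len(enrolled)} patients")
print(f"Mean baseline score:  {mean_baseline:.1f}")
print(f"Mean follow-up score: {mean_follow_up:.1f}  (apparent 'improvement' with no treatment)")
```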






 Relying on Personal Experience





 Nisbett and Ross provide examples from controlled clinical research. Simple clinical examples abound. Physicians may remember patients whose condition improved, often assume that patients who did not return for follow-up got better, and conveniently forget the patients whose condition did not improve. A patient treated with a given medication may develop a severe, life-threatening reaction. On the basis of this single undesirable experience, the physician may avoid using that medication for many future patients, although on average it may be more efficacious and less toxic than the alternative treatments that the physician chooses. Few physicians keep adequate, easily retrievable records to codify results of treatments with a particular agent or treatments of a particular disease, and even fewer actually carry out analyses. Few physicians make provisions for tracking those patients who are lost to follow-up. Therefore, statements made about a physician’s “clinical experience” may be biased. Finally, for many conditions, a single physician sees far too few patients with the given disorder to allow reasonably firm conclusions to be drawn about the response to treatments. For example, suppose that a physician has treated 20 patients with lichen planus using tretinoin and found that 12 (60%) had an excellent response. The 95% confidence interval for this response rate (i.e., the true response rate for this treatment in the larger population from which this physician’s sample was obtained) ranges from 36% to 81%. Thus, the true response rate might well be substantially less (or more) than the physician concludes from personal experience.
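
The 36% to 81% range quoted above is what an exact (Clopper-Pearson) 95% binomial confidence interval gives for 12 responders among 20 patients. A short calculation, assuming SciPy is available, reproduces it.

```python
# Exact (Clopper-Pearson) 95% confidence interval for an observed response
# rate of 12/20, reproducing the 36%-81% range quoted in the text.
from scipy.stats import beta

responders, n, alpha = 12, 20, 0.05

lower = beta.ppf(alpha / 2, responders, n - responders + 1)
upper = beta.ppf(1 - alpha / 2, responders + 1, n - responders)

print(f"Observed response rate: {responders / n:.0%}")
print(f"Exact 95% CI: {lower:.0%} to {upper:.0%}")   # roughly 36% to 81%
```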






 Other Sources of Evidence





 Expert opinion can be valuable, particularly when the disorder is a rare condition in which the expert has the most experience or when other forms of evidence are not available. However, several studies have demonstrated that expert opinion often lags significantly behind conclusive evidence.3 Experts tend to rely on bench research, pathophysiology, and treatments deduced logically from pathophysiology, and they are subject to the same pitfalls noted above for reliance on personal experience. Experts should be aware of the quality of the evidence that exists.






 It is widely believed that clinical decisions can be made on the basis of logic and an understanding of the etiology and pathophysiology of disease.7,8 This paradigm is problematic because the accepted hypothesis regarding the etiology and pathogenesis of disease changes over time. Therefore, the logically deduced treatments change over time. For example, in the last 20 years, hypotheses about the cause of psoriasis have shifted from a disorder of keratinocyte proliferation and homeostasis, to abnormal signaling of cyclic adenosine monophosphate, to aberrant arachidonic acid metabolism, to aberrant vitamin D metabolism, to the current favorite, T-cell-mediated autoimmune disease. Each of these hypotheses spawned logically deduced treatments. The efficacy of many of these treatments has been substantiated by rigorous controlled clinical trials, but others are used even in the absence of systematically collected observations. Therefore, we have many options for treating patients with severe psoriasis (e.g., ultraviolet B radiation, narrow-band ultraviolet B radiation, Goeckerman treatment, psoralen plus ultraviolet A radiation, methotrexate, cyclosporin, and the biologics) and mild to moderate psoriasis (e.g., anthralin, topical corticosteroids, calcipotriene, and tazarotene) (see Chapter 18). However, we lack a clear sense of which is best, in what order they should be used, and in what combinations.8,38 Treatments based on logical deduction from pathophysiology may have unexpected consequences. For example, it was thought that thalidomide might be a useful treatment for toxic epidermal necrolysis based on the observation that thalidomide showed anti-tumor necrosis factor-α properties. Yet an RCT of thalidomide was stopped early because it was found that mortality was significantly higher in the thalidomide group than in the placebo group.39






 Textbooks can be valuable sources of evidence, particularly for rare conditions and for conditions for which the evidence does not change rapidly over time. However, textbooks have several well-documented limitations. They tend to reflect the biases and shortcomings of the experts who write them. Because of the way they are written, produced, and distributed, most are approximately 2 years out of date at the time of publication. Most textbook chapters are narrative reviews that do not consider the quality of the evidence reported.3,11






Finding the Best Evidence





The ability to find the best evidence to answer clinical questions is crucial for the practice of EBM. Finding evidence requires access to electronic search tools, searching skills, and availability of relevant data. Evidence about therapy is the easiest to find. The most useful sources for locating the best evidence about treatment include the following:







  • The Cochrane Library
  • The MEDLINE (Medical Literature Analysis and Retrieval System OnLine) and EMBASE (Excerpta Medica Database) databases
  • Primary journals
  • Secondary journals
  • Evidence-based dermatology and EBM books
  • The National Guideline Clearinghouse (http://www.guideline.gov/)
  • The National Institute for Health and Clinical Excellence (http://www.nice.org.uk)






The Cochrane Library contains the Cochrane Database of Systematic Reviews, the Database of Abstracts of Reviews of Effectiveness, the Cochrane Central Register of Controlled Trials, and the Health Technology Assessment Database, among other databases. Volunteers write the systematic reviews in the Cochrane Library according to strict guidelines developed by the Cochrane Collaboration. Issue 1, 2010, of the Cochrane Library contained 6,153 completed systematic reviews. The number of reviews of dermatologic topics is steadily increasing.











 The Database of Abstracts of Reviews of Effectiveness contains abstracts of systematic reviews published in the medical literature. It provides abstracts and bibliographic details on 12,594 published systematic reviews. The Database of Abstracts of Reviews of Effectiveness is the only database to contain abstracts of systematic reviews that have been assessed for quality. Each abstract includes a summary of the review together with a critical commentary about the overall quality. The Health Technology Assessment Database consists of completed and ongoing health technology assessments (studies of the medical, social, ethical, and economic implications of health care interventions) from around the world. The aim of the database is to improve the quality and cost-effectiveness of health care.






 The Cochrane Library is the best source for evidence about treatment. It can be easily searched using simple Boolean combinations of search terms as well as more sophisticated search strategies. The Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effectiveness, Central Register of Controlled Trials, and Health Technology Assessment Database can be searched simultaneously.






 The Cochrane Library is available on CD-ROM by personal or institutional subscription and on the World Wide Web from Wiley InterScience. Subscribers to the Cochrane Library are provided with quarterly updates. The Cochrane Library is offered free of charge in many countries by national provision and by many medical schools in the United States. It should be available at your medical library.






 The second best method for finding evidence about treatment and the most useful source for finding most other types of best evidence in dermatology is searching the MEDLINE database by computer.40,41 MEDLINE is the National Library of Medicine’s bibliographic database covering the fields of medicine, nursing, dentistry, veterinary medicine, the health care system, and the preclinical sciences. The MEDLINE database contains bibliographic citations and author abstracts from approximately 5,400 current biomedical journals published in the United States and 80 foreign countries. The database contains approximately 18 million records dating back to 1947.42






 MEDLINE searches have inherent limitations that make their reliability less than ideal.41 Specific search strategies, or filters, have been developed to aid in finding relevant references and excluding irrelevant references to locate the best evidence about diagnosis, therapy, prognosis, harm, and prevention.43






 These filters have been incorporated into the PubMed Clinical Queries search engine of the National Library of Medicine.44






 PubMed Clinical Queries is the preferred method for searching the MEDLINE database to locate the best clinically relevant evidence. It can be used freely by anyone with Internet access.
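
For readers who prefer to search programmatically, PubMed can also be queried through the National Library of Medicine’s E-utilities interface. The sketch below combines a disease term with a simplified therapy filter; the filter string is an illustrative approximation, not the validated Clinical Queries filter, and the search term used is only an example.

```python
# Illustrative sketch: searching PubMed through the NLM E-utilities API with a
# simplified therapy filter. The filter string is an approximation for
# demonstration; PubMed Clinical Queries applies its own validated filters.
import json
import urllib.parse
import urllib.request

disease = '"lichen planus"[Title/Abstract]'
therapy_filter = ('(randomized controlled trial[Publication Type] '
                  'OR randomized[Title/Abstract])')
query = f"{disease} AND {therapy_filter}"

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": query,
    "retmode": "json",
    "retmax": 10,
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"

with urllib.request.urlopen(url) as response:
    result = json.load(response)["esearchresult"]

print(f"{result['count']} matching citations; first PMIDs: {result['idlist']}")
```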






 EMBASE is Excerpta Medica's database covering pharmacology and biomedical specialties.11 EMBASE contains 20 million indexed records from 7,000 active, peer-reviewed journals, including 2,000 titles not covered in MEDLINE. EMBASE has better coverage of European and non-English language sources and may be more up to date.11 The overlap in journals covered by MEDLINE and EMBASE is approximately 34% (range, 10% to 75% depending on the subject).45,46


