Answering Family Physicians’ Clinical Questions Using Electronic Medical Databases
We also evaluated combinations of databases that were available at no cost. The pair of no-cost databases that answered the largest proportion of questions (75%) was DynaMed and American Family Physician. The greatest proportion of clinical questions that could be answered using freely available sources was 80%, which required the use of 3 databases (DynaMed, MDChoice.com, and American Family Physician).
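The study does not describe how these combination figures were tabulated, but the underlying computation is a small set-cover exercise: for each candidate combination, count the questions answered by at least one member. The Python sketch below shows the idea. The per-database answer sets and the 20-question total are entirely hypothetical placeholders, contrived only so that the reported 75% and 80% figures fall out; they are not the study's data.

```python
from itertools import combinations

# Hypothetical data: for each no-cost database, the set of question IDs
# (out of an assumed 20 test questions) it answered adequately.
# These sets are illustrative placeholders, not the study's results.
answered = {
    "DynaMed": {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12},
    "American Family Physician": {1, 2, 3, 4, 13, 14, 15},
    "MDChoice.com": {5, 6, 16},
}
TOTAL_QUESTIONS = 20

def best_combination(k):
    """Return the k-database combination covering the most questions."""
    combo = max(
        combinations(answered, k),
        key=lambda dbs: len(set().union(*(answered[db] for db in dbs))),
    )
    covered = set().union(*(answered[db] for db in combo))
    return combo, len(covered) / TOTAL_QUESTIONS

for k in (2, 3):
    combo, fraction = best_combination(k)
    print(f"best {k}-database combination: {combo} -> {fraction:.0%}")
```

With these placeholder sets, the best pair (DynaMed plus American Family Physician) covers 15 of 20 questions (75%), and adding MDChoice.com raises coverage to 16 of 20 (80%), mirroring the proportions reported above.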
Discussion
Our study suggests that individual databases can answer a considerable proportion of family physicians' clinical questions. Combinations of currently available databases can answer 75% or more. The searches in this study relied on the combined efforts of 2 experienced physician searchers. These results may not be replicable in the practice setting but do provide an objective, best-case assessment of the content of these databases.
The time required to obtain answers, while much less than that needed to search for original articles, is still longer than the 2-minute average time spent by family physicians in the study by Ely and colleagues.1 Our time estimates are not precise because time was not the primary focus of our study: searches were timed in 1-minute intervals, so a search that took 10 seconds was recorded as 1 minute. Even so, the fact that median times to obtain adequate answers exceeded 2 minutes suggests that these databases may require more time than most physicians will take to pursue answers during patient care.
This is the first study to systematically evaluate how many clinical questions can be answered by electronic medical databases. The strengths of this study include the use of a standard set of common questions asked by family physicians, testing by 2 experienced family physician searchers, and a systematic, replicable approach to the evaluation. The only similar study we identified was one in which Graber and coworkers9 used 10 clinical questions to test a commercial site, 2 medical meta-lists, 4 general search engines, and 9 medicine-specific search engines, measuring the efficiency of answering clinical questions on the Web. Different approaches answered from 0 to 6 of the 10 questions, but that study looked primarily at sites not generally designed for use in clinical practice.
Limitations
Our study was limited by the relatively small number of questions, which produced wide confidence intervals around the reported proportions. Some answers were present in the databases but were not found despite the use of 2 searchers; for example, a database manager identified 2 answers that our searches missed but that would have been considered adequate.
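To illustrate why a small question set yields wide intervals, here is a minimal sketch using the standard Wald approximation for a proportion. The sample size of 20 and the choice of interval method are illustrative assumptions for this sketch, not values restated from the study.

```python
from math import sqrt

def wald_ci(p_hat, n, z=1.96):
    """Approximate 95% Wald confidence interval for a proportion."""
    half_width = z * sqrt(p_hat * (1 - p_hat) / n)
    return max(0.0, p_hat - half_width), min(1.0, p_hat + half_width)

# With an assumed 20 questions and an observed 75% answer rate, the
# interval runs from roughly 56% to 94% -- about 38 percentage points.
print(wald_ci(0.75, n=20))

# Ten times as many questions shrinks the same point estimate to about
# 69%-81%, showing how directly the question count drives precision.
print(wald_ci(0.75, n=200))
```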
We accepted answers as adequate if, in our judgment, they offered a practical course of action. We did not attempt to determine whether the individual asking the question believed the answer was adequate, nor did we attempt to validate the accuracy or currency of answers against independent standards. Many of the answers were based on sources that were several years old, and few were based on explicit evidence-based criteria. Although we determined the adequacy of answers for clinical practice through formal mechanisms, an in vivo study, in which the clinicians asking the questions judged the adequacy of their findings during patient care, would provide a more accurate assessment.
Our study presents a static evaluation of a dynamic field. Over time, answers may be lost through lack of maintenance of resource links or gained through the addition of new materials. Because our questions were gathered several years ago, they may not accurately reflect the ability of these databases to answer current questions, which are more likely to involve new tests and treatments.
Many of the databases were designed for purposes other than meeting clinical information needs at the point of care. Performance in this study does not reflect the capacity of these databases to address their stated purposes. For example, the Translating Research Into Practice (TRIP) database is an excellent resource for searches of a large collection of evidence-based resources. These resources are generally limited to summaries of studies with the highest methodologic quality. The TRIP database did not perform well in our study partly because most of our test questions (consistent with questions in clinical practice) cannot currently be answered using studies of the highest methodologic quality. Another example is Medical Matrix, which provides a search engine and annotated summaries for exploring the entire medical Internet and not just clinical reference information.
We did not study the costs of using the databases we evaluated, and these costs may have changed since our study was conducted. Most of the databases we included were free to use both at the time of the study and at the time of this report. The 3 collections of textbooks required access fees. STAT!Ref, which scored highest in our study, did so because we used the complete collection available to us through our institutional library. At the time of our study, this collection would have cost an individual $2189 annually; a starter library was available for $199 annually but would have answered only 40% of the questions.