To determine a test's accuracy, researchers compare its results with those of a "gold standard" -- usually a more definitive test, but one that is too costly, time-consuming or risky for routine use. However, shortcomings in study design and methodology are known to affect estimates of accuracy.
In this issue, Anne Rutjes and colleagues report their analysis of 487 primary studies of diagnostic test evaluations, examining whether flaws in design and conduct biased the reported accuracy. Surprisingly, only one of the studies was free of design deficiencies. Studies that compared patients with severe disease against healthy control subjects were the most likely to overestimate accuracy: such a test performs well among patients with obvious, severe disease, but it is much less accurate when used to detect mild or early disease.
In an accompanying commentary, Toshi Furukawa and Gordon Guyatt underscore the importance of the study by Rutjes and colleagues. They describe diagnosis and test performance as proceeding in a stepwise fashion, starting from a set of symptoms and signs. Knowing a test's accuracy in a real clinical setting is critical, as is knowing the accuracy of the gold standard itself.
p. 469 Evidence of bias and variation in diagnostic accuracy studies
-- A.W.S. Rutjes et al
http://www.cmaj.ca/pressrelease/pg469.pdf
p. 481 Sources of bias in diagnostic accuracy studies and the diagnostic process
-- T. A. Furukawa, G.H. Guyatt
http://www.cmaj.ca/pressrelease/pg481.pdf
Journal: Canadian Medical Association Journal