Science Studies 103: Science, Truth, Public Policy — What the CDC should know but doesn’t
Posted by Henry Bauer on 2008/09/04
For decades, politicians have increasingly taken expert scientific advice as a guide to public policy. That’s wonderful in principle, but not necessarily in practice, because outsiders don’t understand that the experts’ advice is based on scientific knowledge, which is always tentative and temporary; scientific theories have a limited lifetime. It’s a widespread illusion, with seriously debilitating consequences, that “science” is synonymous with “truth”.
Not that anyone would, if asked, admit that they believe that science = truth; but observe how science is commonly talked about. If we want to emphasize that something really is so, we say it’s “scientific”. If we want to denigrate something as being untrue, we call it “unscientific”; or if we want to be nasty, we say it’s “pseudo-scientific”. “Tests have shown that …” somehow doesn’t seem convincing enough, so we say, “Scientific tests have shown …”, and then we need no longer fear any contradiction.
Policy makers ought to know that advice from scientific experts is fallible. Much public policy has to be based on judgments in situations where the facts are not certain, and decisions have to be made about possible benefits balanced by possible drawbacks. Therefore
the prime responsibility of technical experts
whose advice informs politicians
is to make as clear as possible
the uncertainties in what they think they know.
Otherwise, policy is influenced by judgments made unconsciously by the technical experts, who may see the trees fairly well but who are usually ignorant about the forest.
For example, when the Centers for Disease Control and Prevention (CDC) publishes estimates of something, its responsibility is to make plain, indeed to emphasize, the limits of uncertainty in those estimates. This the CDC singularly and repeatedly fails to do; for example, it issued a press release in 2005 announcing that HIV infections had surpassed 1 million “for the first time”, when it had already released estimates of about 1 million throughout the previous twenty years (p. 1 in The Origin, Persistence and Failings of HIV/AIDS Theory).
This post was prompted by the brouhaha following CDC’s recent announcement that its earlier estimate of about 40,000 annual HIV infections had been too low, its new estimate being 56,000. CDC was hardly a shrinking violet with this revision: “The new data is [sic] scheduled for publication in the peer-reviewed Journal of the American Medical Association. The report’s release is meant to coincide with the opening Sunday of the biannual International AIDS Conference in Mexico City, Mexico.”
“Dr. Kevin Fenton, director of the CDC’s National Center for HIV/AIDS, Viral Hepatitis, STD and TB Prevention”, proclaimed that “‘The fact that 56,000 Americans each year are contracting HIV for the first time is a wake-up call for all of us in the U.S.’ . . . [CDC] is now using technology capable of determining when someone was infected. The new method can indicate whether someone has been infected with HIV during the previous five months, rather than relying on statistical models. Diagnosis of HIV can occur years after infection”.
News accounts do not always reflect accurately, of course, what a speaker or a press release says, but in this instance it was evidently something that could easily lead a listener or reader to believe that some “new method” had supplanted “statistical models” — which is entirely untrue. A few media accounts did mention that this new number is simply a revised estimate, not a claim that the rate of HIV infections has been on the increase. What none of the media accounts that I have seen has pointed out is how fraught with assumptions and uncertainties this new estimate is, and how wrong the conclusions of this “new method” are when tested against reported “HIV” numbers from earlier years. Figure A shows what the CDC’s “new method”, combined with its computer-statistical model, “predicts” new HIV infections to have been since before the beginning of the AIDS era (source: Hall et al., “Estimation of HIV incidence in the United States”, JAMA 300 [2008]: 520-529).
I’ve highlighted several clues to how uncertain all this is, though the clues are not hard to recognize. What is not at all uncertain, though, is that the estimates given in this Figure are totally at odds with data-based estimates of HIV infections during at least the first decade of the AIDS era.
From the Y-axis scale, Figure A yields the numbers in Table A, column I. Column II lists the AIDS deaths during the relevant periods (from “Health, United States” reports). Column III is the net estimated prevalence, namely, the cumulation of annual new infections in Column I minus the cumulative deaths. Column IV lists earlier estimates from official reports and peer-reviewed articles. The CDC’s “new method” combined with its computer-statistical model constitutes a drastic re-writing of history. And just as the Soviet Union rewrote history all the time without mentioning the old version — let alone explaining what was wrong with it — the CDC fails to mention the numbers it and peer-reviewed articles had propagated during the 1980s and 1990s.
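The arithmetic behind Column III is simple enough to sketch. Here is a minimal illustration in Python; the period-by-period figures are invented placeholders for illustration only, not the actual Table A values:

```python
# Net prevalence at the end of each period = running total of new
# infections (Column I) minus AIDS deaths (Column II).
# All figures below are hypothetical, chosen only to show the arithmetic.

annual_new_infections = [20_000, 130_000, 160_000, 120_000]  # per period
aids_deaths           = [ 1_000,  40_000, 100_000, 150_000]  # per period

net_prevalence = []
running = 0
for infections, deaths in zip(annual_new_infections, aids_deaths):
    running += infections - deaths   # cumulate infections, subtract deaths
    net_prevalence.append(running)

print(net_prevalence)
```

The point of the exercise is that any claimed incidence curve (Column I) mechanically implies a prevalence history (Column III), which can then be checked against independently published prevalence estimates (Column IV).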
Those earlier estimates in Column IV had been made in a quite straightforward manner. The actually measured rate of testing “HIV-positive” in various groups was multiplied by the size of each group. Military cohorts, blood donors, and Job Corps members were routinely tested. Sentinel surveys had been carried out in hospitals and a range of clinics, and special attention had been paid to sampling homosexual men and injecting drug users. The only uncertainty was in estimating the sizes of the various groups, but good information was available about most of those, and moreover there was a National Household Survey that provided a good check on what is typical for the general population overall. Persistently over two decades, the result was an approximately constant prevalence of something like 1 million.
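That straightforward calculation can be sketched as follows. The group names, measured rates, and group sizes here are invented placeholders, not the actual survey figures:

```python
# Prevalence estimate = sum over groups of (measured "HIV-positive" rate
# in that group) x (estimated size of that group).
# Every number below is hypothetical, for illustration of the method only.

groups = {
    # group: (measured positive rate, estimated group size)
    "general population": (0.004, 200_000_000),
    "high-risk group A":  (0.10,    1_500_000),
    "high-risk group B":  (0.05,    1_000_000),
}

estimated_prevalence = sum(rate * size for rate, size in groups.values())
print(f"{estimated_prevalence:,.0f}")
```

As the text notes, the only real uncertainty in such an estimate lies in the group sizes, since the positive rates were actually measured in routinely tested cohorts.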
That fact is incompatible, however, with HIV/AIDS theory, which insists that “HIV” somehow entered the USA in the late 1970s. Naturally, the CDC’s model incorporates that assumption even though it remains unproven and is incompatible with surveillance of HIV infections since 1985. Now CDC continues its attempt to shape public policy with numbers derived from a model whose validity is not merely uncertain, it’s demonstrably invalid.
That seems incredible, but Science Studies once again offers insight. The modelers know they are “just” modeling, trying to establish the best possible algorithms to describe what’s happening. In their publication, they scrupulously set out all the assumptions made in this latest set of calculations; indeed, almost the whole text of the article describes one assumption after another. The failure to discuss how incompatible the model is with data from 1985 through the late 1990s is, plausibly, because the authors are not even aware of those earlier publications — they’re working on this particular model, that’s all. The blame, if any, should be directed at the administrators and supervisors, whose responsibility it is to know something about the forest, and most particularly about the CDC’s responsibility to the wider society: not to arouse panic without good cause, for example; to ensure that press releases are so clear to lay people that the media will not misrepresent them. But CDC big-shots, like bureaucrats in other agencies, suffer inevitable conflicts of interest: they want to attract the largest possible funding and to gain the highest possible public appreciation, esteem, prestige. That’s why, in the early days of AIDS, the CDC had hired a PR firm to convince everybody that AIDS was a threat to every last person, even as they knew that it wasn’t (Bennett & Sharpe, “AIDS fight is skewed by federal campaign exaggerating risks”, Wall Street Journal, 1 May 1996, A1, 6).
At any rate, this latest misleading of the public, seemingly not unintentional, is far from unprecedented. The crimes and misdemeanors of CDC models are legion; see, for example, “Numbers”, “Getting the desired numbers”, and “Reporting and guesstimating”, respectively p. 135 ff., p. 203 ff., and p. 220 ff. in The Origin, Persistence and Failings of HIV/AIDS Theory. Consider the instance in Table B of CDC-model output that was wildly off the mark. The modelers had seen fit to publish this, as though it were somehow worthy of attention, even though the calculated male-to-female ratios for “HIV-positive” are completely unlike anything encountered in actual HIV tests in any group for which such data had been reported for the previous dozen years.
Not that WHO or UNAIDS models are any better, as James Chin — who designed and used some of them — has pointed out cogently (“The AIDS Pandemic”). Jonathan Mann was one of the first international HIV/AIDS gurus, responsible for authoritative edited collections like “AIDS in the World II: Global Dimensions, Social Roots, and Responses” (ed. Jonathan A. Mann & Daniel J. M. Tarantola, Oxford University Press, 1996). In that volume, the cumulative number of HIV infections in the USA is confidently reported as in Table C below. On the one hand, I give Mann et al. high marks for restricting themselves to 4 significant figures and avoiding the now-standard HIV/AIDS-researchers’ penchant for giving all computer outputs to the nearest person. On the other hand, their estimates are in total disagreement with those based on actual data obtained during the relevant years.
The CDC’s 2008 model is a bit closer than WHO’s to the data-based estimates, but it’s still wildly off the mark, at least up to 1990.
The model is clearly invalid
and the numbers derived from it are WRONG
This post is already long enough. I’ve written more about science not being truth and related matters in Fatal Attractions: The Troubles with Science, New York: Paraview Press, 2001. In another post I’ll write more specifically about this latest CDC publication, the array of unvalidated underlying assumptions as well as hints of troubling conflicts of interest.