Saturday, December 17, 2005

Rats and cancer, Part II

We return to the problem of how to identify which of the estimated 1,000 to 10,000 chemicals now in the everyday stream of commerce are carcinogens. They are part of a much bigger pile of 100,000 chemicals, most of which are not carcinogens.

For a long time industry has been pushing the (seemingly commonsensical) notion that we should use epidemiological methods to do this. As a cancer epidemiologist, I have a self-interest in this being the gold standard for carcinogen identification, but alas, I also know it would be poor public policy. Epidemiologic methods are extremely insensitive to any but the most powerful health effects.

A brief digression on the nature of epidemiology. There are a variety of textbook definitions, but for our purposes it is sufficient to look at what epidemiologists do in practice. If we want to determine if a chemical causes cancer in humans, we obviously can't do an experiment, i.e., purposely expose a group of people to the chemical and compare their cancer rate to a similar group of unexposed people. What we can do, however, is look around for some natural circumstance almost like an experiment, say, a factory where workers are being exposed to benzene. We can then compare their cancer experience with the cancer experience of the general population. Observing such "natural experiments," arranging the observations in ways that provide the most information (often that means using statistical methods), and then interpreting the data is what we call (observational) epidemiology.

The problem, of course, is that Nature isn't a very tidy research assistant, so there are always loose ends sticking out: the groups may not be exactly alike, differing in ways that matter; the data might be inherently uncertain or difficult to obtain; the populations available for study might be small; etc., etc. In this sea of noise, distinguishing important risks, such as an increase of 30% or 50% or even a doubling of "background" cancer risk, becomes difficult. Only the strongest and most powerful effects are detectable with epidemiologic methods. Along these lines my colleague David Ozonoff once gave a now much quoted (facetious) definition of a public health catastrophe: it is a health effect so powerful even an epidemiologic study can detect it.
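To get a feel for just how insensitive these studies are, here is a rough back-of-the-envelope sample-size sketch using the standard normal-approximation formula for comparing two proportions. The 2% baseline risk over the study period, the function name, and all the numbers are illustrative assumptions, not figures from this post:

```python
from statistics import NormalDist

def n_per_group(p0, rr, alpha=0.05, power=0.8):
    """Approximate subjects needed per group to detect a relative risk
    `rr` over baseline risk `p0` (two-sided test, normal approximation).
    All defaults are conventional but illustrative."""
    p1 = p0 * rr                         # risk in the exposed group
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p0 + p1) / 2                # pooled risk under the null
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2
    return num / (p0 - p1) ** 2

# Hypothetical 2% baseline cancer risk over the study period:
for rr in (1.3, 1.5, 2.0):
    print(f"RR {rr}: ~{round(n_per_group(0.02, rr))} subjects per group")
```

With these assumptions, even a doubling of risk calls for over a thousand subjects per group, and a 30% increase pushes the requirement toward ten thousand per group — cohorts far larger than a typical exposed workforce, which is the point of Ozonoff's quip.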

So using epidemiology to detect environmental or occupational cancer is problematic on these grounds alone. But there are further difficulties. Cancer takes time to develop. Times from first exposure to a chemical carcinogen to the appearance of a clinical cancer (the latency period) are usually on the order of decades. What this means is that if we used epidemiologic methods to determine if an environmental chemical were a carcinogen and immediately upon making that determination we could wave a magic wand and make the chemical disappear from the environment (and of course putting the toothpaste back in the tube is usually impossible), we would still continue to experience cancers from this chemical for the entire length of the latency period, i.e., for many decades after removing the chemical entirely from the environment. Those cancers are the ones that arise from the decades of exposure that occurred before you waved your wand but which had yet to develop at the time you did so. They were in the pipeline, so to speak.

Thus using epidemiology as a means to identify carcinogens is seriously flawed in two respects: it won't "see" most of the carcinogens because it is so insensitive, and if it did see one because it was so powerful, it would be too late to stop decades worth of cancers from that chemical. This is not a very good policy option from the public health perspective (I'll let the economists argue it in their own terms).

So we need another way to identify chemical carcinogens, which is where the rodent bioassay, and Professor Rappaport's question, come in. We will discuss this in Part III.

Part I here. Part III here.