Bad Science on the Highway

Carcentrism Skews "Safety" Research

D. A. Clarke


Cook and Sheikh have shown a decline in the numbers of head injuries in cyclists in the period from 1991 to 1995 and attribute it to the increased popularity of helmets. I have to declare an interest, since I enjoy cycling and would never wear a helmet; but it does all the same appear to me that if this kind of evidence is all that the helmeteers can adduce then I needn't worry. Did no other factor change? What about the fashion for cycling on pavements, the general slowing of traffic in towns, the increase in the popularity of reflective clothing, and the change in the shape of bicycles?

It is a matter of common observation that most cyclists these days no longer ride bottom upwards. They have shorter hair, too. It may sound callous to say so, but the only solid conclusion that can be drawn from their paper is that the overall risk is small. In a period during which about 20,000 people were killed in road accidents only 120, or 20 a year, died from a head injury while cycling. Moreover no one pretends that all, or even most, of these lives would have been saved by wearing a helmet.

-- Steven Butterworth, Letters to the BMJ in response to Cook and Sheikh "Trends in serious head injuries among cyclists in England: analysis of routinely collected data"

4 years ago I was cycling to work and was hit by a car coming out of a side road. I of course leapt off the road and inspected my mount for damage. I then noticed blood pouring onto the road so I went to Monklands hospital to get stitched up.

In the admissions area I was asked if I was wearing a helmet. It so happened that I was, but only because it was February and polystyrene is decent insulation.

I told them that yes, I had been wearing one but as I had landed on my chin it wasn't important. I told them this several times. Nonetheless, if stats were kept of this admission it will be recorded as one where a cyclist was hit by a car and survived while wearing a helmet.

-- Aedan McGhie, Letters to the BMJ (same issue)

The government is extremely fond of amassing great quantities of statistics. These are raised to the Nth degree, the cube roots are extracted, and the results are arranged into elaborate and impressive displays. What must be kept ever in mind, however, is that in every case, the figures are first put down by a village watchman, and he puts down anything he damn well pleases.

-- Sir Josiah Stamp, quondam head of the Inland Revenue Dept., U.K. (1896-1919)


Authoritative Sources and Official Stories

How do we evaluate the base data set?

When we talk about traffic safety, we usually trust a group of expert functionaries (medical, police, or highway officialdom) to tell us what the risks are and how best to avoid them. They in turn confer among themselves and present to the public the reduced results of a massive data-gathering activity. The individual "accident reports" written by millions of traffic police and beat cops, along with statistics from emergency rooms and hospitals, are fed upstream into data repositories from which conclusions are drawn. But is this entire process objective?

There are two kinds of studies relevant to the helmet efficacy debate. One is the sample/case study, in which (for example) admissions to a particular hospital facility, or to a group of hospitals, are tabulated by injury type, and conclusions are drawn from this (fairly small) sample, which is presented as typical. Another draws on regional or national fatality/injury statistics over time, compiled from police reports.

There are two distinct propositions which these studies seek to prove: (in the first case) that cyclists wearing helmets incur fewer fatal or severe head injuries than non-helmeted cyclists, or (in the second case) that as helmet uptake increased in the target area (or after Date X on which a mandatory helmet law was enacted), cyclist fatalities or head injuries have declined.

There are several axes along which such studies can be contaminated by bad methodology or sloppy thinking. Most of the pro-helmet canon (the essential papers on which helmet promoters rely when making their case) has been criticized for one or more of the following weaknesses:

1. attributing injury or fatality trends to helmet use alone, when many other traffic and cycling variables were changing over the same period;
2. overinterpretation of undersized, non-random samples;
3. failure to group injuries rationally by type and severity;
4. acceptance of poor-quality primary data as authoritative.

In the letters above, S. Butterworth informally challenges the first of these weaknesses by asking why, when so many other traffic and cycling variables were changing, injury/fatality trends should be attributed to helmetization alone. For a more formal challenge of this type, see the paper 'Post Hoc Ergo Propter Hoc' by myself and Riley Geary. This kind of generous overinterpretation has also been challenged by motorcyclists' associations. Officials often claim that a decline in motorcyclist fatalities per million population follows, and therefore proves the benefit of, a mandatory helmet law; motorcyclists retort that such claims ignore the documented decline in motorcycle registrations in states where such laws are enacted.

The literature of "cycling safety" also suffers from the overinterpretation of undersized samples (the second weakness in the list above). Many papers have been based on very small case studies. Perhaps the most famous (and controversial) of these is the 1989 paper by Thompson, Rivara, and Thompson. Despite considerable peer criticism, this paper caught the attention of authorities and is still the reference on which much public policy and received opinion is based. The authors originally concluded that the use of a helmet would reduce the risk of head injury and brain injury by over 80%. Critics of the paper pointed out that the sample was very small (acknowledged by the authors in the paper itself), that it was non-randomised, that the authors ignored significant class and race differences in the sample population, etc. See John Franklin's summary of these objections; he also offers a limited survey of the literature -- pro and con -- on cycle helmets.
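
For readers who want to see the sort of arithmetic that lies behind a figure like "over 80%", here is a minimal sketch in Python. The counts are invented for illustration (they are not the Thompson, Rivara and Thompson data), and the calculation shown -- a crude odds ratio with a normal-approximation confidence interval -- is only the simplest version of what case-control studies actually report; but it makes the point that a small, non-random sample yields a dramatic point estimate wrapped in a very wide interval.

    # Illustrative only: hypothetical counts, not the data from any published study.
    # A case-control helmet study reduces to a 2x2 table:
    #
    #                 head injury    no head injury
    #   helmeted           a               b
    #   unhelmeted         c               d
    #
    # The odds ratio a*d/(b*c) is the kind of number behind an "X% reduction"
    # claim; its approximate 95% confidence interval comes from the standard
    # error of the log odds ratio.

    import math

    def odds_ratio_ci(a, b, c, d):
        """Odds ratio and approximate 95% CI for a 2x2 case-control table."""
        or_ = (a * d) / (b * c)
        se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo = math.exp(math.log(or_) - 1.96 * se_log)
        hi = math.exp(math.log(or_) + 1.96 * se_log)
        return or_, lo, hi

    # A small hypothetical sample: OR ~ 0.16 (which would be quoted as an
    # "84% reduction"), but the interval runs from roughly 0.05 to 0.47.
    print(odds_ratio_ci(a=4, b=60, c=30, d=70))

    # The same proportions in a sample ten times larger: same odds ratio,
    # much tighter interval.
    print(odds_ratio_ci(a=40, b=600, c=300, d=700))

The point estimate does not change between the two runs; only the sample size does, and with it the width of the interval and the weight the estimate can honestly bear.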

A failure to group injuries rationally by type and severity also crops up from time to time. This "apples and oranges" problem is sometimes a side effect of small-sample studies. The original draft of the Thompson, Rivara and Thompson paper interpreted its small sample so optimistically as to support the assertion that helmet use reduces leg and knee injury. Other researchers have defined "head injury" in such a way as to exclude jaw and facial injuries (against which a light bike helmet obviously offers no protection); this exclusion artificially amplifies the measured effectiveness of the helmet in "preventing head injury".

Whatever wrangles we may engage in over specific injury locations and severities or the exaggeration of helmet effectiveness, there is a far more insidious and important consequence of this failure to analyze injuries appropriately by type and severity. It contributes to a grossly exaggerated perception of the danger of cycling as an activity.

Much of the pro-helmet literature -- which shapes public policy and public opinion -- makes a fundamental assumption that cycling is very dangerous. We find this conflation of injury types in the otherwise excellent survey-based research done by Aultman-Hall et al. in Canada; here it leads to the assertion that cycling is as much as 60 times more dangerous than driving. The authors conclude that there is a "crisis" in cycling safety.

This conclusion is reached by counting the number of injury-producing falls or collisions per mile experienced by cyclists versus car drivers. Unfortunately, Aultman-Hall's papers do not distinguish between the most trivial and the most serious injuries. Experienced cyclists know that the average cycling mishap results in trivial injuries which usually require no medical intervention: the average cyclist gets up after a tumble and rides away. Cycling is safer, in terms of injury exposure per hour, than almost all other leisure or sporting activities. But in public discourse in the US, cycling almost never appears except as a "safety issue," because of the general misperception that it is a high-risk activity.

While it is true that the risk of "falling down" or "falling off" is almost nonexistent for drivers, this does not necessarily make driving inherently safer than cycling or walking. Papers which conflate all cycling accidents and injuries, making no distinction between bruised knees and serious brain injuries, feed a misperception of cycling as very risky. Papers which restrict themselves to the analysis of immediate accident injury ignore lifetime health effects -- pollution, chronic back problems, decline in fitness -- associated with habitual driving, and also lifetime health benefits associated with cycling.
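
As a sketch of how much this conflation can matter, consider the following Python fragment. Every number in it is invented for illustration (none is taken from Aultman-Hall's surveys or any other dataset); the point is only that the same underlying experience yields a frightening ratio when scrapes and bruises are counted alongside serious injuries, and a much tamer one when only injuries requiring medical treatment are compared.

    # Hypothetical illustration (all numbers invented): how a "cycling is
    # N times more dangerous per mile" figure depends on what counts as
    # an injury.

    cycling = {
        "miles": 1_000_000,
        "trivial_injuries": 480,   # scrapes and bruises, no treatment needed
        "serious_injuries": 20,    # required medical attention
    }
    driving = {
        "miles": 50_000_000,
        "trivial_injuries": 150,
        "serious_injuries": 250,
    }

    def rate_per_million_miles(group, keys):
        return sum(group[k] for k in keys) / group["miles"] * 1_000_000

    every_injury = ("trivial_injuries", "serious_injuries")
    serious_only = ("serious_injuries",)

    for label, keys in (("all injuries", every_injury), ("serious only", serious_only)):
        ratio = rate_per_million_miles(cycling, keys) / rate_per_million_miles(driving, keys)
        print(f"cycling vs driving, {label}: {ratio:.0f}x")

With these made-up numbers the first comparison comes out around sixty-to-one and the second around four-to-one. The underlying experience is identical; only the definition of "injury" has changed.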

A fair amount of criticism and analysis has been directed at the first three of these weaknesses in "cycling safety" research. Not as much effort has been directed to correcting the fourth (the one challenged by Aedan McGhie in the letters above): poor-quality primary data which are accepted as authoritative and subsequently used to define and justify public policy. Despite the protestations of the actual injured cyclist, the village watchman fulfills Sir Josiah's prediction and "puts down anything he damn well pleases."

There is, unfortunately, a good case for questioning the quality of the raw data in official traffic safety databases. Are police reports unbiased? Are statistics collected objectively? Do coding systems permit accurate reporting? Are coding systems consistently followed? Is there an "under-reporting" problem? These questions may affect the outcome of analysis in either direction, so the quality of these data is of the utmost importance. Let's consider two very specific cases known to me personally, in which official bias affects the data gathering and/or reporting activity.


Bias in Data Gathering

Which data are gathered?

Let's say I am a drug manufacturer. My researchers have come up with a drug which we believe may cure stomach cancer. It's now time for field trials. We distribute trial kits to twenty different major hospitals.

The kits contain enough drug doses to treat, say, 25 patients per major hospital centre. They also contain literature describing the treatment regimen, and forms for reporting the results back to our research team at corporate HQ.

The instructions for reporting the test results read: "Promptly inform the research team of all cases in which the cancer growth is arrested or reversed after patient is treated with Adamorphoxanavil."

A drug test regimen along these lines, were it to become public knowledge, would cause a scandal, an ethics investigation, and outrage at the FDA level. Why? Because it's lousy science.

When testing a new drug, we have to measure and report more than just the cases in which it appears to succeed. We have to apply the therapy in a controlled setting: that is, we need a control group of similarly afflicted patients who do not receive the new drug, but instead receive the existing conventional treatment. If it is ethically acceptable (and in the case of stomach cancer it would not be) we would also like to see a comparison of the new drug against a placebo, or no regimen at all. (In real life, animal testing usually supplies this particular dataset.)

We need to report the sample set's progress as it compares to the other (control) test subjects. We need to note how many of the test subjects showed no response to the new treatment; how many of the control group recovered just as well using conventional treatment; and so forth.

For the drug company to ask that only apparent successes be reported, and then to compile those into a final report to the FDA touting the success of their product, would be deeply unethical and scientifically bogus.
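
A minimal simulation in Python (with made-up parameters) shows why success-only reporting is worthless. Here the "drug" has no effect at all: patients in both arms improve spontaneously at the same rate. A protocol that reports only the treated patients who improved still delivers a satisfying stack of "cures" to head office; only the side-by-side comparison with the control group reveals that the drug did nothing.

    # Hypothetical simulation: a drug with no effect whatsoever.
    import random

    random.seed(1)

    SPONTANEOUS_REMISSION = 0.30   # identical in both arms: the drug does nothing
    N = 500                        # patients per arm

    treated = [random.random() < SPONTANEOUS_REMISSION for _ in range(N)]
    control = [random.random() < SPONTANEOUS_REMISSION for _ in range(N)]

    # What the flawed protocol sends upstream: success reports only.
    print("'cured after treatment' reports sent to HQ:", sum(treated))

    # What a controlled comparison reports: remission rates in both arms.
    print("remission rate, treated:", sum(treated) / N)
    print("remission rate, control:", sum(control) / N)

Roughly a hundred and fifty "cures" get reported either way; only the last two lines, compared side by side, tell us whether the drug had anything to do with them.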

Now consider this actual language from a police grant proposal written by the Police Department of the town of Santa Cruz, California, USA, and submitted in January 2001. Among the 19 specific goals to be achieved by the grant using Office of Traffic Safety (statewide) funds, there is one which reads:

16) To notify OTS of all "saved by" events involving the use of bicycle helmets.

There is no requirement to notify OTS of any events in which a cyclist dies or receives severe head injury despite wearing a helmet; or of any events in which a cyclist in a collision does not incur severe head injury or brain-related fatality even though not wearing a helmet. There is no requirement to notify OTS of cyclist fatalities from causes other than head injury. There is no requirement to distinguish between auto-related collisions and bike-only mishaps, nor is there any requirement to report such other factors as bike lighting, sobriety of cyclist, sobriety of driver, etc.

Nor is there any standard for the forensic qualifications of the person making the judgment that the cyclist's life was "saved" by the helmet. The youngest rookie traffic cop or the oldest veteran of the Force might submit such a report; it is a pretty good bet that neither of them is a qualified pathologist or forensic analyst. In the US, the Coroner is not involved in "accidental" traffic deaths and injuries, so it is unlikely that any person with real pathology credentials will examine the evidence.

Returning to our drug test analogy, it is as if the hospital janitor or electrician, or at best a first-year medical student, were allowed to fill in the drug efficacy reports.
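
By way of contrast, here is a hypothetical sketch (in Python, purely illustrative and not any agency's actual form) of the minimum fields an unbiased incident record would need to capture -- both helmeted and unhelmeted riders, both good and bad outcomes, and the contributing factors listed above -- before any "saved by" judgment could be tested against data rather than simply asserted.

    # Hypothetical sketch: the minimum fields a reporting form would need
    # before "saved by a helmet" claims could be tested rather than asserted.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class CyclistIncidentReport:
        helmet_worn: bool                  # record both helmeted and unhelmeted riders
        head_injury_severity: int          # 0 = none; otherwise a graded scale
        other_injury_severity: int
        fatality: bool
        fatality_cause: Optional[str]      # head injury, chest trauma, etc.
        motor_vehicle_involved: bool       # collision with a car vs. bike-only mishap
        cyclist_lights_in_use: Optional[bool]
        cyclist_sober: Optional[bool]
        driver_sober: Optional[bool]
        assessed_by: str                   # who made the medical judgment, with what credentials

    # Only with records like these, kept for *all* incidents rather than
    # just the ones an officer judges to be a helmet "save", could anyone
    # estimate what difference helmets actually make.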

It's quite evident that OTS has an agenda, which is to promote helmet use. In pursuit of that agenda they encourage City police departments to report only data which support it. This is not science. It is propaganda. It may be done with the best of intentions -- that is, convinced that wearing a helmet is a good idea, the OTS officials eagerly seek anecdotal-quality data that might persuade the public to wear them. But it is not science, and such biased data-gathering can only undermine the legitimacy of traffic-safety statistics published by the authorities.


Bias in Data Reporting

How is risk attributed?

On January 20th 2001, a young fellow on a bicycle was struck and knocked down by the driver of a light pickup truck. According to eyewitnesses, the driver became impatient while waiting for pedestrian traffic at a crosswalk; first he shouted and honked, then he backed up his truck a couple of feet, hit the gas, and charged the crosswalk. The cyclist sustained only minor injuries (even though he was not wearing a helmet) but the bike, which was actually run over by the truck, was badly damaged.

The driver tried to leave the scene, but bystanders surrounded the truck shouting "Citizen's arrest!" and other less flattering phrases. The driver gave up his attempt to flee and exited the vehicle, waiting along with victim and witnesses for the police to arrive. Witnesses say that the incident was typical of "road rage" (current slang for aggressive and endangering driving) and that the driver was clearly angry and behaving unsafely and irrationally.

When police arrived, they declined to interview most of the 8 or 9 witnesses who were eager to testify. They accepted the driver's excuse that he "just didn't see" the cyclist. The driver was allowed to depart without a citation. In the incident report, the police attributed fault to the cyclist and exonerated the driver completely.

The cyclist, outraged, is challenging the police report. A reporter for a local paper was present and witnessed the event; she wrote a news story describing the event and criticizing the "unjust" police response. Local cyclists expressed anger and frustration at the next City Council meeting, in particular because the police had just applied for a half-million dollar grant to enforce bicycle and pedestrian safety.

This incident is by no means isolated. Cyclist and pedestrian advocacy groups all over the country report that police consistently attribute blame to the cyclist or pedestrian who is struck by a car. If the driver is drunk then it is a different story in some areas; where DUI is a major enforcement target, the intoxicated driver can be heavily penalized. But if the driver is not drunk, merely careless, angry, reckless, inattentive, or incompetent -- then the police show a strong tendency to blame the victim.

Now we must consider that it is traffic-cop reports at this local, daily level which are gathered and combined in large statistical databases such as FARS and GES. These statistics are then used to "show" that in X percent of cases, collision is the cyclist's fault. How would these statistics change if police reporting on the street were not as biased as we have reason to believe it now is?
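
We can at least sketch the arithmetic of that question. The following Python fragment uses invented numbers -- a hypothetical "true" fault split and a hypothetical misattribution rate, neither drawn from any real dataset -- but it shows how a bias applied report by report compounds into the aggregate percentages that eventually emerge from databases like FARS and GES.

    # Hypothetical: how per-report bias distorts aggregate fault statistics.
    # Suppose the true split in car-bike collisions were 70% driver fault,
    # and officers recorded a driver-fault collision as "cyclist's fault"
    # 40% of the time (and never the reverse). All numbers are illustrative.

    true_driver_fault = 0.70
    true_cyclist_fault = 1 - true_driver_fault
    misattribution_rate = 0.40

    recorded_cyclist_fault = true_cyclist_fault + true_driver_fault * misattribution_rate
    recorded_driver_fault = 1 - recorded_cyclist_fault

    print(f"true share of collisions that are the cyclist's fault: {true_cyclist_fault:.0%}")
    print(f"share the compiled statistics attribute to the cyclist: {recorded_cyclist_fault:.0%}")
    print(f"share the compiled statistics attribute to the driver:  {recorded_driver_fault:.0%}")

With these invented figures, a 70/30 split against drivers is recorded as a nearly even split -- which happens to be just the sort of "roughly equal fault" finding we will meet again below.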

At present the highway safety establishment is focussed on pedestrians and bikes as "a safety problem", and on programs like "pedestrian management" and "cyclist enforcement" -- and of course on the relentless promotion of helmets for cyclists (and scooter riders, skateboarders, roller skaters, etc.). The "problem" is perceived to be where the "fault" or "risk" is. As long as we ignore the behaviour of motorists and point the finger at victims of motor crashes, we will continue to see pedestrians and bikes as "problems" and our safety policy will continue, ineffectually, to try to modify and control the behaviour of the least dangerous users of our roads.

The police grant proposal to which I refer above was exposed to limited public review only through the activism of the local cycling community, who in turn were spurred to action by the frustrating incident on Jan 20th. Thus I had a chance to read and review the text of the proposal.

One thing which I noted in my critique was that "injury/fatality incidents" involving bikes, and those involving pedestrians, are called out in the "statement of problem" section as separate statistics. Other (mutually exclusive!) categories are "driver speeding" and "drunk driver".

This biased taxonomy of collision incidents implicitly classifies pedestrians and bikes as a causative category in collisions, on a par with "driver speeding" and "drunk driver". The implication of such mutually exclusive categories is that drunk drivers never hit cyclists or pedestrians; neither do speeding drivers. We know empirically, from experience on the street, that this is nonsense.

Drunk drivers and speeding drivers hit all kinds of things, including pedestrians, bikes, lamp posts, and each other. Yet the police report has concealed this risk factor (drunk and speeding drivers) and has constructed in its place an artificial risk category called "cycling" and another called "walking."

Further bias was revealed when police were asked why "injury" and "fatality" had been lumped together when assessing risk to pedestrians and cyclists. How can we know how "severe" our "safety problem" is, people asked, if we have no information about the actual severity of injuries? Are we talking about skinned knees, or about lifetime disability? And who died?

After some pressure, the police admitted that in the study period there had been zero fatality incidents involving cyclists or pedestrians. There had been only three traffic fatalities in three years, and all three had been motorcyclists. However, the point of the grant was to get money, and to get money the police had to "prove" that there was a "serious safety problem with cyclists and pedestrians." Thus they categorized incidents as "injury/fatality," tacitly magnifying the severity of the problem.

The language of the grant conceals the fact that motorcyclists were at more risk of fatality than either pedestrians or cyclists. Furthermore, if "motorcyclist" had been split out as a separate category, it still would not have revealed who was at fault in each of the three fatal motorcycle collisions. Were motorcyclists at risk from drunk drivers? From their own speeding? From heavy lorries? We can't tell from this report.

Police claimed, based on their three years' worth of incident reports, that fault could be assigned about equally to drivers and to pedestrians or cyclists, in all collisions involving peds and bikes. Local cycle advocates immediately referred to the obviously biased police reporting of the "road rage" incident just days earlier, and asked how these statistics could be believed. There was no substantive response.

This micro-controversy, in one mid-sized town in one State of the USA, illustrates how official bias in reporting, at the very lowest level, can contaminate statistics upstream; and also how taxonomy and methodology can place further "spin" on those same skewed statistics after collection, in official reports and papers. The village watchman not only puts down whatever he damn well pleases; he starts out with a bias against cyclists and pedestrians, and this bias infects the process of data analysis and interpretation all the way up the chain.



de@daclarke.org
De Clarke