Monday, 16 March 2009

Bicycle overtaking and rebuttals

Something I've become slightly infamous for is my 2007 work on drivers overtaking bicyclists. A couple of weeks ago I was alerted to a US website where somebody called Dan Gutierrez posted a surprisingly angry critique of my findings as well as some data from his own replication of parts of the study (the main document is here [pdf]). Dan found some results different from mine, which is great, as I've long expected there would be differences in driver behaviour between the UK and the US, particularly because of differences in road design between the two countries. However, rather than simply conclude our countries are different, Dan seems to conclude I'm either a big numpty who can't do research, or a deliberate liar. Either way: ouch, Dan.

Two weeks ago I emailed Dan to try to clear things up, but haven't had a reply, so I thought I'd reproduce my email here. Given he's been quite so stinging about my work in a public forum I feel I should have some right to reply. And, more critically, I spent ages writing this email and at least by posting it here the effort is less wasted. Again: ouch, Dan.

Dear Dan,

Hello! Someone recently pointed me to your online article discussing the findings of the bicycle overtaking study I conducted a couple of years ago. I had a look at your document and I have to say, I was slightly surprised by the general tone, and by the use of words like 'deceptive' when referring to my presentation of the findings. But hey! I've been called worse things than that. I hope you don't mind my writing to try to clear up one or two things, as having read that document I almost feel I've offended you somehow?

First, the graphs. I'm a big fan of How to Lie with Statistics too, but I really wasn't trying to hide anything with those graphs. They were intended primarily for use by my colleagues, who regularly use graphs of this sort and who would be totally familiar with the practice of truncating the y-axis. It's done simply to make the differences that exist clearer, to facilitate discussion, not to hide the overall magnitude of an effect - I'd fully expect people to look at the bottom of the axis and see it doesn't start at zero. I'm also satisfied I didn't build up any insignificant differences into significant ones by plotting the graphs this way - I can give you more details to explain this if you're interested.

Moreover, you're dead right that the overall mean passing distance is about 4 feet, but I think by focusing only on the average passing distances you overlook the really important thing, which is all the variation in the data. As you'd expect, the gaps drivers leave when passing cyclists vary a great deal. The distribution of gaps wasn't far off being a Gaussian distribution - a bell curve - which means most drivers left an amount of space somewhere near the average, a few left a massive amount of space and, critically, a few down in the left-hand tail-end of the distribution left very little space indeed (in fact, two of the drivers I encountered left less than zero space).

This last point, about the small number of drivers who leave very little space, is probably important. Every day, many cyclists are passed by motor vehicles. And we know that some of these events end with the cyclists being hit, sadly. Given there are drivers who leave very little space indeed (to the extent some leave less than none), I'd say that it probably does matter if the average gap left by drivers gets smaller. If the average gap gets smaller, this very likely means the whole distribution is shifting along to the left. It's probable - but by no means proven, certainly - that if the average gap declines by one inch, the very near misses shift by an inch too, so all the vehicles that would have just missed the bicycle by a whisker (which we know happens fairly often) instead become vehicles that just hit it by a whisker. And the bigger the shift in average passing distance, the more near-misses might become hits. No one can prove this, but given there are so many near-misses already every day, I really wouldn't want to see drivers doing anything to decrease the gaps they leave, even by only a centimetre on average. I don't know what your thoughts are on this tail-end issue?

You went on to mention that I didn't also look at riding in a position to command the lane. There is a cultural misunderstanding here! British roads are quite small compared to yours (typically an urban lane is about 2.5m wide; often half that size in the countryside). The 1.25m riding position is pretty much in the centre of the lane, so you can take those 1.25m data as being the centre-of-the-lane commanding data you were looking for. Also, you're totally correct to say that drivers did change their behaviour in response to changes in bicycle lane position - I think I said something to that effect in the paper's discussion - but the key point remains: as the bicycle moved further towards the centre of the road, the gap between it and passing vehicles tended to decline. That's why I said 'to a first approximation' vehicles don't respond to changes in the bike's position: I know they do respond, but they don't respond enough! As you say, a 1 foot move by the bike led to a 0.75 foot response in my data; the further out the bike was, the smaller the gap between it and the passing vehicles. Hence my saying 'to a first approximation': that statement was worded to convey my surprise at this finding, not to ignore it. (Incidentally, I suspect the strange disappearance of the helmet effect at the 1m riding position in my data is something to do with this position forcing motorists to approach the centreline of the road; at 1.25m they definitely have to cross it, but at 1m I suspect they had just enough space that they tried to stay entirely within the lane. Or something like that. You have to remember this was the first study looking at such things, and it wasn't clear in advance that the centreline would be an issue. Research builds over time.)

The difference between our countries' road systems, which I mentioned above, is the key to the final part of your paper. I'm honestly really impressed by the lengths you've gone to in collecting those data. I'm on record saying that I'd expect other countries (especially in North America) to see different results to those I found in the UK, and that I'd love it if people were able to test this. That was why I was quite surprised to see that when somebody finally did do this test, it was in a document that seems distinctly hostile to me! It's great that you found something different to what I found - we now have concrete data showing there's a difference between our countries. But might it not have been fairer simply to describe this as what it is - a difference between our countries - rather than suggest I don't know what I'm talking about?

Anyway, this was only going to be a short email and it's grown into a lengthy one. I write in a genuine spirit of friendship, rather than to moan, although I fear it won't come across that way. I doubt either of us likes the idea of cyclists being struck from behind by passing cars (which, in the UK's accident data at least, has a really high probability of killing the cyclist). Any information we can gather which might make this less likely is incredibly valuable. I hope in future we might work together, rather than in opposition, towards this goal.

With good wishes,

Ian

Quite reasonable, I hope you'll agree. It's a shame we cyclists can't get along more. Goodness knows, we should be united against the common enemy.
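For anyone who'd like to see the tail-end argument from the letter in numbers, here's a tiny back-of-envelope sketch in Python. The 4-foot mean comes from the discussion above, but the bell-curve shape and the 1-foot spread are purely illustrative assumptions I've plugged in for this example - they are not the study's actual figures.

```python
from statistics import NormalDist

# Illustrative assumptions only - NOT the study's measured parameters:
# passing distances roughly normal, mean 4 ft, standard deviation 1 ft.
mean_ft, sd_ft = 4.0, 1.0

before = NormalDist(mean_ft, sd_ft)
after = NormalDist(mean_ft - 1 / 12, sd_ft)  # whole curve shifted 1 inch left

# Proportion of passes leaving less than zero space (i.e. contact).
p_before = before.cdf(0.0)
p_after = after.cdf(0.0)

print(f"hit proportion before shift: {p_before:.6f}")
print(f"hit proportion after shift:  {p_after:.6f}")
print(f"relative increase: {p_after / p_before:.2f}x")
```

With these made-up numbers, a one-inch shift in the average raises the (tiny) proportion of left-tail 'hits' by roughly 40 per cent - the point being that small changes in the mean can have an outsized effect on the rare worst cases.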

Sunday, 1 March 2009

Science stories: the devil IS in the detail but you've got to look for it

One of my colleagues, Chris Ashwin, made a bit of a splash in the news this last week through his involvement in a study on the genetics of optimism and pessimism. Looking at the Guardian article, which the link takes you to, I saw something in the readers' comments which was painfully familiar:

100 people from God knows where isn't really representative of humanity. Was is a nice day? How healthy were the volunteers? How old? What were they being paid? Its all in the detail folks, or is that my cynicism gene kicking in?

Oh goodness me, but aren't you clever? I saw an awful lot of this sort of thing when my bicycle overtaking work was being reported a couple of years ago: again and again people would write comments which essentially said "This three-paragraph newspaper report I've just read doesn't mention X. I can't believe this researcher didn't consider X! The whole thing is clearly bollocks!" An entire study is dismissed because some detail isn't immediately in front of the reader's nose. You can see this behaviour whenever science is reported in the media.

Of course these details matter in scientific research, which is why researchers write up their procedures with painstaking care in journal articles so the details are there for everyone to see, examine and judge. Typically, this sort of information runs to several closely-typed pages for any given study (I've looked at Chris's paper and the procedure and results run to over 1200 words). So why on earth do people expect to find the same level of reporting in a newspaper story or blog post? Do people think scientists spend years doing research and then, faced with the challenge of reporting their findings, dash off a quick press release in five minutes before moving on to the next project? Or is it that they believe the details don't exist, and that teams of professional researchers manage routinely to overlook undergraduate-level design issues when doing their jobs?

It's great people are thinking about what they read and showing some critical thought, but can't they take the logical step and actually look for answers to their questions rather than simply assuming the answers don't exist because they're not in a newspaper report? Am I wrong to find this dismissal of people's efforts so irritating?