Among the criticisms levelled against the Mandiant report on mainland hackers was one which caught my eye. The writer (who was trying very hard to rubbish the report) complained that it “had not been peer-reviewed”. This demonstrated a serious misunderstanding which I have occasionally detected in my colleagues. Peer review is an attempt to secure a minimum of quality in academic publications. It is a characteristic of academic publications that market forces do not work there in the usual way, because the writers want to be published even if they are not paid. Also there are no readers worth speaking of.
In the real world people who write do not submit their work for peer review. They write on the understanding that they will be paid. If their work is not up to snuff they will not be asked to write again. In many ways this is a tougher regime than the one to which the academics are subjected. When I became an academic I found it rather easy, though time-consuming, to get things published in peer-reviewed journals. One does not see many academics switching to journalism, and those who do are usually historians, who are expected to be able to write properly even by their peers. As far as Mandiant is concerned this is a company which sells its research, I presume for large sums which customers would not part with if they thought they were getting garbage. The company’s peers are its competitors. Obviously it does not want to distribute its work to them free. And clients may prefer to have exclusive use of much of the output. But this company must in general be doing good work. Otherwise seriously greedy people would not be paying for it, and the company would go out of business.
Actually journalists seem to be rather impressed by peer review. Whether a finding has been published in a peer-reviewed place is often mentioned in science stories, even as the import of the science itself is mangled. Last week, for example, the Post reported a palpably peer-reviewed finding that playing violent video games was associated with real-world violent behaviour. Unfortunately the headline, and indeed the comment crop incorporated in the story, proceeded on the assumption that the research had proved that violent video games CAUSED violent behaviour. Which it had not, and which the researchers had carefully not claimed. This is why the word “associated” was used. The fact that A and B are associated is interesting. It does not prove that A causes B or that B causes A. Actually it would be surprising if people interested in violence in the real world were not interested in violent video games. What is really going on may emerge eventually, or it may not. People are still struggling with the influence of other kinds of media on behaviour.
Either way, though, peer review is not much help. The reviewer (I have done this occasionally) is not expected to repeat the research, so there is a limited range of issues he can usefully look at. Does the research answer the question which the researcher says it answers (surprisingly often a tricky point)? Has the work been reported in sufficient detail to allow interested scholars to repeat the experiment? Do the sums add up? The reviewer is not supposed to, but frequently does, defend the conventional wisdom by giving the thumbs down to anything that questions it. Great diligence is not to be expected because the peer reviewer, like the author, is not paid. And the peer reviewer does not get a by-line. This is not much of an attempt at quality control. The real quality control, in fields where such things are possible, is to repeat the experiment. As well as a long string of trail-blazing discoveries vetoed by reviewers, there is also a long string of stories about pieces which passed peer review with flying colours, only to disappoint when people tried to repeat the work in their own laboratories.
This sort of event has become surprisingly common. Pharmaceutical companies, which are constantly on the look-out for interesting new work, report that when they try to repeat work reported in a learned journal the experiment fails in between 70 and 90 per cent of cases. A careful piece of research into duff science reporting looked in detail at 12 stories reporting breakthroughs in the treatment of ADD (a fashionable disorder of children). In each case the original report was in a prestigious refereed scientific journal and was lavishly reported. In 11 of the 12 cases subsequent research disclosed that the breakthrough was not a breakthrough. These disappointments were not reported in the press. But they were not reported in the prestigious scientific journals either. Repeating somebody else’s experiment and getting a negative result is not the sort of thing they publish. The people who do it have to send their work to humbler newssheets.
If this sort of thing goes on in the hard sciences you can imagine what happens in “softer” fields like sociology. Academics are not rewarded for doing research; they are rewarded for having research published, which is not the same thing at all. Pleasing editors and peer reviewers is just as fatal to the open experimental mind as pleasing yourself. And in the statistical stuff you are allowed what is known as a 5 per cent significance level. That means you must show that the probability of your result arising entirely by chance is 5 per cent or less … or one in 20. This seems a reasonable standard if you are looking at one piece of research. If you look at the industry as a whole it is terrifying. If there were 10,000 articles published last year, that allows for 500 which were entirely spurious, mere consequences of chance or coincidence. They were probably the most newsworthy ones.
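To make the arithmetic concrete, here is a minimal sketch of my own (not anything from the studies mentioned above), assuming every one of 10,000 hypothetical studies is testing an effect that does not exist and that p-values behave as the textbooks say they should. At a 5 per cent threshold, roughly 500 of them still come up “significant” by chance alone.

    import random

    random.seed(1)

    N_STUDIES = 10_000   # a hypothetical year's worth of published studies
    ALPHA = 0.05         # the conventional 5 per cent significance threshold

    # Under a true null hypothesis a well-behaved p-value is uniformly
    # distributed between 0 and 1, so the chance of it slipping under the
    # threshold is exactly ALPHA.
    false_positives = sum(1 for _ in range(N_STUDIES) if random.random() < ALPHA)

    print(f"Spurious 'significant' results: {false_positives} of {N_STUDIES}")
    # Expected count: ALPHA * N_STUDIES = 500

The exact number wobbles from run to run; the point is the expected value, 0.05 × 10,000 = 500, which is the figure in the paragraph above.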
Another problem is the disproportionate enthusiasm and autonomy accorded in academic circles to pure theory. Plenty of people have warned about this. Karl Popper said that philosophy became barren and pointless unless it considered questions which arose outside philosophy. Clausewitz said that the green shoots of theory should not be allowed to get too far from reality, which was their proper soil. The modern view is that theory is an interesting and worthwhile thing in itself, and much more interesting and prestigious than practical matters. I fear it must also be said that a body of theory tends to be defended as a matter of interest and instinct by people who spent years mastering it in their youth and want to see potential competitors do the same. It’s like the British Admiralty’s attitude to steel warships in the 19th century – innovation should be resisted as long as possible because when it arrives everyone starts from the same position and advantages accumulated over the years become worthless. The most interesting front at the moment, at least to me, is the one where the psychologists are demonstrating that classical economics is based on a wholly erroneous view of human nature. The big battle of the future will be between the sociologists and the biologists, who are developing a whole new set of explanations for society.
Lusty blows can be expected from everyone concerned. All contributions, on both sides, will of course be peer reviewed.