One of the difficulties of "figuring out if peer review works" is being clear on what its purpose is. Engberg says that "scientists have claimed that peer review filters out lousy papers, faulty experiments, and irrelevant findings." And he rightly questions whether that is what really happens, citing the failure of four international conferences on peer review over the past decade or so to provide definitive proof.
Crawford suggests that "Peer review implies to many people a standard of quality that it doesn't and probably can't consistently deliver." Quite right.
My own experience with peer review over the past five years or so is that it is, nonetheless, an invaluable aid in the decision-making process and in improving the articles that finally appear in the Journal of the Medical Library Association.
When I took over as editor, my predecessor, Michael Homan, pointed out that peer review can operate in two very different ways -- in the big, generalist journals that can only publish a tiny fraction of what gets submitted, the goal is to winnow things out. But in the smaller, specialty journals, the goal can be to help get stuff in. I've always taken that to heart and have used the peer review process as a way to work with authors to help get a marginal article into sufficient shape that it's worth being published.
Every JMLA submission goes to three reviewers who are asked to make a judgment on whether it should be published, and how extensively it might need to be revised. They're asked to comment on the methodology, the results, the writing style, etc. Those reviews serve as the basis for my judgment about whether to accept the article, and what revisions I might ask of the author before I'm willing to take it.
I don't read the articles thoroughly before I send them out for review (I believe that part of the compact between the JMLA and potential authors is that if you take the time to send an article in, you're entitled to a fair hearing by your colleagues, so I send everything out for review). Because of that, I don't know how often the reviewers' comments lead me to a decision, to accept or reject a paper, different from the one I would have made on my own. But I suspect it is not often. Where the reviews are most helpful is in giving feedback to the authors. Different reviewers focus on different aspects, and see different things. It is not uncommon for reviewers to disagree, and then part of my job is to sort through those differences and guide the author to those revisions that I think are necessary.
On a couple of occasions I've raised with the editorial board the notion of doing open reviews, and each time the consensus is to continue with the double-blind method that we use (reviewers don't know whose work they're reviewing, and authors don't know who is reviewing them). My personal inclination would be to use a more open process -- after all, I'm the final arbiter and I know who the authors are, and they certainly know who I am. (Once you've rejected a couple of articles written by friends and by people you like and admire, you develop a suitably thick skin.) But I don't really know if it would improve the quality of the reviews -- I just prefer that there be more accountability and transparency in the process.
Our process does catch some errors -- I'm not an expert in statistics, so I need to be sure that somebody who is good at statistics reviews papers that rely heavily on them. There've been cases when one reviewer has caught a major problem with an article that has escaped the notice of the other two reviewers, or when, after repeated readings, I've identified a serious problem that escaped all of the reviewers. But, of course, I have no way of knowing what has slipped by all of us.
The peer review process for the JMLA "works" in the sense that it does what I expect it to -- helps me make better decisions and helps the authors write better papers. But does it ensure that we never publish lousy papers, faulty experiments, or irrelevant findings? Certainly not.