The latest issue of Agenda has just been published online. It includes a series of papers that investigate the misuse or abuse of modelling for policy purposes. Each of them is well worth reading (and yes, by pure coincidence, I have a paper in that collection).
If I had to pick a favourite (other than my own) it would be John Humphreys’ analysis. He estimates that rather than ‘creating’ over 200,000 jobs, the stimulus package “has resulted in the loss of over 30 000 jobs.”
Then there is Henry Ergas’ introduction to the papers:
The greater the difficulty consumers have in distinguishing the quality of modelling, the greater is the risk of ‘junk modelling’ dominating. Increasing the risk is the fact that poor-quality studies can masquerade as good-quality models by replicating their ‘look and feel’: for instance, through sheer size and complexity (as in the reports on the NBN discussed in this forum by Kevin Morgan), technical sophistication (as in the climate-change modelling discussed by Ergas and Robson), opacity and impenetrability (as in the modelling of royalties discussed by Pincus, of company tax forecasts discussed by Davidson, and of the stimulus discussed by Humphreys) or even simply by generating seemingly very large and highly newsworthy numbers (as in the modelling of the costs of congestion discussed by Harrison) — and, most often, by a combination of all of these. The fact that key interpretative economic concepts are often misunderstood and misapplied (as in the modelling of ATM regulation discussed by Green) then allows questionable results to be parlayed into effective advocacy.
Anyone interested in policy and the modelling that often underpins policy proposals should spend some time reading this issue of Agenda.