Some members of Congress are proposing a bottom-up approach to determine which programs merit cutting. The idea is to draft cost-cutting legislation based on aggregating citizens’ opinions on what should be cut and what should be kept. One of the targets of this approach is the NSF, or, more precisely, the merits of some of the research funded by the NSF. The premise is that watchdog citizens will identify research that isn’t worth funding and will bring it to the attention of the House Committee on Science and Technology. The members of the committee will then, presumably, take action to save the taxpayers’ money.
What’s wrong with this picture?
I have served on a few NSF funding advisory panels over the last decade, and can assure the reader that this is not an easy task. The goal is to read several documents, each of which can run well over 100 pages, and to assess the merits of the proposals to perform, or to continue, some avenue of research. There are many details to consider, but overall, the goal is to weigh the potential for advancing knowledge against the risk of failure. Of course research is an inherently risky proposition — “if we knew what we were doing, it wouldn’t be research” — and this has to be factored into the recommendation. Furthermore, the recommendation is made by a group of people, who have to reach consensus through detailed discussion informed by a deep knowledge of the subject.
It is hard to imagine how random people with no training in the field will be able to assess in a few minutes the merits of a proposal (based solely on its abstract) put together with considerable thought by several people with significant expertise. It’s hard to imagine that these people will uncover anything other than abstracts that sound odd or incomprehensible to the layman. The premise of the YouCut campaign is not that NSF funding should be cut, but that NSF funding decisions are inappropriate. Thus the premise is that the rabble-sourcing of the review process will produce better outcomes than those generated by experts in the field, experts who are not in a hurry to recommend any particular proposal.
The YouCut proposal further differentiates between proposals in the hard sciences and those in other areas of research (e.g., the social sciences). Rep. Adrian Smith asks people to “help us identify grants which do not support the hard sciences or which you don’t think are a good use of taxpayer dollars.” The premise there is that the hard sciences are uniformly good, and the social sciences are frivolous and somehow less scientific. Or perhaps that math is hard and the average person cannot possibly understand it, whereas research in fields that cannot be reduced to simple equations is somehow more accessible.
In fact, one can reasonably argue the opposite: the hard sciences can get adequate funding from corporations that stand to benefit from the research results, whereas it is the social sciences, the goal of which is to understand how individuals, groups, and societies function, that need public funding the most. After all, doesn’t it make sense to base policies and laws on a principled understanding of behavior, rather than on prejudices and half-truths?
That the Federal Government does not always allocate its funds efficiently is not in doubt. What I do question, however, is the merit of using the popular vote to decide issues that require a deeper understanding. The premise that the public can make meaningful judgments about research grants makes as much sense as using web polls to decide which government workers should be fired, how much Army units should spend on fuel or ammunition, or how specific court cases should be decided. There is a place in our culture for the popular vote, but judging the merits of research proposals isn’t it.