Some members of Congress are proposing a bottom-up approach to determine which programs merit cutting. The idea is to draft cost-cutting legislation based on aggregating citizens’ opinions on what should be cut and what should be kept. One of the targets of this approach is the NSF, or, more precisely, the merits of some of the research funded by the NSF. The premise is that watchdog citizens will identify research that isn’t worth funding and will bring it to the attention of the House Committee on Science and Technology. The members of the committee will then, presumably, take action to save the taxpayers’ money.

What’s wrong with this picture?

I have served on a few NSF funding advisory panels over the last decade, and can assure the reader that this is not an easy task. The job is to read several documents, each of which can run well over 100 pages, and to assess the merits of proposals to perform, or to continue, some avenue of research. There are many details to consider, but overall, the goal is to weigh the potential for advancing knowledge against the risk of failure. Of course research is an inherently risky proposition — “if we knew what we were doing, it wouldn’t be research” — and this has to be factored into the recommendation. Furthermore, the recommendation is made by a group of people, who have to achieve consensus through detailed discussion informed by a deep knowledge of the subject.

It is hard to imagine how random people with no training in the field will be able to assess in a few minutes, based solely on an abstract, the merits of a proposal put together with considerable thought by several people with significant expertise. It’s hard to imagine that these people will uncover anything other than abstracts that sound odd or incomprehensible to the layman. The premise of the YouCut campaign is not that NSF funding should be cut, but that NSF funding decisions are inappropriate. Thus the premise is that rabble-sourcing the review process will produce better outcomes than those generated by experts in the field, experts who are not in a hurry to recommend any particular proposal.

The YouCut proposal further differentiates between hard-science proposals and other research (e.g., social science) proposals. Rep. Adrian Smith asks people to “help us identify grants which do not support the hard sciences or which you don’t think are a good use of taxpayer dollars.” The premise there is that the hard sciences are uniformly good, and the social sciences are frivolous and somehow less scientific. Or perhaps that math is hard and the average person cannot possibly understand it, whereas research in fields that cannot be reduced to simple equations is somehow more accessible.

In fact, one can reasonably argue the opposite: the hard sciences can get adequate funding from corporations that stand to benefit from the research results, whereas it is the social sciences, the goal of which is to understand how individuals, groups, and societies function, that need public funding the most. After all, doesn’t it make sense to base policies and laws on a principled understanding of behavior, rather than on prejudices and half-truths?

That the Federal Government does not always allocate its funds efficiently is not in doubt. What I do question, however, is the merit of using the popular vote to decide issues that require a deeper understanding. The premise that the public can make meaningful judgments about research grants makes as much sense as using web polls to decide which government workers should be fired, how much Army units should spend on fuel or ammunition, or the outcomes of specific court cases. There is a place in our culture for the popular vote, but judging the merits of research proposals isn’t it.


  1. […] This post was mentioned on Twitter by Elizabeth Buie and Gene Golovchinsky, Jon Elsas. Jon Elsas said: RT @HCIR_GeneG: Posted "Rabble-sourcing" http://bit.ly/fYCC9H #nsf […]

  2. I expect this is just grandstanding, but perhaps one outcome will be very carefully worded proposal abstracts, since authors should now expect a hostile and unknown audience.

  3. I agree that that’s a good strategy to mitigate a bad policy. I am still bothered by the precedent of the approach, however.

  4. It sure sounds like grandstanding.

    But I have a bone to pick with the hard/soft science distinction w.r.t. companies. You say:

    “the hard sciences can get adequate funding from corporations that stand to benefit from the research results, whereas it is the social sciences, the goal of which is to understand how individuals, groups, and societies function, that need public funding the most.”

    Companies also stand to benefit from research into how individuals, groups and societies function. It’s called sales and/or marketing!

  5. Anybody know what the soccer project he mentions is? Robocup has been a big force in moving AI research forward, including a lot of AI research relevant to the military. It would be ironic if one of what Congressman Smith considers soft NSF targets turned out to be something with strong DOD support as well.

  6. Not sure about his soccer allusion.

    @Bob I agree that the benefits of social science research are broad as well, but historically sales/marketing organizations have not had a good record of funding basic research in academia, with the possible exception of business schools, which have a different funding model anyway. But the applicability of the findings to real-world societal, business, government, and military problems seems to get overlooked when funding much of social science research. And we haven’t even started talking about the humanities!

  7. Here’s a post on the particular NSF grants that were singled out for ridicule and their unsurprisingly perfectly reasonable character:


  8. And here’s one that has more detail:


    It also has the interesting observation that Congressman Smith’s district has received $5 billion in farm subsidies since 1995.
