New Things Under the Sun is a living literature review; as the state of the academic literature evolves, so do we. This post highlights some recent updates.
Risk Aversion and Budget Constraints
The post Conservatism in Science looked at some evidence on whether science is biased in favor of incremental work. One argument made in that post is that it’s easier to identify really good research proposals if they rely on a knowledge base reviewers are familiar with. If the research budget is tight enough that only really good proposals can be funded, then more unusual ideas that are harder to evaluate may not make the cut, creating a bias towards conservatism in science. A new paper provides some further evidence on this point. The updated post now includes the following paragraphs:
A 2023 working paper by Carson, Graff Zivin, and Shrader provides some further support for the notion that, when budget constraints bite, proposals with a greater degree of uncertainty are the first to be dropped. Carson and coauthors conduct a series of experiments on scientists with experience serving as NIH peer reviewers. In one experiment with 250 participants, they showed reviewers a set of ten grant proposals. The titles and abstracts of these proposals were drawn from real NIH grants, but participants were provided with a set of 30 fictional peer review scores, ranging from 1 (best) to 9 (worst). They were then asked to pick four to (hypothetically) fund.
We don’t have a measure of novelty here, but the variance of peer review scores is a potentially informative related measure, since it indicates disagreement among peer reviewers about the merits of a proposal. Carson and coauthors show that, among proposals with the same average score, participants are actually more likely to select proposals with a greater variance in their peer review scores to be funded! But in the next stage of the experiment, participants are asked to imagine their research budget has been cut, so that they must drop one of the four proposals they selected to fund. When asked to tighten their belts, which projects do reviewers choose to drop? As we might expect, they cut the ones with the lowest average scores. But above and beyond that, participants are also more likely to cut the ones with more variable scores.
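To make the contrast concrete, here is a minimal Python sketch (my own toy illustration, not the authors’ code or data) of two hypothetical proposals that share the same average score but differ in variance, the comparison at the heart of the experiment:

```python
from statistics import mean, variance

# Hypothetical sets of 30 fictional peer review scores on the NIH scale,
# where 1 is best and 9 is worst. Both proposals have the same mean (4.6),
# but reviewers disagree much more about the second one.
proposals = {
    "consensus pick": [4, 5, 5, 4, 5, 4, 5, 5, 4, 5] * 3,
    "divisive pick":  [1, 9, 2, 8, 3, 7, 1, 8, 2, 5] * 3,
}

for name, scores in proposals.items():
    print(f"{name}: mean = {mean(scores):.2f}, variance = {variance(scores):.2f}")

# The experimental pattern: with means held equal, participants initially
# favored the divisive (high-variance) proposal, but cut it first when
# their hypothetical budget shrank.
```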
Measuring the Extent of Knowledge Spillovers
A key idea in the economics of innovation is the knowledge spillover: the research work I do tends to benefit people besides myself. This dynamic is an important reason why innovation has unusual properties relative to other kinds of economic activity. The post Knowledge Spillovers Are a Big Deal looks at some papers to argue that knowledge spillovers matter in practice, as well as in theory. I’ve rearranged that post a bit to highlight two new additions.
First, a new paper by Aslan and coauthors provides descriptive data on the extent of knowledge spillovers in biomedicine. From the article update:
Aslan et al. (2023) show pretty similar results in biomedicine. Since 2008, the NIH has classified its research grants into hundreds of different research categories, such as “cerebral palsy”, “vector-borne diseases”, and “lead poisoning” (to pick three examples at random). How often do grants in one category result in research publications in other categories? Quite often, it turns out.
To see how often this kind of unexpected spillover happens, Aslan and coauthors get data on 90,000 funded NIH grants over 2008-2016, and the 1.2 million associated publications. If the NIH and journals used the same classification system, it would be a simple matter of seeing how often a grant and its publications are assigned the same categories (minimal spillovers) versus different ones (large spillovers). But there are two challenges.
First, journals unfortunately do not classify articles into categories using the same system that the NIH uses for its grants. Aslan and coauthors instead use machine learning algorithms to assign journal articles to the NIH’s categories, based on the text of their abstracts. Second, the NIH classification system can be too granular for identifying significant knowledge spillovers.
For example, there are categories for both “tobacco” and “tobacco smoke and health.” If research dollars are spent on a proposal assigned to the category “tobacco” but then generate a publication tagged as “tobacco smoke and health”, it is technically true that the grant generated knowledge applicable to a different category than expected; but the new category is so similar to the original that it doesn’t really feel like a significant knowledge spillover. To reduce this worry, Aslan and coauthors use a clustering algorithm to group together categories that are frequently assigned to the same grants. This results in 32 different clusters of NIH topics. “Tobacco” and “tobacco smoke and health” now fall under the same cluster, for example, so a grant assigned to “tobacco” that generates research assigned to “tobacco smoke and health” would no longer be classified as a knowledge spillover, since both categories belong to the same cluster.
In the end, 58% of publications are assigned at least one category falling outside the clusters assigned to their grant. In other words, more than half of the publications emerging from NIH grants are at least partially about a topic significantly different from the topics the grant was originally classified under.
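In code terms, the resulting spillover test works roughly like the following sketch (a minimal illustration with made-up category and cluster names; the actual paper assigns categories via machine learning and derives its 32 clusters from the data):

```python
# Hypothetical mapping from fine-grained NIH categories to coarser clusters.
cluster_of = {
    "tobacco": "tobacco-related research",
    "tobacco smoke and health": "tobacco-related research",
    "lead poisoning": "environmental health",
}

def is_spillover(grant_categories, publication_categories):
    """A publication counts as a spillover if at least one of its
    categories falls outside every cluster covered by the grant."""
    grant_clusters = {cluster_of[c] for c in grant_categories}
    return any(cluster_of[c] not in grant_clusters for c in publication_categories)

# Same cluster, so not a spillover:
print(is_spillover({"tobacco"}, {"tobacco smoke and health"}))  # False
# Different cluster, so a spillover:
print(is_spillover({"tobacco"}, {"lead poisoning"}))  # True
```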
The original article also included a discussion of Bloom, Schankerman, and Van Reenen (2013), which showed that private sector R&D appears to spill over to other firms working on similar technologies, leading to more patents and greater productivity for those peers. The update now (briefly) notes that this paper’s analysis was repeated on a larger dataset in 2019, finding broadly similar results to the earlier paper.
Aging Economists
Finally, the post Age and the Impact of Innovation looked at some of the literature on how research impact metrics change over a researcher’s life. The original post looked at Yu et al. (2022) and Kaltenberg, Jaffe, and Lachman (2021), which showed that the average citations received by biomedical scientific research and by patents, respectively, decline substantially as scientists and inventors age. We can now add economists to the list. A new paper by Kosnik and Hamermesh (2023) finds that as economists get older, citations to their publications in a set of top journals also decline substantially.
As discussed in the post, though, the story is more complicated than it first seems. One complicating wrinkle, discussed in the appendix to that post, is that Yu and coauthors show that life scientists who do not produce as many papers, and whose work isn’t as highly cited, drop out of research over time. That means older researchers are, on average, as productive as younger ones, but only because the set of older researchers is limited to the most productive, while the set of younger ones includes all the people who will eventually drop out. Kosnik and Hamermesh (2023) similarly show that economists are less likely to retire if they have published more often in top journals in the preceding decade.
Until Next Time
Thanks for reading! If you think the updated posts above are interesting, you might also be interested in the following related posts:
For more on conservatism and science, see Biases against risky research
For more on spillovers, see Adjacent knowledge is useful
For more on age and innovation, see Age and the nature of innovation
As always, if you want to chat about this post or innovation in general, let’s grab a virtual coffee. Send me an email at matt.clancy@openphilanthropy.org and we’ll put something on the calendar.