Micro and macro evidence on the productivity of R&D over time
Great post. But I have an objection I haven’t seen anywhere: perhaps innovation is a non-parallelisable problem. This is an obvious idea from computing: throwing more resources at a problem only solves it quicker if it can be broken up into separate sub-problems that can be solved simultaneously. But there’s no reason to believe innovation is like that. A lot of the time you can’t find the next problem until you’ve solved the first one. So one well-funded group of excellent scientists working on making smaller transistors, i.e. at Intel, is going to get you Moore’s-law-type behavior. If another company comes along and tries to beat Intel, you’ve doubled the resources on the problem, but they’re going to be doing a similar thing at a similar pace and you get the same result, maybe a tiny bit quicker if they beat Intel. Now, economically it makes sense to do that if they can take a share of the market. But it doesn’t make sense to say that the rate of innovation suddenly halved because you had twice as many scientists working on it. They were competing with each other, not working in sync together.
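The computing analogy here can be made concrete with Amdahl’s law, which caps the speedup from adding workers by the fraction of the work that is inherently serial (the 10% figure below is purely illustrative, not from the post):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's law: overall speedup when only part of the work parallelizes."""
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_workers)

# If only 10% of an innovation pipeline can be attacked in parallel,
# doubling the workforce gives almost no speedup:
print(amdahl_speedup(0.10, 2))  # ~1.05x, not 2x
```

On this view, a second Intel-sized rival mostly duplicates the serial path rather than halving the time to each next transistor generation.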
Thanks for summarizing these papers. Without having read them, I have two (very belated) comments:
First, declining research productivity is exactly what should be expected given the market forces described by Clayton Christensen's Innovator's Dilemma. Incremental innovation is typically a game of diminishing returns: as a field matures and becomes appreciated by society, the risk of any one study failing decreases, attracting more researchers while reducing the chance of discovering anything transformative. On the other hand, disruptive innovations are, by definition, not included in any particular measure of R&D; compare research into corn yields with research into golden rice. Both feed people, but their methods and communities differ greatly.
Relatedly, we should not expect the rate of innovation to exceed the rate of market evolution, and this appears to be a weakness of the studies: the R&D measures do not consider market growth. That is the key lesson of Moore's Law: though framed as an observation, it established a target that coordinated researchers and application developers, ensuring that R&D expenditures would have a market.
Thanks for this! Could you provide an example of, or link to, the reasonable-seeming assumptions under which a constant level of real R&D resources generates a constant level of innovations, and this ends up leading to constant exponential growth with constant population? (I've looked into this a bit and I'm not aware of any reasonable-seeming examples.)
Thought-provoking post! Thanks for writing.
Another possible objection: Does "declining marginal return on investment in R&D", which is what I think this post and the paper show, necessarily mean that innovation is getting harder (for reasons inherent to science)? Couldn't it also be that research effort is getting less effective over time? I could imagine several ways for that latter explanation to be true: institutional sclerosis or bottlenecks in funding agencies, increasing administrative burdens on researchers, cultural changes in science like the rise of peer review, even a decline in the ability of the marginal researcher as more people get into research.
Another objection: what if R&D worker growth is mismeasured? Due to the higher education expansion of the last 60 years, firms employ far more people with university degrees who are considered scientists but who, in reality, perform routine work that earlier would have been handed over to technicians or other skilled personnel with vocational training only.
I think I'll try to read the paper, but here are my questions in the meantime:
What's the resulting model for how these innovations get harder? Would we still have exponential growth in output (transistors per chip) for constant R&D investment?
Why would Moore's law be relatively fixed in that case? Is there an equilibrium process holding the rate of investment increase at the perfect level to counter the decreasing return?
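A toy simulation may help frame these questions. Assuming a standard semi-endogenous idea-production function of the form dA/A = α·S/A^β (the functional form and the values α = β = 1 are illustrative assumptions, not taken from the post), constant research effort S does *not* give exponential growth:

```python
def growth_rate(A, researchers, alpha=1.0, beta=1.0):
    # Idea production: dA/A = alpha * researchers / A**beta,
    # so a larger existing stock of ideas A lowers the growth each
    # researcher contributes ("ideas get harder to find").
    return alpha * researchers / A ** beta

A, rates = 1.0, []
for year in range(30):
    g = growth_rate(A, researchers=1.0)  # constant R&D input
    rates.append(g)
    A *= 1 + g

print(rates[0], rates[-1])  # growth rate decays roughly like 1/t
```

With β = 1 the idea stock A grows only linearly under constant effort, so holding exponential output growth (a Moore's-law pace) would require research effort that itself grows over time, which is one candidate answer to the equilibrium question above.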
“The *rate* of progress is what most people care about because that is what we’ve become accustomed to.”
Is “what we’re accustomed to” the right benchmark? Forgive me for tooting my own horn a bit, but don’t we need to make assumptions on preferences if we want to talk about what matters normatively for welfare? https://basilhalperin.com/essays/why-percentage-growth.html
1. If we just want to say “ideas are harder to find *in an exponential growth sense* than they used to be”, this all makes sense.
2. But as you note we could say “ideas are just as hard as they used to be *in a linear sense*”.
And then adjudicating which of these two statements *is the more useful way of speaking about the world* requires some assumptions on preferences. (And the first probably is more useful!)
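For what it's worth, the two framings can describe exactly the same data; a minimal sketch with purely illustrative numbers:

```python
# One new "idea unit" per period: the same series read two ways.
A = [1 + t for t in range(10)]
increments = [A[t + 1] - A[t] for t in range(9)]       # constant additions
growth = [(A[t + 1] - A[t]) / A[t] for t in range(9)]  # declining rate

print(increments)             # all 1: "just as hard in a linear sense"
print(growth[0], growth[-1])  # 1.0 vs ~0.11: "harder in an exponential sense"
```

Which description matters for welfare is then the preference question raised above.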