Great post. But I have an objection I haven’t seen anywhere: perhaps innovation is a non-parallelisable problem. This is an obvious idea from computing: throwing more resources at a problem only solves it quicker if it can be broken up into separate sub-problems that can be solved simultaneously. But there’s no reason to believe innovation is like that. A lot of the time you can’t find the next problem until you’ve solved the first one. So one well-funded group of excellent scientists working on making smaller transistors (say, at Intel) is going to get you Moore’s-law-type behavior. If another company comes along and tries to beat Intel, you’ve doubled your resources on the problem, but they’re going to be doing a similar thing at a similar pace and you get the same result, maybe a tiny bit quicker if they beat Intel. Now, economically it makes sense to do that if they can take a share of the market. But it doesn’t make sense to say that the rate of innovation suddenly halved because you had twice as many scientists working on it. They were competing with each other, not working in sync together.
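The computing idea being invoked here is essentially Amdahl's law: if only a fraction p of a task can run in parallel, n workers give at most a speedup of 1/((1-p) + p/n). A minimal sketch of that arithmetic (the numbers are purely illustrative, not from the post):

```python
def amdahl_speedup(p, n):
    """Maximum speedup from n parallel workers when a fraction p of
    the work can be parallelised (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# If most of the innovation pipeline is sequential (each problem only
# appears once the previous one is solved), extra headcount barely helps:
print(amdahl_speedup(0.1, 2))    # doubling workers: ~1.05x
print(amdahl_speedup(0.1, 100))  # 50x more workers: still capped near 1/(1-p) ~ 1.11x
```

The cap at 1/(1-p) is the point of the comment: past a certain scale, a second Intel-sized team mostly duplicates the serial path rather than shortening it.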

That's an interesting take on the problem, and I can see it being part of the issue. Economists have studied this notion under the name of patent races (and probably others), where two teams race to be first to find something and get the patent: individually rational but collectively wasteful. I think for this to be a big part of the explanation, you would want evidence of rising competition in any one technological domain.

Thanks for summarizing these papers. Without reading them, I have two (very belated) comments:

First, declining research productivity is exactly what should be expected given the market forces described by Clayton Christensen's Innovator's Dilemma. Incremental innovation is typically a game of diminishing returns: as a field matures and becomes appreciated by society, the risk of any one study failing decreases, attracting more researchers while reducing the chance of discovering anything transformative. On the other hand, disruptive innovations are by definition not captured by any particular measure of R&D; compare, say, research into corn yields with research into golden rice. Both feed people, but their methods and communities differ greatly.

Relatedly, we should not expect the rate of innovation to exceed the rate of market evolution, and this appears to be the weakness of these studies: the R&D measures do not account for market growth. That is the key lesson of Moore's Law: though framed as an observation, it established a target that coordinated researchers and application developers, ensuring that R&D expenditures would have a market.

Thanks for this! Could you provide an example of, or link to, the reasonable-seeming assumptions under which a constant level of real R&D resources generates a constant level of innovation, and this ends up leading to constant exponential growth with constant population? (I've looked into this a bit and I'm not aware of any reasonable-seeming examples.)

Two responses, since I'm not 100% sure which is most relevant to what you're asking:

- The assumption that constant R&D resources generate constant innovation, and that this leads to constant exponential growth, comes from equations 6-11 in the paper "Are Ideas Getting Harder to Find?" I wrote a thread trying to spell out the intuition here: https://twitter.com/mattsclancy/status/1406099824408207361?s=21

- In terms of whether it was ever a reasonable assumption that constant R&D resources generate constant innovations, one possible justification could be that we do observe cases where constant R&D leads to increasing innovation (healthcare innovation, 1975-1990), cases where constant R&D leads to constant innovation (healthcare again, some crops), and cases where constant R&D leads to decreasing innovation (everything else). So none of the assumptions are necessarily crazy - you can think of anecdotes supporting them all. And when you can think of anecdotes going in each direction, perhaps it seemed sensible simply to split the difference and assume it didn't get easier or harder.

(Note also that other economists were developing approaches assuming innovation gets easier or harder at the same time, so it's not as if these possibilities were ignored.)

Thank you for linking to that thread - very useful!

I do still think that the model isn't reasonable when you dig into it a bit, though.

One thing your model doesn't explicitly mention is that the salaries of researchers will rise after each innovation. By the time GDP/capita has doubled, each $200 will buy you half as many researcher-hours. To me there seems to be something fishy in the claim that the improved technology and capital researchers are now using exactly cancels out this effect.

Here I'm echoing the 'knife-edge' critique from Jones 1995. In terms of the model in equations 6-11 of the paper, that model only delivers balanced growth because the exponents on A and K sum to 1 exactly. (See equation 7.) This means that by the time A has doubled, the increase in K is such that real R&D resources have exactly doubled (holding L constant). This means you sustain exponential growth.

If the exponents don't quite sum to 1, the model would predict that growth either speeds up over time or slows down over time. For example, if you replace equation 7 with Y = A K^theta L^(1-theta), then by the time A has doubled, the increase in K means that Y has more than doubled (holding L constant). So real R&D resources more than double when A doubles; this ends up implying that growth should speed up over time.
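The knife-edge can be seen numerically with a toy version of this setup. The sketch below is not the paper's calibration: all parameter values are made up, and it bakes in a constant capital/output ratio, wages proportional to TFP, and R&D spending as a fixed share of output. Under those assumptions, exponents summing to 1 keep the growth rate flat, while an exponent of 1 on A makes it accelerate:

```python
# Toy model: Y = A^sigma * K^theta * L^(1-theta), with K = kappa * Y
# (constant capital/output ratio), wage proportional to A, and R&D
# spending a fixed share sr of Y. Effective researchers S = sr * Y / wage.
THETA, KAPPA, SR, ALPHA, L = 0.3, 2.0, 0.1, 0.02, 1.0  # made-up values

def simulate(sigma, periods=200):
    """Return TFP growth rates in the first and last simulated periods."""
    A = 1.0
    rates = []
    for _ in range(periods):
        # Solve Y = A^sigma * (kappa * Y)^theta * L^(1-theta) for Y:
        Y = (A**sigma * KAPPA**THETA * L**(1 - THETA)) ** (1 / (1 - THETA))
        S = SR * Y / A       # effective researchers (wage proportional to A)
        g = ALPHA * S        # constant research productivity: growth ~ S
        rates.append(g)
        A *= 1 + g
    return rates[0], rates[-1]

first, last = simulate(sigma=1 - THETA)  # exponents on A and K sum to 1
print(first, last)                       # growth rate stays flat
first, last = simulate(sigma=1.0)        # exponents sum to more than 1
print(first, last)                       # growth rate rises over time
```

With sigma = 1 - theta, Y/A is constant, so effective researchers and the growth rate never change; with sigma = 1, Y/A grows with A, research input swells, and growth compounds on itself.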

Thought-provoking post! Thanks for writing.

Another possible objection: Does "declining marginal return on investment in R&D", which is what I think this post and the paper show, necessarily mean that innovation is getting harder (for reasons inherent to science)? Couldn't it also be that research effort is getting less effective over time? I could imagine several ways for that latter explanation to be true: institutional sclerosis or bottlenecks in funding agencies, increasing administrative burdens on researchers, cultural changes in science like the rise of peer review, even a decline in the ability of the marginal researcher as more people get into research.

Yes, that's certainly possible, and an explanation that appeals to a lot of people. I wouldn't be surprised if that's part of it, but the phenomenon is so widespread across industries and countries, and goes back as far as we have data, that I tend to prefer deeper explanations. For example, this post looked at some evidence that academic incentives have some bad impacts on the quality of science - but the effects aren't THAT big, at least in my view. https://mattsclancy.substack.com/p/how-bad-is-publish-or-perish-for

Another objection: what if R&D worker growth is mismeasured? Due to the expansion of higher education over the last 60 years, firms employ far more people with university degrees who are counted as scientists but who, in reality, perform routine work that would earlier have been handed to technicians or other skilled personnel with only vocational training.

Well, be careful here: what they actually measure is R&D spending, which they convert into an "effective number of researchers" by dividing by the wage of a typical scientist. It's a crude way of controlling for inflation in the conduct of research, but it's also technically the right measure for evaluating the performance of alternative economic models of technological change. They do, however, look at the headcount of people classified as scientists and researchers in their section on firms, though I didn't cover it here. Anyway, the main point is that if mismeasurement is a big deal here, it'll have to be mismeasurement in the R&D dollars spent.
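The deflation described above fits in one line: divide nominal R&D spending by the nominal wage of a typical scientist, so the unit becomes "how many scientists could this budget hire". A sketch with purely illustrative numbers:

```python
def effective_researchers(rd_spending, scientist_wage):
    """Deflate nominal R&D spending by the typical scientist's wage,
    yielding 'effective researchers': scientists the budget could hire."""
    return rd_spending / scientist_wage

# Nominal spending triples, but so do wages: real research input is flat,
# which is why degree-inflation in headcounts doesn't affect this measure.
print(effective_researchers(1_000_000, 50_000))   # 20.0
print(effective_researchers(3_000_000, 150_000))  # 20.0
```

This is why the mismeasurement worry has to run through the dollar figures (or the wage deflator) rather than through who gets labeled a "scientist".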

I think I'll try to read the paper, but here are my questions in the meantime:

What's the resulting model for how these innovations get harder? Would we still have exponential growth in output (transistors per chip) for constant R&D investment?

Why would Moore's law be relatively fixed in that case? Is there an equilibrium process holding the rate of investment growth at exactly the level needed to counter the decreasing returns?

I wonder if this thread would be helpful or not? https://twitter.com/mattsclancy/status/1405991290341371914?s=20

Thanks, Matt! No, that makes it worse =[. I'm just not familiar enough with these models to track the multiple quantities (output, R&D resources, R&D effort) being exponential or not across time, or in their derivative, or in their second derivative. At least, not without working carefully through the paper.

Haha, take 2 here. But check out the paper. https://twitter.com/mattsclancy/status/1406099824408207361?s=21

Sure enough, I needed to spend a couple of hours going through equations 1-14, tracking the units, before it all clicked together.

I'm not surprised the constant productivity model gave the wrong prediction! I tracked the following assumptions, and I'm curious which are true. (Are they all even independently measurable?)

Capital / output, Research / output, and Consumption / output are constant.

Elasticities of capital and labor inputs sum to 1 (constant returns to scale?) and are constant.

Lambda (scale correction for idea function), if present, is constant and close to 1.

Products are independent (the paper mentions that a changing spillover rate could explain the results).

Wage / TFP is constant. This seems like a huge mistake to me!

Ideas generated are linear in inputs. For instance, there's no difference between employing 1 researcher for 10 years and 10 researchers for 1 year.

All TFP growth is the result of R&D spending (as opposed to, say, a change in regulation).
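The lambda and linearity assumptions in this list can be made concrete with a toy idea production function (this is a sketch, not the paper's exact specification; alpha is a made-up constant): each period, effort produces alpha * researchers^lambda ideas, and linearity is the lambda = 1 case where researcher-years are fully interchangeable.

```python
def ideas(researchers, years, alpha=0.5, lam=1.0):
    """Toy idea production summed over periods: each year yields
    alpha * researchers**lam ideas. lam = 1 makes inputs linear."""
    return sum(alpha * researchers**lam for _ in range(years))

print(ideas(1, 10))            # 1 researcher for 10 years
print(ideas(10, 1))            # 10 researchers for 1 year: same total when lam = 1
print(ideas(10, 1, lam=0.8))   # lam < 1: the crowded year produces fewer ideas
```

With lam below 1, piling researchers into the same period yields less than spreading them over time, which is one way the "non-parallelisable" objection earlier in the thread could show up formally.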

If I understand correctly, the authors decided that since the data for declining productivity is so strong, the null hypothesis of the constant-growth model can be rejected for all reasonable values of the proposed corrections (like lambda) that could save it. And if we're rejecting the model, then we don't need to check its assumptions (like constant theta). I guess I agree with them on that point.

I'm curious to see how any model could possibly capture all this complexity!

“The *rate* of progress is what most people care about because that is what we’ve become accustomed to.”

Is “what we’re accustomed to” the right benchmark? Forgive me for tooting my own horn a bit, but don’t we need to make assumptions on preferences if we want to talk about what matters normatively for welfare? https://basilhalperin.com/essays/why-percentage-growth.html

1. If we just want to say “ideas are harder to find *in an exponential growth sense* than they used to be”, this all makes sense.

2. But as you note we could say “ideas are just as hard as they used to be *in a linear sense*”.

And then adjudicating over which of these two statements *is the more useful way of speaking about the world* requires some assumptions on preferences. (And the first probably is more useful!)

Good essay! I agree that what really matters is how people perceive the changes in their living standards. My own hunch tends towards the more pessimistic: that people habituate not only to their standard of living, but also to the change in their standard of living. Witness all the complaints that we're stagnating and the rate of progress isn't as fast as it used to be. If that's true, it would require ever-accelerating growth to attain constant increases in utility (and I wonder if people would, in turn, habituate to that!).

"Ultimately it's an empirical question", as the cliche goes -- we just have to measure preferences better. I will say: I think people have not thought through sufficiently how crazy the implications are of the view that 'relative consumption is all that matters'. The future seems pretty bleak if we're stuck only playing relative status games until extinction! An example implication, again only one of many: https://marginalrevolution.com/marginalrevolution/2012/07/growth-mobility-and-utility.html.