3 Comments

I don't quite see what's wrong with the opinion that all of social science research doesn't amount to much of anything. As this shows, for most studies, whether they find an effect or not depends more on the researcher than on the actual data.

You say, "Resist Science Nihilism!" But why? Because research ineffectiveness isn't necessarily caused by scientists favoring their own preconceived notions? That doesn't really line up at all, now does it? Scientists don't have to be biased in favor of their preconceived notions for their research to be ineffective. That's just one possible cause.


Very interesting, Matt. I feel like the kind of methodological scrambling that's increasingly required only shows up when progress isn't going very well. When science is going well, it seems quite easy. But when it's not going well, everyone becomes a "methodologist." If effects are this hard to find, are they even effects?

author

For sure, if an effect is really strong, it will usually be detectable even with bad methods. Ideally, you should be able to just plot the data and eyeball it! But those kinds of effects now seem to be rare and hard to come by, perhaps because anything that easy to notice has already been discovered, at least in a lot of the social sciences.
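A minimal simulation sketch of that point (my own illustration, not from the post; the effect sizes, sample sizes, and use of a plain t-test are arbitrary choices): a strong effect is picked up most of the time even with small samples and a crude analysis, while a micro effect almost never is, even with far more data.

```python
# Illustrative sketch: effect sizes and sample sizes are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def detection_rate(effect_size, n, reps=2000, alpha=0.05):
    """Fraction of simulated two-group studies (n per group) in which a
    plain t-test detects the effect at the given alpha level."""
    hits = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n)
        treatment = rng.normal(effect_size, 1.0, n)
        if stats.ttest_ind(treatment, control).pvalue < alpha:
            hits += 1
    return hits / reps

# A strong effect (Cohen's d = 1.0) is detected most of the time,
# even with only 20 subjects per group.
print("strong effect, n=20 per group :", detection_rate(1.0, 20))

# A micro effect (d = 0.05) is almost never detected,
# even with 200 subjects per group.
print("micro effect,  n=200 per group:", detection_rate(0.05, 200))
```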

These microeffects are a lot harder to pin down, and for sure they get exaggerated by publication bias. But I think they still matter. Technological progress is often about squeezing out a percent or two of improvement per year. That might be hard to detect in any given year, but it adds up!
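Two quick back-of-the-envelope sketches of those claims (again my own illustration, with arbitrary numbers): first, if only "significant" results get published, the average published estimate of a small true effect comes out well above the truth, which is the exaggeration from publication bias; second, a 1-2% annual improvement compounds into a sizeable multiplier over a few decades.

```python
# Illustrative sketch: all effect sizes, sample sizes, rates, and horizons
# here are arbitrary choices, not numbers from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# (1) Publication bias / winner's curse: with a small true effect,
# the studies that happen to reach significance overstate it.
true_d, n, reps = 0.1, 100, 5000
published = []
for _ in range(reps):
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(true_d, 1.0, n)
    estimate = treatment.mean() - control.mean()
    if stats.ttest_ind(treatment, control).pvalue < 0.05:  # only "significant" results get published
        published.append(estimate)

print(f"true effect: {true_d}")
print(f"mean effect among 'published' studies: {np.mean(published):.3f}")

# (2) Compounding: a percent or two per year is hard to see, but it adds up.
for rate in (0.01, 0.02):
    print(f"{rate:.0%}/year for 30 years -> {(1 + rate) ** 30:.2f}x overall")
```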
