Monday, June 30, 2025

Science slowdown - not a simple question

I participated in a program about 15 years ago that looked at science and technology challenges faced by a subset of the US government. I came away thinking that such problems fall into three broad categories.

  1. Actual science and engineering challenges, which require foundational research and creativity to solve.
  2. Technology that may be fervently desired but is incompatible with the laws of nature, economic reality, or both. 
  3. Alleged science and engineering problems that are really human/sociology issues.

Part of science and engineering education and training is giving people the skills to recognize which problems belong to which categories.  Confusing these can strongly shape the perception of whether science and engineering research is making progress. 

There has been a lot of discussion in the last few years about whether scientific progress (however that is measured) has slowed down or stagnated.  For example, see here:

https://www.theatlantic.com/science/archive/2018/11/diminishing-returns-science/575665/ 

https://news.uchicago.edu/scientific-progress-slowing-james-evans

https://www.forbes.com/sites/roberthart/2023/01/04/where-are-all-the-scientific-breakthroughs-forget-ai-nuclear-fusion-and-mrna-vaccines-advances-in-science-and-tech-have-slowed-major-study-says/

https://theweek.com/science/world-losing-scientific-innovation-research

A lot of the recent talk is prompted by this 2023 study, which argues that despite the world having many more researchers than ever before (behold population growth) and more global investment in research, "disruptive" innovations are somehow fewer and farther between these days.  (Whether this is an accurate assessment is not a simple matter to resolve; more on this below.)
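
As I understand it, the study quantifies "disruption" with a citation-based measure, the CD index (due to Funk and Owen-Smith): a later paper that cites a focal work but not that work's own references counts toward disruption, while one that cites both counts toward consolidation.  Here is a minimal sketch of that kind of metric, with hypothetical inputs; this is my illustration, not code from the study, but it shows how much the headline conclusion hinges on one particular definition of "disruptive":

    # Sketch of a CD-style "disruption" index (my illustration; hypothetical inputs).
    def cd_index(citers_of_focal, citers_of_predecessors):
        # citers_of_focal: papers that cite the focal paper.
        # citers_of_predecessors: papers that cite the focal paper's references.
        all_citers = citers_of_focal | citers_of_predecessors
        if not all_citers:
            return 0.0
        score = 0
        for paper in all_citers:
            cites_focal = paper in citers_of_focal
            cites_roots = paper in citers_of_predecessors
            if cites_focal and not cites_roots:
                score += 1   # builds on the focal work alone: "disruptive"
            elif cites_focal and cites_roots:
                score -= 1   # builds on the focal work and its roots: "consolidating"
            # papers citing only the predecessors contribute 0 to the numerator
        return score / len(all_citers)  # ranges from -1 to +1

    # Hypothetical example: six later papers cite the focal paper; four of
    # them also cite its references, and two more cite only the references.
    focal = {"p1", "p2", "p3", "p4", "p5", "p6"}
    roots = {"p3", "p4", "p5", "p6", "p7", "p8"}
    print(cd_index(focal, roots))  # -0.25, i.e. mildly consolidating

Reasonable people can argue about whether a single number like this captures anything close to what we mean by a breakthrough.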

There is a whole tech bro culture that buys into this, however.  For example, see this interview from last week in the New York Times with Peter Thiel, which points out that Thiel has been complaining about this for a decade and a half.  

On some level, I get it emotionally.  The unbounded future spun in a lot of science fiction seems very far away.  Where is my flying car?  Where is my jet pack?  Where is my moon base?  Where are my fusion power plants, my antigravity machine, my tractor beams, my faster-than-light drive?  Why does the world today somehow not seem that different than the world of 1985, while the world of 1985 seems very different than that of 1945?

Some of the folks who buy into this think that science is somehow deeply broken - that we've screwed something up, because we are not getting the future they think we were "promised".  Some of these people use this as an internal justification for dismantling the NSF, the NIH, and basically a huge swath of the US research ecosystem.  These same people would likely say that I am part of the problem, and that I can't be objective about this because the whole research ecosystem as it currently exists is a groupthink-driven, self-reinforcing spiral of mediocrity.

Science and engineering are inherently human ventures, and I think a lot of these concerns have an emotional component.  My take at the moment is this:

  1. Genuinely transformational breakthroughs are rare.  They often require a combination of novel insights, previously unavailable technological capabilities, and luck.  They don't come on a schedule.  
  2. There is no hard and fast rule that guarantees continuous exponential technological progress.  Indeed, in real life, exponential growth regimes never last.  The 19th and 20th centuries were special.  If we think of research as a quest for understanding, it's inherently hierarchical.  Civilizational collapses aside, you can only discover how electricity works once.  You can only discover the germ theory of disease, the nature of the immune system, and vaccination once (though in the US we appear to be trying really hard to test that by forgetting everything).  You can only discover quantum mechanics once, and doing so doesn't imply that there will be an ongoing (infinite?) chain of discoveries of similar magnitude.
  3. People are bad at accurately perceiving rare events and their consequences, just like people have a serious problem evaluating risk or telling the difference between correlation and causation.  We can't always recognize breakthroughs when they happen.  Sure, I don't have a flying car.  I do have a device in my pocket that weighs only a few ounces, gives me near-instantaneous access to the sum total of human knowledge, lets me video call people around the world, can monitor aspects of my fitness, and makes it possible for me to watch sweet videos about dogs.  The argument that we don't have transformative, enormously disruptive breakthroughs as often as we used to or as often as we "should" is in my view based quite a bit on perception.
  4. Personally, I think we still have a lot more to learn about the natural world.  AI tools will undoubtedly be helpful in making progress in many areas, but I think it is definitely premature to argue that the vast majority of future advances will come from artificial superintelligences and thus we can go ahead and abandon the strategies that got us the remarkable achievements of the last few decades.
  5. I think some of the loudest complainers (Thiel, for example) about perceived slowing advancement are software people.  People who come from the software development world don't always appreciate that physical infrastructure and understanding are hard, and that there are not always clever or even brute-force ways to get to an end goal.  Solving foundational problems in molecular biology or quantum information hardware or photonics or materials is not the same as software development.  (The tech folks generally know this on an intellectual level, but I don't think all of them really understand it in their guts.  That's why so many of them seem to ignore real-world physical constraints when talking about AI.)  Trying to apply software-development-inspired approaches to science and engineering research isn't bad as a component of a many-pronged strategy, but alone it may not give the desired results - as warned in part by this piece in Science this week.

More frequent breakthroughs in our understanding and capabilities would be wonderful.  I don't think dynamiting the US research ecosystem is the way to get us there, and hoping that we can dismantle everything because AI will somehow herald a new golden age seems premature at best.


7 comments:

Pizza Perusing Physicist said...

Personally, I don't think there is any doubt that the education and research ecosystem needs to be reformed. However we define scientific progress, I think most people both within and outside the system have the perception that things have gotten less 'efficient', with lower return on investment (time, money, people) as the decades have passed. Everyone wants to make things smoother, and I think most of us within the system would agree that burning everything down and starting from scratch is not the right way to do that.

Nevertheless, it's still important for us to realize that our current system was designed for a very different time, when there were far fewer people doing science and far fewer complex scientific questions to pursue. My sense is that everyone has been saying this for many years now, but beyond lip service, there has been little actual change. I think such inaction on our part is certainly in part responsible for the increased inefficiency and stagnating progress. I have a feeling that this is at least somewhat to blame for the rise in 'burn-it-down' sentiment that we have come to see.

Anonymous said...

The absence of complexity limits in discussions of AI "superintelligence" in the press continues to confound me. NP-hard problems don't become quick and easy just because you train an LLM to pontificate about them.
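
To make that concrete, here's a toy sketch (mine; the numbers are made up): brute-force subset-sum, a classic NP-hard problem, examines up to 2**n subsets, so the runtime roughly doubles with every added element. No amount of eloquence changes that scaling.

    # Toy illustration of exponential blowup in exhaustive search.
    import time
    from itertools import combinations

    def subset_sum_bruteforce(nums, target):
        # Check every subset of nums for one summing to target: O(2**n) subsets.
        for r in range(len(nums) + 1):
            for combo in combinations(nums, r):
                if sum(combo) == target:
                    return True
        return False

    for n in (10, 14, 18, 22):
        nums = list(range(1, n + 1))
        t0 = time.perf_counter()
        subset_sum_bruteforce(nums, -1)  # impossible target forces a full search
        print(f"n={n}: {time.perf_counter() - t0:.3f} s")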

gh said...

To the people who complain about scientific progress being too slow or useless, I often refer them to Abraham Flexner's essay "The Usefulness of Useless Knowledge" (Flexner was the founding director of the Institute for Advanced Study in Princeton). The outcome from science is non-linear, and it can sometimes take decades before "useful" applications are derived from incremental research.

Anonymous said...

Peter Thiel is indeed extrapolating from only the last 200 years of technological progress, and failing to consider the rate of progress over, say, the last 1000 or 2000 years. This is the "recency illusion".

In the NY Times interview, the revealing moment came when he was asked, in the context of AI, whether the "human race should endure", and after much hesitation he reluctantly replied, "Yes, ... but I would also like us to radically solve these problems [dementia, mortality, going to Mars, etc.]" In other words: humanity may have reached a plateau of inventiveness, and the only way to break through our "intellectual eggshell" is to create a superintelligence that will jump-start progress and take over scientific research from our bumbling selves.

This is perhaps what we should be talking about: rather than arguing about the slowdown of scientific breakthroughs, what is the place of humans in the universe once we realize we are not the final rung on the evolutionary ladder?

Anonymous said...

The more urgent issue I worry about is not whether AI will be more intelligent than us (in terms of memory, processing, calculation, etc…), but whether it will be wiser. If the AI has the sense to control climate change, minimize war and corruption, promote cooperation and peace, etc…, then frankly I think we humans might do well to stand aside - we may have bigger brains than other species, but we don’t exactly have a credible track record indicating that we can be trusted with using them responsibly. If AI is just more cognitively advanced than humans, but shares their lack of foresight and judgment, then we will all be screwed.

Anonymous said...

If disruptive ideas do not get enough impact or funding, it's because the funding itself dictates the expectations; add this to the well-established individuals in academia promoting their own agendas, and the result is a neoliberal science modeled as a product of consumption. How can one, as a grad student, do interesting science if one must follow the already-written rules and expectations? In order to achieve further and significant progress, we should consider science by its intrinsic value of discovery rather than measuring it by metrics. As a grad student, I feel completely alienated from the scientific process: in the words of my supervisor, I am replaceable. I don't take it personally, as I acknowledge it has nothing to do with my skills, but rather with a system that relies on an infinite supply of grad students to exploit for five years and then replace. That said, it is easier to imagine the end of the world than the end of capitalism, and if neoliberal science continues, it's just going to accelerate to the point that it will crash and self-destruct. It is not a good system, and it's not the best that we could have.

Anonymous said...

We have substituted the symbolic value of science for science itself. What is the preferred way to estimate the value of a scientist? The h-index. This small number cannot encapsulate or measure the quality of research. It only measures how well one resonates with the current echo chamber.
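
To see just how reductive that single number is, here is a toy h-index computation (my sketch; the citation counts are hypothetical):

    # Toy h-index computation (illustrative; the citation counts are made up).
    def h_index(citations):
        # Largest h such that at least h papers have >= h citations each.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
            else:
                break
        return h

    steady = [6, 6, 6, 6, 6, 6]           # six solidly cited papers
    skewed = [1000, 900, 6, 6, 6, 6, 0]   # two blockbusters plus the rest
    print(h_index(steady), h_index(skewed))  # both print 6

Two very different research records collapse to the same integer; whatever the echo chamber rewards, the number can't tell you.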