Friday, June 14, 2024

Artificial intelligence, extrapolation, and physical constraints

Disclaimer and disclosure:  The "arrogant physicist declaims about some topic far outside their domain expertise (like climate change or epidemiology or economics or geopolitics or....) like everyone actually in the field is clueless" trope is very overplayed at this point, and I've generally tried to avoid doing this.  Still, I read something related to AI earlier this week, and I wanted to write about it.  So, fair warning: I am not an expert about AI, machine learning, or computer science, but I wanted to pass this along and share some thoughts.  Feel even more free than usual to skip this and/or dismiss my views.

The series of essays in question is Leopold Aschenbrenner's Situational Awareness, which is also available as a single pdf file.  The author formerly worked at OpenAI.  I learned about this from a post on Scott Aaronson's blog, which is always informative.

In a nutshell, the author says that he is one of a quite small group of people who really know the status of AI development; that we are within a couple of years of the development of artificial general intelligence; that this will lead essentially to an AI singularity as AGI writes ever-smarter versions of AGI; that the world at large is sleepwalking toward this and its inherent risks; and that it is essential that Western democracies have the lead here, because it would be an unmitigated disaster if authoritarians in general and the Chinese government in particular were to take the lead - if one believes in extrapolating exponential progressions, then losing the initiative rapidly translates into being hopelessly behind forever.

I am deeply skeptical of many aspects of this (in part because of the dangers of extrapolating exponentials), but it is certainly thought-provoking.

I doubt that we are two years away from AGI.  Indeed, I wonder if our current approaches are somewhat analogous to Ptolemaic epicycles.  It is possible in principle to construct extraordinarily complex epicyclic systems that can reproduce predictions of the motions of the planets to high precision, but actual Newtonian orbital mechanics is radically more compact, efficient, and conceptually unified.  Current implementations of AI systems use enormous numbers of circuit elements that consume tens to hundreds of MW of electricity.  In contrast, your brain hosts a human-level intelligence, consumes about 20 W, and masses about 1.4 kg.  I just wonder whether our current architectural approach is really the optimal route toward AGI.  (Of course, a lot of people are researching neuromorphic computing, so maybe that resolves itself.)
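
For a rough sense of the scale of that gap (a back-of-the-envelope estimate, taking 100 MW as a representative figure for a large AI training cluster rather than for any particular system): 100 MW / 20 W = 5,000,000, so the brain comes in somewhere around six orders of magnitude more power-efficient at hosting a human-level intelligence.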

The author also seems to assume that whatever physical resources are needed for rapid exponential progress in AI will become available: huge numbers of GPUs will be made, and electrical generating capacity and all associated resources will be there.  That's not obvious to me at all.  You can't just declare that vastly more generating capacity will be available in three years - siting and constructing GW-scale power plants alone takes years.  TSMC is about as highly motivated as possible to build its new facilities in Arizona, and the first one has taken three years so far, with the second one likely delayed until 2028.  Actual construction and manufacturing at scale cannot be trivially waved away.

I do think that AI research has the potential to be enormously disruptive.  It also seems that if a big corporation or nation-state thought it could gain a commanding advantage by deploying something, even if it's half-baked and the long-term consequences are unknown, it would 100% do it.  I'd be shocked if the large financial companies aren't already doing this in some form.  I also agree that, broadly speaking, as a species we are unprepared for the consequences of this research, good and bad.  Hopefully we will stumble forward in a way where we don't do insanely stupid things (like putting the WOPR in charge of the missiles without humans in the loop).

Ok, enough of my uninformed digression.  Back to physics soon.

Update:  this is a fun, contrasting view by someone who definitely disagrees with Aschenbrenner about the imminence of AGI.

6 comments:

  1. Anonymous, 10:39 PM

    I mean, epicycles are really just a Fourier expansion.

    That doesn't impact your point one way or the other, but it sounds witty.

  2. Anonymous, 12:59 AM

    Even if the current AI systems are far from efficient, the central question (which the author seems inclined to answer affirmatively) is merely whether they are sufficient for a dangerous sort of AGI.

  3. I agree it is really destructive for the world to allow the war-machine countries to take the lead.

  4. How do we know it’s really Doug Natelson writing this post, and not an AGI attempting to lull us into a false sense of security before the impending takeover?

  5. Anonymous, 12:17 AM

    To be fair to you, Doug, the author of that essay is also a 22-year-old, and it shows in their bizarrely simplistic and ahistorical take on US-China geopolitics. The kernel of truth in there is the possibility of AGI, which certainly could arrive soon-ish, and we are unprepared. Trusting the current American government with AGI is only a good idea if your learning of American history comes from the movie Black Hawk Down.

    Unfortunately, the cynical take here is to read the paper as a signal of which stocks the author believes one should invest in.

  6. Anon@12:17, yeah, I was avoiding the strong temptation to mention age and naïveté, because that can be an intellectually lazy way to dismiss ideas. I think the author views himself as Szilard, and I really doubt that’s an apt analogy. (Szilard was also frozen out of the whole Manhattan Project because he was viewed as a huge security pain in the ass.)

    I do think it’s a very interesting question whether there is something special about the neural network architecture in terms of being able to support what we consider consciousness. As discussed here: https://www.vox.com/future-perfect/351893/consciousness-ai-machines-neuroscience-mind
