AI Editor at Fortune Magazine. Author of the forthcoming book Mastering AI: A Survival Guide to Our Superpowered Future (Simon & Schuster, July 2024; Bedford Square, August 2024).
Last month, Leopold Aschenbrenner, a former researcher on OpenAI's now-disbanded Superalignment team, published a 164-page report titled "Situational Awareness" in which he argued that artificial general intelligence (AGI) will likely be achieved within the decade and that superintelligence would likely follow shortly after. The assessment—which was based on the idea of continuing to scale up LLMs—got a lot of attention, including from those who think his analysis is faulty. Then, this past week, Bill Gates said he thinks we will get "two more turns of the crank" on LLMs but that this won't lead to AGI. In this week's Fortune Eye on AI newsletter, I look at the debate over AI progress and why, to most businesses, the pursuit of AGI may be a distraction.
That Aschenbrenner paper was not fantastic when subjected to deep scrutiny. This may or may not be related to the fact that Aschenbrenner would get really rich if enough people believed what was posited in his paper. Sabine Hossenfelder posted a pretty good, short, easy-to-understand critique of it.
Why do mainstream publications think these are great topics? Really curious what the rationale is for this kind of pop writing. Is it public relations, or is it something else?
The quote that comes to mind is "If you want to know the truth about the emperor’s clothes, don’t ask the tailors." Leopold has his own clothing shop...
The LLM research tree doesn't lead to AGI, and it'll take at least that long for the players in that market to realize it, so, no, it's faulty analysis.
Jeremy Kahn #girlai Bill Gates is wrong in this case ... AGI will require a redefinition of what it means to be human.
This is cool. Have you written, or thought about writing, about what's next beyond LLMs?
Gretel | Synthetic Data | Sustainable AI
You yada yada'd over a critical point: what are the two turns of the crank? Here's the exact quote from Bill Gates on Rufus Griscom's podcast:

Bill: Well, the big frontier is not so much scaling. We have probably two more turns of the crank on scaling, whereby accessing video data and getting very good at synthetic data that we can scale up probably two more times. That’s not the most interesting dimension. The most interesting dimension is what I call metacognition, where understanding how to think about a problem in a broad sense and step back and say, “Okay, how important is this answer? How could I check my answer? What external tools would help me with this?” The overall cognitive strategy is so trivial today that it’s just generating through constant computation each token in sequence, and it’s mind-blowing that that works at all. It does not step back like a human and think, “Okay, I’m going to write this paper and here’s what I want to cover. I’ll put some facts in. Here’s what I want to do for the summary.”

Sure, it's not the steps Bill thinks will lead to AGI, but there's a lot of scale to gain in those two turns. https://nextbigideaclub.com/magazine/bill-gates-says-superhuman-ai-may-closer-think-podcast/50267/