Aussie AI
AI Hitting the Wall?
-
Last Updated 12 December, 2024
-
by David Spuler, Ph.D.
Recently, there have been two main indications that AI progress is "plateauing" or "hitting a wall":
- Inference-based reasoning ("test time compute")
- Underwhelming progress in new models
The GPT "o1" model released in September 2024 wasn't a bigger, more heavily-trained model with trillions more weights. Instead, it's a model that improves intelligence by doing multiple steps of inference, rather than one smarter step in an uber-trained model. This algorithm for "multi-step reasoning" is known as "chain-of-thought" and uses repeated calls for process queries, before merging them together into the one final response.
Why does this change to multi-step inference for reasoning support the "wall" theory? Inference is slow at runtime, and running many inference steps makes "o1" noticeably slow for users. The line of logic goes that OpenAI wouldn't tolerate such a slow method if they could get the same result from one request to a bigger model. It almost seems like a kind of workaround.
Hence, wall.
Secondly, there are rumors that the big players are having difficulty training much better next-generation models. In particular, there are signs that GPT-5 is struggling to gain capabilities beyond GPT-4. Instead of launching GPT-5 soon, we got "o1" with its multiple inference steps.
Obviously, training trillion-parameter models is a specialist field, and it's evolving fast, with literally billions of dollars in funding being applied there. But open-source models seem to be keeping up with the leading commercial vendors (albeit after a lag), which tends to indicate that there's only incremental progress in reasoning capabilities, and that the commercial vendors don't have a huge "secret sauce" algorithmic advantage in training. Some of the constraints include:
- Shortage of new high-quality training data (text).
- Complexity of software algorithms to train ever-bigger LLMs.
- Sheer volume of training data needed for multimodal LLMs (audio, images, and video).
- Capital cost of GPUs to crunch all that.
- Apparent lack of a new algorithmic advance in one-shot reasoning.
- Fundamental limitations of the way that LLMs and Transformers work.
On the other hand, there's a lot of research happening in training and in making LLMs better at reasoning in general. Some of the newer areas include:
- Newer GPU hardware for training (e.g., Blackwell).
- Faster software training algorithms (optimizing both computations and inter-GPU network traffic).
- Resiliency improvements to training (both software and hardware).
- Synthetic training data and derivative data.
- Multi-step reasoning algorithms.
- Long context processing (which now seems to be largely a solved problem).
- Inference optimization research (makes each step of multi-step reasoning faster).
- Next-gen architectures beyond LLMs (e.g., SSMs, Mamba, Hyena, and hybrid versions).
Is there a wall? OpenAI CEO Sam Altman posted on X that "there is no wall." There are certainly signs that many of the bigger players are still gearing up to use NVIDIA Blackwell GPUs for even bigger training runs, and there have been two multi-billion-dollar funding rounds in just the last month. So, the plateau may be only temporary.
Research on the AI Progress Wall
Articles and papers on recent AI progress:
- Deirdre Bosa, Jasmine Wu, Dec 11, 2024, The limits of intelligence — Why AI advancement could be slowing down, https://www.cnbc.com/2024/12/11/why-ai-advancement-could-be-slowing-down.html
- The Information, Nov 2024, OpenAI Shifts Strategy as Rate of GPT AI Improvement Slows, https://www.theinformation.com/articles/openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows
- Bloomberg, Nov 2024, OpenAI, Google and Anthropic are Struggling to Build More Advanced AI, https://www.bloomberg.com/news/articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai
- Gary Marcus, Nov 25, 2024, A new AI scaling law shell game? Scaling laws ain’t what they used to be, https://garymarcus.substack.com/p/a-new-ai-scaling-law-shell-game
- Kyle Orland, 13 Nov 2024, What if AI doesn’t just keep getting better forever? New reports highlight fears of diminishing returns for traditional LLM training. https://arstechnica.com/ai/2024/11/what-if-ai-doesnt-just-keep-getting-better-forever/
- Will Lockett, Nov 2024, Apple Calls BS On The AI Revolution, They aren't late to the AI game; they are just the only sceptical big tech company. https://medium.com/predict/apple-calls-bullshit-on-the-ai-revolution-ae38fdf83392
- Sam Altman, Nov 14, 2024, there is no wall, https://x.com/sama/status/1856941766915641580