
"Next Year's News This Week"
HOW LEARNING MODELS WILL SHAPE AI'S FUTURE

By Berit Anderson

________

Why Read: With generative models seeing diminishing returns on massive compute scaling, AI's brightest minds are striking out on their own and going back to the drawing board. This week's Global Report explores the next generation of approaches, models, and strategies from Yann LeCun, Ilya Sutskever, and Mira Murati.

________

At some point, though, pre-training will run out of data. The data is very clearly finite. What do you do next? Either you do some kind of souped-up pre-training, a different recipe from the one you've done before, or you're doing RL, or maybe something else. But now that compute is big, compute is now very big, in some sense we are back to the age of research.

- Ilya Sutskever FRS, Co-Founder, OpenAI

AI pioneer Yann LeCun is leaving Meta, and he's done mincing words.

"Silicon Valley is completely hypnotized by generative models," LeCun told an audience at AI Pulse in Paris last week, where he announced his new company, Advanced Machine Intelligence, which he intends to use to build a new type of artificial-intelligence model he calls world models. "To pursue this kind of research, you have to go outside the Valley - to Paris," he said.

LeCun leaves behind a strong cohort of AI founders, but he's not wrong that, sometimes, to go against the grain and build something focused and powerful, one needs a new perspective and an environment that can sustain it.

Beyond their views on generative models, at least two other beliefs seem to be shared by LeCun's fellows in AI R&D - Ilya Sutskever, Mira Murati, Geoffrey Hinton, Mustafa Suleyman, Elon Musk, and Dario and Daniela Amodei:
Geoffrey Hinton used his 2024 Nobel press conference to nail both points home:

I'm advocating that our best young researchers, or many of them, should work on AI safety, and governments should force the large companies to provide the computational facilities that they need to do that.

"I'm particularly proud of the fact that one of my students fired Sam Altman," he told a group of assembled journalists the day he was awarded the Nobel Prize in Physics for his work on neural networks and promotion of AI safety.

The student in question, of course, was Ilya Sutskever, who - as recent court releases have now made clear - lobbied hard for Altman's ouster behind the scenes because of Altman's manipulation and deception of OpenAI's board and staff.

Sutskever is unfortunately better at AI research than at office politics. His lack of clarity within OpenAI about the reason for Altman's firing, among a crew of employees concerned about the value of their stock options, created the window of uncertainty that allowed Altman to catapult himself back into the CEO seat in less than a week.

Even Mustafa Suleyman, whose work at Microsoft has required close collaboration with OpenAI, and who has publicly stated his love for Altman, is now souring on him, calling the latter's latest move - providing erotica through OpenAI - "dangerous."

I met Mustafa several years ago at an event in New York City. At the time, he was still running DeepMind, which was later acquired by Google and became a strategic core of its AI portfolio. This is the same AI portfolio that first developed LLMs, long before OpenAI, but didn't release them publicly because they weren't yet reliable. One wonders how he feels about cleaning up the mess of a man who copied Google's work to launch the biggest artificial general intelligence (AGI) scam in Silicon Valley history.

Generative models are cool and useful in specific contexts, but no amount of compute or pre-training can turn them into general intelligence.

Won't everyone please stop giving Sam money?
AI Innovation: The Upside of Treachery

There is an upside to all of this. Altman's willingness to screw over his friends and collaborators has been perhaps the greatest driver of AI innovation in the modern era. Musk's xAI, Google's Gemini, the Amodeis' Anthropic, Sutskever's Safe Superintelligence, and Murati's Thinking Machines Lab were all born out of the spite that Sam's treachery inspired. Gemini and Anthropic have already leapfrogged OpenAI in model success.

Even LeCun's new company primarily exists because Meta got bogged down in the generative-model hype, leaving insufficient resources and attention for the kind of real next-generation AI innovation he envisioned. Not a personal betrayal, but a professional one.

All these people are driving toward artificial general intelligence. "Most of the top researchers that I know believe that AI will become more intelligent than people," Hinton said during his Nobel press conference. "They vary on the timescales. A lot of them believe that that will happen sometime in the next 20 years."

There is finally growing consensus that further training, pre-training, and post-training of generative models on data, artificial data, and imaginary data is reaching a dead end. And no one has really moved the needle on a new approach.

Yet.
The Next Gen: Beyond Generative Models

But LeCun joins a small and fascinating club of founders trying. Alongside him are Ilya Sutskever with Safe Superintelligence and Mira Murati with Thinking Machines Lab. What exactly is each working on?

Safe Superintelligence

Safe Superintelligence (SSI) bills itself as a lab and, as befits a stealth operation, its website is light on details - e.g.:

"We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs."

Great.

"We plan to advance capabilities as fast as possible while making sure our safety always remains ahead."

Excellent. But how?

Ilya is famously tight-lipped about his work. But a recent podcast appearance with Dwarkesh Patel provides some clues to the way he is thinking about his work with SSI. Specifically, the superintelligence he describes building doesn't sound generative. Rather, he is teaching it how to learn, not just how to regurgitate the most likely string of English words.

From the Dwarkesh Podcast (11/25/25):

Dwarkesh Patel 00:50:45
I see. You're suggesting that the thing you're pointing out with superintelligence is not some finished mind which knows how to do every single job in the economy. Because the way, say, the original OpenAI charter or whatever defines AGI is like, it can do every single job, every single thing a human can do. You're proposing instead a mind which can learn to do every single job, and that is superintelligence.

Ilya Sutskever 00:51:15
Yes.

Dwarkesh Patel 00:51:16
But once you have the learning algorithm, it gets deployed into the world the same way a human laborer might join an organization.

Ilya Sutskever 00:51:25
Exactly.

"The main thing that distinguishes SSI is its technical approach," Ilya claimed in the podcast. "We have a different technical approach that I think is worthy and we are pursuing it."

He also sounded quite dedicated to the idea of building a superintelligence tuned to a specific set of values - one that cares for sentient life, people, and democracy:

I think in particular, there's a case to be made that it will be easier to build an AI that cares about sentient life than an AI that cares about human life alone, because the AI itself will be sentient. And if you think about things like mirror neurons and human empathy for animals, which you might argue it's not big enough, but it exists. I think it's an emergent property from the fact that we model others with the same circuit that we use to model ourselves, because that's the most efficient thing to do.

Thinking Machines Lab

Murati's Thinking Machines Lab, designed to improve human-to-machine interactions, is decidedly not in stealth mode. The company has already released the beta of its first product, Tinker - a platform allowing researchers to fine-tune open-source models.

As reported by WIRED:

Murati says that Thinking Machines Lab hopes to demystify the work involved in tuning the world's most powerful AI models and make it possible for more people to explore the outer limits of AI. "We're making what is otherwise a frontier capability accessible to all, and that is completely game-changing," she says. "There are a ton of smart people out there, and we need as many smart people as possible to do frontier AI research."

TML has also released several research papers focused on advancing generative models.
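A quick aside before quoting from them: LoRA, short for low-rank adaptation, is a fine-tuning technique that freezes a model's pretrained weights and trains only a small pair of low-rank "adapter" matrices added alongside them, which makes tuning large models dramatically cheaper; FullFT, by contrast, is ordinary full fine-tuning of every weight. The snippet below is a generic PyTorch illustration of the idea - my own sketch, not Tinker or TML code, with the layer size and rank invented for the example.

# Generic LoRA illustration (not Tinker/TML code): a frozen linear layer plus a
# trainable low-rank update. Output = base(x) + (x A^T B^T) * (alpha / r).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features=1024, out_features=1024, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        # "A" starts as small random values, "B" starts at zero, so training
        # begins from the unmodified base model.
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear()
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"training {trainable:,} of {total:,} parameters")  # ~16K of ~1.07M

Here only the two small adapter matrices train - roughly 16,000 parameters against the layer's million-plus - which is why the result quoted next matters: if the cheap path really gives up nothing in sample efficiency or final performance, full fine-tuning becomes hard to justify.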
One, by John Schulman, explored various training methods:

In our experiments, we find that indeed, when we get a few key details right, LoRA [low-rank adaptation] learns with the same sample efficiency as FullFT and achieves the same ultimate performance.

Another, by Horace He, explores how to get past the problem of non-determinism in models.

Unlike SSI, Thinking Machines Lab seems to be focused almost exclusively on open-source generative models. The team is smartly going after the platform play, making models accessible to a wider audience, and does seem to be promoting good, levelheaded analysis of the weaknesses of LLMs. That being said, it doesn't look likely that they'll be spending their cycles on new types of artificial intelligence.

Ah, well. It was worth a look. Which brings me back to Yann.

Advanced Machine Intelligence

Yann LeCun is also a proponent of open-source models - and quite a bit more transparent in his approach than Ilya Sutskever - in part because world models have been part of his plans for years, and he has published many papers, and given many talks, on his thinking.

[The following comments were transcribed by the author from event video:]

"This idea of world models is very old," LeCun told the audience in Paris. "I started advertising the fact that this would be the path toward progress in AI about 10 years ago. I was thinking about it for much longer. [...] From 2016, it took me another five years perhaps to realize that you could not do it with generative models."

And:

As humans, we think language is essential to intelligence; but in fact, it's not. It's useful, no question. But animals are pretty smart. They're much smarter than the best robots that we have today. They understand the world really well. And it turns out that understanding the physical world is much more difficult than understanding language.

And:

You need a system to be able to predict what's going to happen in the world and what's going to happen as a consequence of its actions, and that's really what a world model provides. It's really a completely new blueprint for AI systems. The company I'm creating is really focusing on this - basically, building the next-generation AI systems that will have those capabilities. It's completely orthogonal to what LLMs are doing. These systems will deal with data that is high-dimensional, continuous, noisy - all things that LLMs really can't deal with.

LeCun's plan is to base these world models on his Joint-Embedding Predictive Architecture (JEPA) framework, which he outlined in a 2022 paper while at Meta. He has been careful to say that Meta is a partner - but not an investor - in this work. In other words, he has Meta's blessing to use his own research in this new company. As stated in that paper:

I submit that devising learning paradigms and architectures that would allow machines to learn world models in an unsupervised (or self-supervised) fashion, and to use those models to predict, to reason, and to plan is one of the main challenges of AI and ML today. One major technical hurdle is how to devise trainable world models that can deal with complex uncertainty in the predictions.

While these words were written a few years ago, they remain true. Rohit Bandaru has a fantastic technical deep dive on the architecture here.
Source: Rohit Bandaru
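To make the distinction from generative models a bit more concrete, here is a deliberately minimal sketch of the joint-embedding idea - a generic PyTorch illustration only, not code from Meta, LeCun, or Advanced Machine Intelligence. Real JEPA models (I-JEPA, V-JEPA) use vision transformers, patch- and clip-level masking, and many other details omitted here; the dimensions, masking scheme, and hyperparameters below are invented for the example. What the sketch does capture is the core move: a predictor is trained to match the target's learned embedding rather than to reconstruct raw inputs.

# Toy JEPA-style training loop (illustrative only). Two corrupted views of the
# same sample are encoded; a predictor maps the context embedding toward the
# target embedding, and the loss is computed in representation space - never
# on raw inputs. The target encoder is an EMA copy of the online encoder, one
# common way to avoid representational collapse.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    def __init__(self, in_dim=256, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 512), nn.GELU(),
                                 nn.Linear(512, emb_dim))
    def forward(self, x):
        return self.net(x)

class Predictor(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, 256), nn.GELU(),
                                 nn.Linear(256, emb_dim))
    def forward(self, z):
        return self.net(z)

encoder = Encoder()
target_encoder = copy.deepcopy(encoder)          # updated only by EMA, not gradients
for p in target_encoder.parameters():
    p.requires_grad_(False)
predictor = Predictor()
opt = torch.optim.AdamW(list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

def ema_update(online, target, decay=0.99):
    with torch.no_grad():
        for p_o, p_t in zip(online.parameters(), target.parameters()):
            p_t.mul_(decay).add_(p_o, alpha=1 - decay)

for step in range(100):
    sample = torch.randn(32, 256)                # random stand-in for real data
    context = sample * (torch.rand_like(sample) > 0.5).float()  # mask half the features
    target = sample * (torch.rand_like(sample) > 0.5).float()

    pred = predictor(encoder(context))
    with torch.no_grad():
        tgt = target_encoder(target)

    loss = F.smooth_l1_loss(pred, tgt)           # embedding-space loss, not pixel-space
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(encoder, target_encoder)

Because the loss lives in representation space, the encoder is free to discard unpredictable detail instead of modeling every pixel - one plausible route around the "complex uncertainty in the predictions" hurdle LeCun's 2022 paper flags, and part of why he argues such systems can handle high-dimensional, continuous, noisy data.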
LeCun says his world models will require much less compute than LLMs; he reported needing only a few thousand GPUs to train a model like V-JEPA 2. His plan is to make them open source, as an alternative to open-source Chinese systems that he worries promote surveillance.

It's a pretty fascinating approach at a time when new ideas in the AI world seem to be in short supply - and exactly the kind of work that might bring us closer to a general AI. It also seems to have a good amount of overlap with the kinds of things Ilya is trying to build at Safe Superintelligence. From a speed-to-innovation perspective, this is a good thing. As Ilya said on the Dwarkesh Patel podcast:

"[T]here is this Silicon Valley saying that [...] ideas are cheap, execution is everything. People say that a lot, and there is truth to that. But then I saw someone say on Twitter something like, 'If ideas are so cheap, how come no one's having any ideas?' And I think it's true too."

Two of the world's top AI minds are now working intently on the future of learning machines. Real learning machines ... not just copying-and-regurgitating machines. That's an idea that could really shake things up.

I, for one, will be keeping a close eye on Safe Superintelligence and Advanced Machine Intelligence.

Your comments are always welcome.
Sincerely,

Berit Anderson
Copyright 2025 Strategic News Service. Redistribution prohibited without written permission.

DISCLAIMER: NOT INVESTMENT ADVICE

Information and material presented in the SNS Global Report should not be construed as legal, tax, investment, financial, or other advice. Nothing contained in this publication constitutes a solicitation, recommendation, endorsement, or offer by Strategic News Service or any third-party service provider to buy or sell any securities or other financial instruments. This publication is not intended to be a solicitation, offering, or recommendation of any security, commodity, derivative, investment management service, or advisory service and is not commodity trading advice. Strategic News Service does not represent that the securities, products, or services discussed in this publication are suitable or appropriate for any or all investors.

We encourage you to forward your favorite issues of SNS to a friend(s) or colleague(s) 1 time per recipient, provided that you cc info@strategicnewsservice.com and that sharing does not result in the publication of the SNS Global Report or its contents in any form except as provided in the SNS Terms of Service (linked below).

To arrange for a speech or consultation by Mark Anderson on subjects in technology and economics, or to schedule a strategic review of your company, email mark@stratnews.com.

For inquiries about Partnership or Sponsorship Opportunities and/or SNS Events, please contact Berit Anderson, SNS COO, at berit@stratnews.com.
Sunday, December 7, 2025
SPECIAL LETTER: RISK AND OPPORTUNITY IN A TIME OF HYPER-DECENTRALIZED NEWS
By Mark Listes | Special Letter | FiReStarter edition

This week's issue, by Pendulum CEO Mark Listes, describes the shifting sands of the information environment, the rise of information warfare tactics being deployed across media by a coterie of actors both military and civilian, and what can be done to track and respond to these emerging paradigms in the digital information space. This FiReStarter Special Letter is shareable with non-SNS members.
Thursday, November 27, 2025
SNS SPECIAL ALERT: PIVOT INTO CHAOS II
By Mark Anderson

This Special Alert is shareable with non-SNS members.
Sunday, November 23, 2025
THE TRILLION-DOLLAR PROMISE: ON DATA CENTERS AND DECEPTION
By Evan Anderson

The AI race has led to the construction of more data centers than even many of us in the tech sector ever imagined possible. At the same time, the promises being made by companies intending to build out AI tools are proving to be beyond dubious. Read on to find out just who is building all this capacity, how it's going, and where it will go from here.
Sunday, November 16, 2025
SPECIAL LETTER: POWERING AI RESPONSIBLY: THE CASE FOR COAL-ASSISTED CCS
By John Pope | Special Letter | FiReStarter edition

This week's issue, by Carbon GeoCapture CEO John Pope, will take you on a journey through the world of carbon capture. The company, selected for its innovative approach to repurposing old coal mines for carbon capture, twinning with datacenters to reduce climate impact, and the intellect of its leadership, is a standout among those seeking to sequester the world's carbon.
Sunday, November 9, 2025
IS THE FUTURE OF SOLAR IN SPACE?
By Berit Anderson

Globally, solar deployment continues to accelerate, as China drives cheap solar adoption worldwide. But even with growth built on the backs of Chinese solar-company profits and forced labor, more acceleration is required. The next frontier may be in space.
Note: Some letters may be republished to include subsequent replies.
Subject: SNS SPECIAL ALERT: Pivoting into Chaos II

Mark,

This is an interesting perspective, but it seems to assume that the "West" has a uniform view, which would be the one of the US. But from Europe, USA imperialism increasingly looks worrying for many people. Furthermore, does it make sense to talk of a "free world" when most of what Europe does is directly or indirectly controlled by the USA?

And to your point about AI, shouldn't non-US citizens worry too about what the USA is "doing right now, today, with these very real and dangerous tools"? (see e.g. https://www.authoritarian-stack.info/)

Philippe Delvaux
Proton.me
Subject: Esther Dyson joins FiRe 2026

Exciting news!

Jolene Anderson
Co-Founder & Managing Director
Berit,

This is great! Love Esther.

Pam Miller
Corporate Communications & PR Leader | Sustainability Commissioner
Berit,

Wow! Thank you! That's a nice biography... :)

Esther Dyson
Angel Investor
Esther,

We're so lucky to have you onboard! I wish I could take credit for it, but Emma and Sally are the drivers behind our marketing efforts. Hope you had a wonderful Thanksgiving, and thank you again for the connection to [redacted]. It sounds like there's a chance to make it happen next year.

Berit Anderson
Berit,

Yes, that would be wonderful!

Esther
Subject: SNS: THE TRILLION-DOLLAR PROMISE: On Data Centers and Deception

Evan,

This might have been your best piece ever / in a long time. Didn't [Amory] Lovins say something in San Diego [at FiRe 2025] about not needing nearly as much energy as they think they do?

Paul Shoemaker
Executive Director, Carnation Farms
Paul,

Thank you! High praise. I certainly enjoyed the writing of it.

I believe he did; I think the underlying idea was that energy use could be drastically lower and he had proved that out, but that doesn't always translate to companies acting on the ways they can do so. Link is here, maybe I will rewatch: https://www.youtube.com/watch?v=5AiTKp9q_vY

Evan Anderson
Evan,

Do y'all think anyone in DC has this on their radar? That will grok, and know what to do about, it?

Paul
Paul,

Yes, and yet, no. There are a lot of smart people aware of some of these problems, and maybe even the entirety being worse than any component part. I fear they are way, way behind on novel and creative solutions, and even farther on political willpower to properly address the issue. This likely hampers our defenders on a near daily basis in myriad ways.

Evan
Subject: Scientists make breakthrough that could render batteries obsolete: 'The beginning of a new generation'

Mark, Scott [Foster], et al.,

If they can get this to really work and reach acceptable consistency, at a competitive cost, this sure seems like a mind-blowing advance in innovative technology. Interesting that Europe is leading the way, rather than the US.

John Petote
Founder, Santa Barbara Angel Alliance
All,

It certainly looks fascinating. I picture little silicon muscles they've made that extract the energy of movement and convert it to electricity. The road to commercialization is, of course, long. Always nice to see something promising come down the line though.

Evan
Mark will be speaking at, and/or attending, the following conferences and events:

* January 12-15, 2026: J.P. Morgan Healthcare Conference, San Francisco
* May 31-June 3, 2026: Future in Review, Qualcomm Institute, UC San Diego
Strategic News Service (SNS) is a membership-based news organization providing the most accurate source of information about the future of technology as it drives the global economy. It is the publisher of the weekly SNS Global Report, which brings members predictive, deep-dive analysis at the intersection of tech, economics, and geopolitics. SNS hosts monthly virtual Spark salons, offering members access to its global network of leaders in business and industry and allowing them to capitalize on the insight and experience of the broader network. Annually, SNS releases CEO Mark Anderson's Top 10 Predictions for the coming year, which have a 95.3% publicly graded accuracy rating. Founded in 1995, the Global Report is read by industry leaders worldwide. Bill Gates has called it "the best thing I read."
Copyright 2025 Strategic News Service LLC

"Strategic News Service," "SNS," "Future in Review," "FiRe," "INVNT/IP," and "SNS Project Inkwell" are all registered service marks of Strategic News Service LLC.

ISSN 1093-8494









