
The Escape of the LLMs: Losing Control of What You Don't Understand

By Mark Anderson

______

Why Read: What if 95% of the hype and spending around AI today points to a rather interesting idea: that Silicon Valley, once the hub of innovation, is now reduced to not much more than a massive marketing machine? What if Valley mavens can neither help nor escape themselves, and are taking everyone else in the world along for a thrilling ride on a very fast road to nowhere?

Ask yourself: Who, in today's AI vendor world, is the hero - the ethical and brilliant master innovator you would be willing to bet everything on? Hint: There isn't one. And that, if nothing else, should give all of us much more than pause.

And what if, while our attention is diverted by marketspeak, something much more important is happening? What if LLMs have actually, already, escaped human control?

If you are a user of LLMs, you'll want to read this issue. - mra

______

It isn't a crime to intentionally misuse nouns, but perhaps, in the world of AI science and marketing, it should be.

Or, to put it another way: the Valley mavens who brought us the not-brilliant idea that a rather dumbed-down contextual language system would approach genius level, if only we scaled it up massively enough, were wrong.

Why didn't they learn the first time around, when IBM's Watson became the most embarrassing crash-and-burn event in AI history? Scaling a basic contextual "reader" of Wikipedia turned out to be the nothingburger of the century in AI, later remaindered to a dumpster behind Francisco Partners in San Francisco and now living on in brand name only - the brand that would not die - a perfect example of what's wrong with the industry.

Let's make an amusing (but educational) chart of words Valley marketeers have invented to misinform us about products and their attributes:

To be clear, all of these mathematical and coding achievements, if discussed rationally, have attributes worth capturing and using, if only within the preconditions first understood by their inventors and exploiters. Google would not ship LLMs at all, due to risk; OpenAI's CEO Sam Altman apparently shipped without board approval, warning: "I would never use this for anything important."

From Watson to watsonx, from ChatGPT promising AGI to the latest LLM versions promising literally everything, it does seem fair at this stage to say that the Valley machine is all about marketing, not about making great new innovations.

In fact, not only is the moatless, shallow-end-of-the-pool innovation pace of LLMs slowing as they run out of global data to "Hoover up" - the latest launches have been disheartening at best, if not downright frightening.

Why is o3 hallucinating more than its predecessors? Why does each successive reasoning model get worse instead of better? Why does the industry sell us easily custom-fitted benchmark improvements, while children in the early grades complain about how stupid these products are in practice? How does one explain the absolute disaster of the latest shipments of Grok 4 by xAI and of Llama 4 by Meta?

Obviously, the owners of these technologies cannot explain it; they do not understand the technology, which has been hidden all this time behind the "black box" problem - so how would they?

All of the above is a prelude to the release this week of a series of papers and announcements of shared concern by a refreshingly large group of companies and experts regarding a new cost of this ignorance, one which I find as chilling as they do.

What is the problem?
The technology is now escaping the ability of its creators to control it.

Before we get into this more deeply, it might be a good time to go back and watch Walt Disney's version of The Sorcerer's Apprentice.

OK, here we go: