SNS: THE PRODUCTIVITY WASTELAND IN OUR GPT FUTURE

By Mark Anderson

Why Read: OpenAI's public goal of achieving "artificial general intelligence" is not only without serious evidence, but is now increasingly understood to be overstated at best, or a fraud at worst. In this week's discussion, we will examine the much larger economic-risk issues flowing from the tech industry's adoption of these unfounded dreams. - mra

________

Note to members: This week, we are celebrating a new milestone in SNS work history: we were the first to recognize and identify a new world-changing enemy alliance, and to give it a name: CRINK. As with our "first" in identifying China's national business model and its existential threat to Inventing Nations, we are proud to continue to work at this level, at times ahead of national governments, three-letter agencies, and all other media.

To learn more about the CRINK discovery and documentation, go to the "Takeout Window" section below, or visit:

Crink: the new autocratic 'axis of evil' | The Week

for an example of external attribution.


Two Buckets: Current Human Knowledge vs. Future Scientific Discoveries

Today, planned spending on a global basis - predicated on the idea that there is more to Generative Pre-trained Transformer (GPT) technology than just scaled learning about language - runs to trillions of dollars and is growing.

The cost of the data centers alone, and of the energy and water required to support them, pencils out far beyond our current worldwide ability to pay for them. Adding in the related costs of massive implementation and/or failure in markets intolerant of hallucination or of broken trust puts the total even further beyond any current commercial or national budget.

More important, perhaps, is a simple initial question, before we get into these dangerous financial waters: Is this idea real, or just a Valley-driven mirage?

In answer, I'd like to propose a simple gedanken experiment, as Einstein might have said:

Imagine that there are two buckets, and both start out empty. We will label the bucket on the left "All Current Human Knowledge." Into this bucket we will put all written works known to date, of every kind, from Sumerian cuneiform to Shakespeare to Feynman, from poetry to Clausewitz - every single word. After all, language is how humans express their current knowledge, be it right or wrong.

We'll label the second bucket, on the right, "All Future Scientific Discoveries." This bucket will contain all possible future knowledge, facts, scientific understanding, and so on; we might think of it as everything knowable, even if unknown today, about the world around us.

(Keep in mind, these are pretty big buckets.)

As we start to fill Bucket #2, its contents quickly exceed the volume of Bucket #1; so, to keep the comparison fair, we enlarge both buckets at the same time, maintaining our visual understanding of their relative size. Again: the volume of everything we know today vs. the volume of all we may know in the future.

Over and over, as we fill Bucket #2, we have to make both buckets much bigger. In fact, this is an exercise without end: Bucket #2 continues to be filled and expanded and filled again, while the contents of Bucket #1 dwindle by comparison until the level is so low we can no longer see them.
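The dwindling described above can be sketched numerically. The following toy simulation is my own illustration, not from the article: Bucket #1 holds a fixed quantity (all current human knowledge, in arbitrary units), Bucket #2 grows without bound (the growth factor of 10 per round is an arbitrary assumption), and Bucket #1's share of the combined, ever-resized pair shrinks toward zero.

```python
# Toy sketch of the two-bucket thought experiment (illustrative only).
bucket1 = 1.0  # all current human knowledge: fixed, arbitrary units
bucket2 = 1.0  # all future scientific discoveries: grows without bound

for step in range(1, 6):
    bucket2 *= 10                # each round, far more becomes knowable
    total = bucket1 + bucket2    # both buckets resized together
    share = bucket1 / total      # Bucket #1's share of the whole
    print(f"round {step}: Bucket #1 is {share:.4%} of the total")
```

Whatever growth factor one assumes, as long as Bucket #2 keeps expanding, Bucket #1's share tends toward zero - which is the visual point of the experiment.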

This is exactly the story of GPT. Through scaling, it has become more expert at telling us things based on what we already know and have expressed in language. Through scaling, it finds connections among things we as a species already know but that no single person knows - of course. But it does not go beyond this, regardless of Valley hype to the contrary. (For more on this, read my chapter on "The Future of AI" in the recently published book Ignite 2034, NTT Press, out of Germany.)

In other words, the implied promise (and the competitive threat, driven by FOMO) that GPT x.0 will lead anywhere beyond Bucket #1 - beyond what is contained in our language, and therefore what is already available today - is a sham.