THE RACE FOR AUTONOMOUS AI: The Key to Global Technical and Economic Dominance

By Mark Anderson

Why Read: In this week's discussion, we will replace the hyper-hype of ChatGPT and dreams of artificial general intelligence (AGI) flowing from a language app with the grounded and critically important concept of autonomy. At the same time, we will suggest dropping the outworn ideas of the Turing test and Vernor Vinge's (actually well-thought-out) singularity, replacing them with a reverse Turing test: measuring how we measure up to computing machines. Finally, we will unveil a series of international schemes to steal and distribute Tesla's crown jewels - schemes which, while imperfect, perfectly suit China's Standing Committee. - mra
What Is Autonomy? And Why (Almost) Everyone Cares

In this issue, we'll introduce you to an unfolding story that is still in the early stages of discovery. To that end, we have to say at the start that most of the information here will be based on patterns, as we work to unearth more evidence. Longtime SNS members recognize that our charter is to predict what will happen in the world of technology driving the global economy - not to provide court-solid evidence.

OK, onward. We'll start by briefly listing the foundational blocks that you've read about here and that brought us to where we are today.

The AI Thread

Members are aware of our view of the technical uses of generative AI (GAI), ChatGPT, generative pre-trained transformers (GPTs), and explainable AI (XAI). For a quick review, you can now see our work in a just-released book on the next decade in technology (Ignite 2034, published in German and English), in the chapter titled "The Future of AI: Pattern Discovery, Explainable AI, and Ethical AI."

We could start by suggesting that, despite the hyperbole that continues to come out of OpenAI and its CEO, the likelihood of its contextual language program achieving artificial general intelligence (AGI) approaches zero. But don't take our word for it; watch all the companies now backing down from those dreams and getting serious about using ChatGPT only for language-related tasks. As mentioned in a recent GR, here's one interesting new paper that's come out on this false promise: ChatGPT is bullshit | Ethics and Information Technology (springer.com)

And a few other waypoints on this journey:

1. Let's replace the outdated idea of the Turing test (measuring a human's ability to discern whether a computer is or is not a human) with a reverse Turing test (in which a computer decides whether a human is interesting enough to be considered intelligent).

2. Instead of - or in addition to - the AGI hunt, let's focus on autonomy. After all, a compute system that can perform successfully on its own, without human guidance, may be the ultimate intended purpose of AGI in any case.

3. As with AGI, no one has yet succeeded in the hunt for autonomy, although it is likely that Elon Musk and Tesla have pressed the limits of the types of tools (neural networks, or NNs) they're using.

4. We have suggested that NNs will not lead to autonomy, sharing dialogues we have had in Ireland at the University of Limerick and elsewhere - leading to the proposal that autonomy cannot occur without true explainability.

5. It appears likely, therefore, that specifically because of this problem, neither NNs nor the NN-driven GPTs or LLMs will ever deliver autonomy.