
THE MESSY, ANTISOCIAL FUTURE OF AGENTIC AI
By Berit Anderson
Why Read: Agentic AI has become ubiquitous. Meanwhile, the hallucination rates of "reasoning models" have never been higher. In this week's issue, we explore the future implications of widespread agentic AI deployments across security, personal, and financial platforms.

_______

Society is facing a bizarre technological reality: billions of dollars of sunk investment are currently pushing an inherently broken (albeit extremely helpful, in limited capacities) new tool into the vast majority of technology systems driving our businesses, our finances, our infrastructure, our personal lives, and the global economy itself.

There are real potential benefits to the widespread use of agentic large language models - chief among them the ability to eliminate a huge amount of busywork currently done by humans. Doctors, for example, no longer need to physically type up notes on each patient in an already saturated schedule. Entrepreneurs can launch low- or no-code apps and websites and create unlimited marketing materials without ever hiring a team. Mid-level managers can answer twice as many questions and provide far more guidance to direct reports by reviewing LLM-written emails rather than drafting each one themselves.

This report is not about those benefits; they are already at the center of the conversation about AI. It is about the challenges and issues we can expect to arise from the blind, widespread adoption of hallucinatory agentic reasoning systems and other forms of LLMs.

They are here. They are headed for widespread adoption. It's time we started discussing the real, granular impacts of that.