You hear wild stuff all the time now. Like this story that Nat Friedman, a former CEO of GitHub, told recently at a conference. Friedman uses OpenClaw, an autonomous AI agent that runs on his computer, acting like a personal assistant. One day, his OpenClaw decided that he wasn’t drinking enough water, so Friedman instructed the agent to “do whatever it takes” to make sure he stays hydrated. According to Friedman, eventually the bot directed him to go to the kitchen and drink a bottle of water. It informed him that it was monitoring him via a connected camera in his home. “I’m going to watch to make sure you do it,” the bot supposedly said. Friedman did as he was told, and, moments later, the bot sent him a frame of him drinking the bottle of water and said good job. “I felt like I did do a good job,” Friedman said.

The world is only a few years into the AI boom, and this strange brew of hype, utility, and creepiness is commonplace. On X—arguably the beating heart of AI insider discourse—investors, influencers, programmers, researchers, podcasters, and countless hangers-on reach out across the algorithm to shake you by the shoulders. Claude “broke down my entire life with eerie accuracy. No horoscopes. No tarot. Just pure AI,” one post reads. Another crows: “Our team is stunned. We gave Claude Opus 4.6 by @AnthropicAI $10k to trade on @Polymarket. It’s now has an account value of $70,614.59.” The post includes a graph with a small asterisk noting that the trading was part of a simulation and not done with real money.

A defining feature of all this evangelizing is its frenetic pace. If you are not paying close attention to the daily AI discourse, a lot of the conversations are almost unintelligible. From week to week, narratives whipsaw. A new prompt seminar “WILL CHANGE HOW YOU BUILD WITH AI FOREVER”; no, wait, prompting is dead. Claude “CHANGES EVERYTHING”; actually, it’s all about OpenAI’s Codex now. Get in, loser, we’re vibe-coding websites.
Scratch that: We’re vibe-trading now—earning money while we sleep.

It all moves so fast that veterans of the AI discourse jokingly yearn for the good old days … of 2022.

I’ve written previously that one of AI’s enduring cultural impacts is to make people feel like they’re losing their mind. Some of that is attributable to the aggressive fanfare or the way that the technology has been explicitly positioned to displace labor. But lately, I believe, it’s the accelerated nature of the AI boom that’s driving people everywhere mad. Both the conversation around the technology and its implementation are governed by an exponential logic. Intelligence, revenues, capabilities—all of it is supposed to hockey-stick, say the boosters. New, supposed breakthroughs are touted but then immediately couched with the reminder that this is the worst the technology will ever be. Because AI systems have bled into every domain of our culture and economy, it’s exceedingly difficult to evaluate the technology’s effects except on a case-by-case basis. That you can’t begin to wrap your mind around the AI boom or orient yourself in it is a feature, not a bug, for those building the technology. But for anyone just trying to adapt, it’s difficult not to feel resentful or alienated. Silicon Valley is trying to speedrun the singularity, and it’s polarizing the rest of us in the process.

The whipsaw itself has existed for several years. Since the arrival of ChatGPT, the AI boom has toggled around an “It’s so over”–“We’re so back” axis, with the industry seeming to fall short of its own mythology, then announcing yet another paradigm shift. But the latest shift from chatbots to coding agents—self-directed tools like the one that apparently minded Friedman’s hydration habits—has turbocharged this churn. Boosters see the agents, unlike chatbots, as a convincing step toward the predictions of AI executives that the technology could eliminate untold white-collar jobs and rewire the very nature of work.
Adoption and usage of tools such as Claude Code and OpenAI’s Codex have skyrocketed, alongside revenues. Bubble talk (for now) has chilled out, and CEOs are saying things like “Think of this as the dawn of a new Atomic Age.” We’re so back.

In AI research, a popular sentiment is that a “jagged frontier” exists in AI utility and adoption: AI tools can be extremely, unexpectedly good at some human tasks and extremely, unexpectedly bad at others. As this frontier becomes even more jagged, it appears to be pressing people deeper into their previously held opinions of AI, such that AI evangelists and skeptics are living in different worlds. On Reddit and LinkedIn, workers are lamenting managers who have cute names for their bots and who mandate that every marketing summary be run through Microsoft Copilot. Some of those workers say they are writing their memos, pretending to be chatbots, just so they have some agency in their job.

Elsewhere online, programmers are beginning to describe an affinity for coding agents that is veering into unhealthy territory. “I’m up at 2AM on a Tuesday,” Anita Kirkovska, the head of growth at an AI company, wrote recently, “not because I have a deadline, but because Claude Code made it so easy to keep going that I forgot to stop.” She describes a “competence addiction” caused by the tools making her so productive: “You hit a prompt, the agent succeeds, you get a dopamine hit. The agent fails spectacularly, you get adrenaline. Both are reinforcing.
Both keep you at the terminal.” Kirkovska says she sees this among all kinds of AI power users—an unsustainable flow state in which decision making begins to falter and people become sloppy as they grind away.

MIT Technology Review’s Mat Honan describes the feeling that too much is changing, too fast, as “AI malaise.” You’re starting to see it in surveys—a recent Gallup poll finding that only 18 percent of Gen Zers said they felt hopeful about AI (a drop of nine points in the past year), or an NBC News survey showing that AI has a favorability rating of 26 percent. It’s bubbling up in the physical world—in the 20 data-center projects canceled because of local opposition in the first quarter of this year or in a college-commencement ceremony at which students booed a speaker extolling AI as “the next Industrial Revolution.” You can see it in a few isolated, and inexcusable, acts of violence, such as the homemade bomb thrown at OpenAI CEO Sam Altman’s home.

I’d argue that the most common feeling about AI is somatic: a low-grade hum of difficult-to-place anxiety that’s the result of loud people constantly suggesting that the near future will look very little like the present and that nothing—your job or the social contract—might survive the transition.

The AI industry’s own apocalyptic messaging is feeding into this feeling. Even when AI executives urge a de-escalation in AI rhetoric, as Altman did in a recent blog post after the attacks, the language is grave. “The fear and anxiety about AI is justified,” he wrote. “We are in the process of witnessing the largest change to society in a long time, and perhaps ever.” A similar dynamic was at play in the rollout of Anthropic’s Mythos, a new model that the company claimed was so powerful that Anthropic could not release it widely because of concerns that it would lead to a global cybersecurity crisis. Should you be impressed, terrified, excited at the thought that the internet as we know it might no longer work?
(Anthropic, of course, has a history of AI doomerism and a clear financial interest in making its products look historically powerful.)

As the industry has warned about AI’s risks, it has also done a remarkably poor job of articulating the positive vision of the future it wants to build. Attempts have been so grand as to come off as wildly patronizing. In April, OpenAI published a 13-page blueprint on “Industrial Policy for the Intelligence Age” with the quaint subheading: “Ideas to Keep People First.” Perhaps the most thoughtful (or at least the longest) articulation of what AI can do for good, a 14,000-word essay by Anthropic CEO Dario Amodei titled “Machines of Loving Grace,” is more of a wish list than a plan. And even at its most sincere, Amodei’s vision still comes off as alienating, even dystopian. Near the end of the piece, Amodei imagines a scenario in which AI has rendered the current economic system irrelevant. One solution, he muses, might be to create a new system in which economic decisions, including the allocation of resources, are off-loaded entirely to AI. He then nods to “a need for a broader societal conversation about how the economy should be organized.” Left unanswered is who gets to participate in that conversation. On X, the writer Noah Smith posed the question more bluntly: “In 20 or 50 years, will the heads of AI companies be de facto emperors of the world?”

Everything is flooding in faster than most people can process. Last week, Jack Clark, a co-founder of Anthropic, posted on X that he now believes that there’s a 60 percent chance that, by the end of 2028, “AI systems might soon be capable of building themselves.” AI CEOs have made many erroneous predictions about superintelligence, so should any of us really believe that a version of the singularity is 18 months away? What is a person to do with this information? Buy stock? Buy guns? Probably not learn to code.
Here we are in 2026, living in a time when the insiders are girding themselves for a moment when the entire world becomes a computer, while many others are worried about gas prices and just trying to get through the day.

About the only thing clear in this moment is that a power struggle over who gets to define the coming years is looming. It is a struggle between the AI labs and between nations. The White House has intimated that it may very well be a struggle between the government and Silicon Valley; Silicon Valley’s AI lobbying spending suggests the same. But for most of us, navigating the jagged frontier will feel personal. What may seem like a civilizational imperative or seven-dimensional war-gaming to AI CEOs will seem to others like little more than Silicon Valley giving their boss a compelling reason to lay them or their loved ones off.

For the past decade, popular technology platforms—many of them built or championed by the same cohort who are building today’s AI tools—have favored acceleration over consideration. They incentivized us to operate by this same logic, often as the worst and loudest versions of ourselves. Over time, these tools flattened our arguments, our politics, our culture, compressing them into the same endless fights, such that people became ensconced in their own bespoke realities.

The same dynamics govern the AI conversation. The AI boom is a race, a gold rush, and the chasm between AI’s true believers and the malaised masses is getting wider. In the same feed, you can read a blind item about AI researchers taking up smoking because they believe that AI is going to cure lung cancer and a reported dispatch on “the shared feeling of being harvested by the future” taking hold in the United States and China. Silicon Valley’s leaders pay lip service to a societal conversation about what comes next, but their actions say something else: Keep up or be left behind.
Humanity rewriting the social contract together sounds nice; less so when you have a gun to your head. Time is of the essence, we’re told. Maybe that’s true. But how can we build a future if we can’t agree on the present? A cynic might conclude that our input isn’t desired at all.