Behind the Curtain: We've been warned

· Axios

Six facts. No hyperbole. All in the past 60 days.

  1. AI is the fastest-growing product category in world history.
  2. One of the latest models is so powerful that its maker won't release it to the public.
  3. OpenAI and Anthropic say their most powerful AI coding models are now building themselves.
  4. AI companies are growing less transparent as models grow more powerful. The federal government requires zero transparency.
  5. AI resentment is building fast. In early April, the San Francisco home of OpenAI CEO Sam Altman was the target of two attacks in the same week. Shaken, he wrote: "The fear and anxiety about AI is justified ... Power cannot be too concentrated."
  6. AI havoc is no longer theoretical: This year's great software rout erased $2 trillion in value as investors realized, week by week, new human tasks that the latest models would wipe out, from coding to real estate services to legal research to financial management.

Why it matters: A year ago, we wrote a wake-up call to business leaders. This one is for everyone: We've been warned — by the data, by the technology, and by the people most responsible for building it — that we've unleashed something powerful, something growing exponentially, and something understood by very few, especially those in power.


Between the lines: Think of this as the dawn of a new Atomic Age. The atomic race that culminated in 1945 was the last time our species grappled with the advent of such a transformative, awe-inspiring technology. Its possibility — for both prosperity and destruction — led to the creation of science fiction that imagined everything from utopia to apocalypse.

  • Much of the most viral writing about AI can be considered modern science fiction. "AI 2027," a 2025 attempt to game out superhuman intelligence led by a former OpenAI researcher, ends with AI either supporting a pro-democracy revolution that spans the solar system or harvesting humanity's brains.
  • This year's discourse did much the same. Matt Shumer's viral "Something Big Is Happening" conflated AI's code-generating ability with the arrival of an intelligence with real taste. Citrini Research's "The 2028 Global Intelligence Crisis" imagined a worst-case economic scenario that involved zero effective response from either governments or markets.
  • These pieces drove so much discussion and, in some cases, moved markets because they could be right. To be clear, they're probably not. They imagine edge cases and extremes. But we can't promise you they're wrong. The president can't. The heads of AI companies can't. If anyone claims they can, that's science fiction, too.

The big picture: We have no clue where this ends, or what good or harm might be unleashed along the way. No one does. But it's increasingly clear that absent better leadership, collaboration and understanding, American society, workers, academic institutions and government aren't remotely ready for what's unfolding.

That starts with grappling with these six realities:

  1. Anthropic is the fastest-growing company in the history of American business. Its annualized revenue jumped from $1 billion at the end of 2024 to $9 billion a year later to $30 billion as of this month. More than 1,000 companies spend over $1 million per year on Claude — a number that doubled in under two months. No company in any era — Rockefeller's Standard Oil, tech-boom Google, pandemic-era Zoom — has scaled organic revenue this fast from this base. (The railroad boom of the mid-1800s, taken collectively, was the previous benchmark.)
  2. Anthropic built a model that it doesn't plan to release to the general public. Claude Mythos Preview can crack critical security vulnerabilities in the operating systems and browsers that power the modern world. It's that powerful — and that dangerous. The company announced that access to Mythos initially would be limited to 40+ organizations to give cyberdefenders a head start on what's coming. No regulator forced Anthropic to adopt this strategy. It did so on its own. Others might not do the same.
  3. AI is now building AI. Boris Cherny, who runs Anthropic's Claude Code, said in late January that "[p]retty much 100%" of the code written inside Anthropic was AI-generated. Anthropic's official position: "We build Claude with Claude." The same pattern holds at OpenAI, where a senior researcher publicly said he no longer writes code by hand. This is only the beginning. OpenAI's chief scientist set 2028 as the target for a fully autonomous AI researcher. Anthropic's co-founder anticipates a decision to allow AI to recursively self-improve between 2027 and 2030, calling that step "the ultimate risk."
  4. As the models get more powerful, we know less about them. Stanford's 2026 AI Index Report says the Foundation Model Transparency Index dropped from 58 to 40 (out of 100) in the last year. Its direct finding: "The most capable models are now the least transparent." There are no meaningful requirements to disclose details of how models are trained.
  5. A 20-year-old man was charged with traveling from Texas to San Francisco and firebombing Altman's house in the middle of the night, then showing up at OpenAI headquarters and trying to smash his way through the front doors. He had a manifesto that named AI execs as targets. He said he came to burn it down and kill anyone inside. Two days after the Molotov cocktail attack, someone else drove by Altman's home and shot at it.
  6. In 10 weeks, AI agents erased $2 trillion from the combined value of public software companies — the first skill and sector targeted by AI. That's more, relative to the market, than the dot-com bust or the 2008 financial crisis.

Hours after the attack, Altman published a blog post. Read his words slowly:

  • "The fear and anxiety about AI is justified."
  • "AI has to be democratized; power cannot be too concentrated."
  • "[W]e are in the process of witnessing the largest change to society in a long time, and perhaps ever."
  • "We are all learning about something new very quickly; some of our beliefs will be right and some will be wrong, and sometimes we will need to change our mind quickly as the technology develops and society evolves."

The man running the most-adopted AI product in history, hours after someone tried to kill him at his home, publicly acknowledged that the anxiety about his technology is warranted — and neither he nor anyone else knows exactly where we're headed.

The bottom line: Any one of these six facts would be the business story of the decade in a normal industry. Together, in a 60-day window, they describe a technology whose growth, power and risk have outrun the public's understanding — and whose builders are saying so, in their own words.

  • We've been warned.

Go deeper: Anthropic's unprecedented growth.

  • Shane Savitsky contributed.
