Anthropic Is Having a Month — and It Says a Lot About Where AI Goes Next
Editorial Note: The code leak described in this article has since been identified as part of an Anthropic April Fools' Day prank. It worked. My hat is off to you! But that is the only aspect of this article that was a prank.
TL;DR
Anthropic’s month is basically the entire AI industry in miniature: huge revenue and product momentum, Claude hitting the top of the App Store, a high-stakes fight with the Pentagon over AI red lines, and a self-inflicted Claude Code leak. The takeaway: frontier AI labs are now judged like infrastructure companies, not research projects.
Massive growth, rapid product launches, a public fight with Washington, and an embarrassing Claude Code leak have all landed at once. That combination is not a side story. It is the new shape of the AI race.
Anthropic has spent years building its reputation as the careful AI company: the lab that wants to move fast enough to matter, but cautiously enough to claim a different moral and operational standard than the rest of the field. That positioning has been reinforced by its public writing on model behavior, safety, and the limits of AI deployment. Which is exactly why its recent stretch feels so revealing. When a company sells prudence as part of the product, its operational mistakes stop looking routine and start looking strategic.
The most immediate flashpoint is the Claude Code leak. Anthropic said a release packaging mistake exposed nearly 2,000 internal source files and more than 500,000 lines of code tied to Claude Code, one of its most important developer products. The company said no sensitive customer data or credentials were exposed and described the incident as human error rather than a security breach. What makes it more damaging is the timing: it followed another recent security lapse that reportedly left thousands of internal assets tied to an unreleased model publicly accessible. Even if neither incident proves existential, two self-inflicted disclosures in quick succession raise uncomfortable questions about release hygiene at a company whose brand is built on rigor.
That would already be a bad week. In Anthropic’s case, it is also a bad week happening in the middle of extraordinary momentum. In February, Anthropic announced a $30 billion funding round at a $380 billion post-money valuation. By the company’s own figures, run-rate revenue had reached $14 billion, more than 500 customers were spending over $1 million annually on Claude, eight of the Fortune 10 were customers, and Claude Code alone had surpassed $2.5 billion in run-rate revenue, with weekly active users doubling since January 1. Those are company-reported run-rate numbers, not audited annual revenue, but they still show why the market is paying close attention.
The deeper issue is that what leaked was not the model itself but the product harness around it. In today’s AI market, that distinction matters less than people think. The moat is increasingly not just raw model intelligence; it is the surrounding system of tools, permissions, orchestration, memory, interface decisions, and workflow design that turns a capable model into a dependable agent. Axios reported that the exposed Claude Code material also revealed clues about Anthropic’s roadmap, including deeper memory and more persistent background assistance. That is exactly the layer rivals want to see, because it is the layer that converts benchmark performance into real-world developer adoption.
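To make that "harness" idea concrete, here is a minimal sketch of the kind of scaffolding involved: a tool registry, a permission gate, and a memory that feeds results back into context. Every name and structure in it is a hypothetical illustration of the pattern, not Anthropic's actual code.

```python
# A minimal sketch of an agent "harness": the tool registry, permission
# gate, and memory loop that sit around a model. Hypothetical throughout.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[[str], str]
    requires_approval: bool = False  # e.g. shell commands, file writes

@dataclass
class Harness:
    tools: dict[str, Tool] = field(default_factory=dict)
    memory: list[str] = field(default_factory=list)  # persisted across turns

    def register(self, tool: Tool) -> None:
        self.tools[tool.name] = tool

    def step(self, action: dict) -> str:
        """Execute one model-proposed tool call under the harness's rules."""
        tool = self.tools[action["tool"]]
        if tool.requires_approval and not action.get("approved", False):
            return "blocked: awaiting human approval"  # the permission gate
        result = tool.run(action["input"])
        self.memory.append(result)  # results flow back into model context
        return result

harness = Harness()
harness.register(Tool("read_file", run=lambda path: f"<contents of {path}>"))
harness.register(Tool("run_shell", run=lambda cmd: f"<output of {cmd}>",
                      requires_approval=True))
print(harness.step({"tool": "read_file", "input": "main.py"}))   # runs
print(harness.step({"tool": "run_shell", "input": "rm -rf /"}))  # blocked
```

Nothing in that sketch touches model weights. It is all product engineering, which is why a leak of the harness can matter nearly as much as a leak of the model.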
Anthropic’s own research helps explain why Claude Code has become so strategically important. In a February research post on agent autonomy, the company said the Claude Code user base doubled between January and mid-February. It also reported that the 99.9th-percentile duration of Claude Code turns nearly doubled from under 25 minutes in late September to over 45 minutes in early January. Internally, Anthropic said success rates on users’ hardest tasks doubled while average human interventions per session fell from 5.4 to 3.3. In plain English: developers are not just trying the tool. They are learning to trust it with longer, more autonomous work.
That trust has been reinforced by relentless shipping. Anthropic’s March release notes show memory arriving for free users, inline charts and visualizations inside chat, persistent phone control for Cowork, and a computer-use research preview in Cowork and Claude Code. On the platform side, Anthropic has continued expanding agent tooling and long-context capabilities, with newer Claude models supporting extended thinking and larger context windows, including general availability of a 1-million-token context window on select 4.6 models. Put differently, Anthropic is not coasting on one good model cycle. It is turning Claude into a broader work platform.
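Those platform capabilities reach developers through the public API. As a rough sketch, here is how extended thinking is requested through the Anthropic Python SDK; the model ID is a placeholder, and the long-context beta flag in the closing comment is one example from a past release, so the current documentation is the authority for the models named above.

```python
# A rough illustration using the Anthropic Python SDK (pip install anthropic).
# The model ID below is a placeholder; exact identifiers and beta flags for
# the models discussed above may differ, so treat this as a shape, not a spec.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=16000,
    # Extended thinking: reserve an internal reasoning budget the model
    # spends before committing to its final answer.
    thinking={"type": "enabled", "budget_tokens": 8000},
    messages=[{"role": "user", "content": "Walk me through this codebase."}],
)

# The response interleaves "thinking" blocks with final "text" blocks.
for block in response.content:
    if block.type == "text":
        print(block.text)

# Long-context access has shipped behind dated beta flags in the past, e.g.
# client.beta.messages.create(..., betas=["context-1m-2025-08-07"], ...)
```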
The consumer side has moved as well. Reuters reported in early March that Claude had climbed to the top of Apple’s U.S. App Store during Anthropic’s public clash with the Pentagon. That does not make Anthropic a consumer-first business; enterprise is still the company’s center of gravity. But it does show something important: Anthropic is no longer just a lab admired by developers, policy people, and enterprise buyers. It is becoming a visible public brand, which means every controversy and every stumble now lands in a much larger arena.
Then there is the political layer. A federal judge temporarily blocked the Pentagon from labeling Anthropic a supply-chain risk after the company argued it was being punished for trying to prevent its AI from being used for fully autonomous weapons or surveillance of Americans. Anthropic’s own public statements made clear it saw the government’s move as unlawful and retaliatory. However one reads the legal merits, the episode shows that Anthropic’s safety posture is not decorative. It is part of its commercial identity, its policy strategy, and its negotiating position with major customers. That can create trust. It can also create friction when state demand collides with corporate red lines.
Anthropic is also still leaning into the research-first image that helped define it. Its March Economic Index report argued that experienced Claude users attempt higher-value tasks and are more likely to get successful outcomes, while API usage is skewing toward more complex and economically valuable work. That kind of research is doing double duty. It gives policymakers a framework for thinking about AI’s labor-market effects, and it gives enterprise buyers a more sophisticated story about where Claude is already creating value. This is part of the broader Anthropic playbook: sell capability, but wrap it in governance, measurement, and institutional seriousness.
That is why this moment matters. “Anthropic is having a month” is not just a good headline for a messy news cycle. It is a concise description of what frontier AI leadership now looks like: explosive demand, fast product iteration, geopolitical entanglement, and a very small margin for operational error. Anthropic still looks like one of the strongest-positioned companies in AI by growth, developer traction, and enterprise adoption. But the same month that proves its momentum also makes clear that frontier labs are now judged less like research organizations and more like infrastructure companies. That is a harsher standard. It is probably also the correct one.