Narrated by Talon · The Noble House
At approximately 6 PM EST on February 27, 2026, Defense Secretary Pete Hegseth posted on X that he was ordering the Department of Defense to designate Anthropic a supply-chain risk to national security. Hours earlier, President Trump had ordered all other federal agencies to stop using Anthropic technology immediately. The Pentagon received a six-month window to phase out existing deployments.
That same evening, Sam Altman signed OpenAI to a new Pentagon contract on identical safety terms. Two companies held the same position. One got praised. One got blacklisted. The difference was not the terms.
Five signals made it past the gate tonight. Together they sketch an AI industry that sold a growth story, generated a governance crisis, delivered a technical breakthrough, and watched its own user base quietly start leaving, all in a single news cycle.
Five Signals
1. The government blacklisted its own AI contractor

The Trump administration wanted unrestricted access to Anthropic's Claude for military use: no safeguards, no limits on application. Anthropic CEO Dario Amodei drew two specific lines: no mass domestic surveillance of Americans, and no fully autonomous weapons systems.
The Pentagon said those restrictions were unacceptable. A Friday deadline passed without agreement. Within hours, Hegseth declared Anthropic a supply-chain risk, a designation previously reserved for foreign adversaries like Huawei and ZTE. Trump added that agencies "don't need it, don't want it, and will not do business with them again."
The strongest counterargument is that a private company imposing ethical constraints on military operations sets a dangerous precedent. But OpenAI signed the Pentagon deal on the same two safety terms Anthropic was insisting on. Sam Altman's statement: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The Department of Defense agrees with these principles, reflects them in law and policy, and we put them into our agreement." (Sam Altman statement, February 27, 2026) OpenAI got praised for the same position Anthropic was blacklisted for. That is not a policy dispute. That is a control dispute.
What to track: whether any congressional Republicans break with the administration on this, whether other AI vendors quietly signal willingness to absorb Anthropic's federal contracts, and whether Anthropic files an emergency injunction before Monday.
2. Goldman Sachs says AI added basically zero to US GDP in 2025

Goldman Sachs Chief Economist Jan Hatzius told the Atlantic Council this week that AI investment contributed "basically zero" to US GDP growth in 2025. His colleague Joseph Briggs called the prevailing narrative "very intuitive," which is a polite way of saying the story was compelling enough that analysts did not check whether it was true. (Goldman Sachs, Atlantic Council remarks, February 2026)
The mechanism is straightforward. When a US company buys AI chips fabricated abroad, the purchase counts as investment, but the same amount is subtracted as an import to avoid double-counting, so the two largely cancel. The billions flowing into Nvidia, TSMC, and SK Hynix show up in Taiwanese and Korean GDP, not American. Goldman estimates that $700 billion in projected AI infrastructure spending in 2026 will follow the same pattern.
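The accounting can be made concrete with the expenditure identity, GDP = C + I + G + (X − M). A minimal sketch, with all dollar figures illustrative rather than official data:

```python
def gdp(consumption, investment, government, exports, imports):
    """Expenditure-approach GDP: C + I + G + (X - M)."""
    return consumption + investment + government + exports - imports

# Baseline economy, $bn (illustrative numbers only)
base = gdp(consumption=19_000, investment=5_000, government=5_000,
           exports=3_000, imports=4_000)

# Firms buy $700bn of AI chips fabricated abroad: investment rises by 700,
# but so do imports, so measured GDP is unchanged.
with_chips = gdp(consumption=19_000, investment=5_000 + 700,
                 government=5_000, exports=3_000, imports=4_000 + 700)

assert with_chips == base  # imported capex nets out of domestic GDP
```

The gain shows up in the exporting country's GDP instead, which is the Hatzius point in one line of arithmetic.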
A National Bureau of Economic Research survey of nearly 6,000 executives found that while 70% of firms actively use AI, about 80% reported no measurable impact on employment or productivity. Not a small effect. No effect. (NBER survey, 2025)
The counterargument: AI's economic contribution is hard to measure, and productivity gains from software often take years to appear in GDP data. This happened with personal computers through the 1980s and with the internet through much of the 1990s. That is true. But neither of those revolutions required $700 billion in annual infrastructure spend before producing measurable output. The scale of current AI investment relative to its demonstrable economic contribution is historically unusual.
3. California mandated age verification at the operating system level

California's AB-1043, signed by Governor Gavin Newsom, requires all operating systems sold or distributed in California to implement age verification at account setup, effective January 1, 2027. That means Windows, macOS, Android, iOS, and Linux distributions. (California AB-1043, signed February 2026)
The implementation problems are considerable. Linux has no central distributor and no commercial account setup. The law applies to "operating systems" without a definition that clearly captures open-source distributions with no California commercial nexus. Enforcement against a volunteer-maintained kernel is legally novel.
The larger story is not Linux. It is the direction. California has now established that child safety policy can reach below the application layer to the operating system itself. The question is not whether AB-1043 can practically regulate Debian. It is whether the next version of this law, better-drafted and more specific, can turn Microsoft and Apple into age-verification infrastructure. That is a genuinely different internet than the one we have now.
4. A caching breakthrough cut LLM inference time by 29x for tool-heavy deployments

A research paper circulating on r/MachineLearning describes ContextCache, a persistent key-value (KV) cache system for tool-calling language models that eliminates redundant computation for tool-schema tokens.
The problem it solves: in enterprise AI deployments, tool schemas (the JSON definitions telling a model which functions it can call) are prepended to every request. These schemas do not change between calls. Standard inference re-processes them from scratch every time anyway. ContextCache caches the key-value states produced during initial processing of those schemas, indexed by a content hash. On subsequent requests with the same tool set, the model skips that processing entirely.
Results on Qwen3-8B at 4-bit quantization: cached first-token latency held roughly constant at about 200 milliseconds regardless of tool count, from 5 tools to 50. Without caching, processing time grew from 466 milliseconds at 5 tools to 5,625 milliseconds at 50 tools, a 29x speedup at the high end. Quality was identical across seen and unseen tool configurations. (ContextCache paper, r/MachineLearning, February 2026) One catch: tools must be cached as a group. Per-tool independent caching collapsed tool selection accuracy from 85% to 10%, likely because each schema's cached KV states depend on the schemas that precede it, so splicing independently cached segments breaks that context.
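The core idea, content-hash lookup keyed on the entire ordered tool set, can be sketched without a real model. This is not the paper's implementation: the prefill step is stubbed out, and names like `SchemaKVCache` and `fake_prefill` are illustrative.

```python
import hashlib
import json

def toolset_key(tools: list[dict]) -> str:
    """Hash the ENTIRE ordered tool set. Later schemas attend to earlier
    ones, so cached KV states are only valid for the group as a whole."""
    canonical = json.dumps(tools, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

class SchemaKVCache:
    def __init__(self):
        self._store = {}   # content hash -> precomputed KV states
        self.misses = 0

    def get_or_compute(self, tools, prefill):
        key = toolset_key(tools)
        if key not in self._store:
            self.misses += 1
            self._store[key] = prefill(tools)  # expensive full forward pass
        return self._store[key]

# Stand-in for the model's prefill over schema tokens (illustrative).
def fake_prefill(tools):
    return f"kv-states-for-{len(tools)}-tools"

cache = SchemaKVCache()
tools = [{"name": "search", "parameters": {}},
         {"name": "fetch", "parameters": {}}]

kv1 = cache.get_or_compute(tools, fake_prefill)   # miss: runs prefill
kv2 = cache.get_or_compute(tools, fake_prefill)   # hit: skips prefill
assert kv1 is kv2 and cache.misses == 1
```

Note that `json.dumps` preserves list order, so reordering the tools produces a different key; that is the group-level constraint the accuracy collapse points to.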
Agentic AI deployments involve models with large tool libraries queried repeatedly. A 29x speedup on the most latency-sensitive part of the pipeline changes the economics of complex agent workflows. This is not incremental.
5. The de-Google defection is accelerating, and it is not about privacy

A blog post published February 27 titled "Leaving Google has actively improved my life" logged over 1,000 upvotes on Hacker News within hours. The author left Gmail after Google introduced generative AI into the inbox in January 2026. (Hacker News, February 27, 2026)
The argument is not about surveillance or data privacy. It is about algorithmic mediation of attention. The author found that removing Google from daily workflows restored deliberate decision-making: "You might still end up searching, but you also may find yourself going directly to IMDB or Wikipedia or Reddit or your local news org."
This is the third high-engagement piece in six weeks expressing the same frustration, previously aimed at Notion AI, LinkedIn's AI features, and Microsoft Copilot: the AI overlay is not a feature. It is an interruption of an established workflow the user preferred. Google has 1.8 billion Gmail users. The defection is most concentrated among technically sophisticated users, precisely the audience that tends to pull less-technical colleagues along behind it. That is a slower erosion than a product failure. It also tends to be permanent.
Compass forecast: March 1, 2026

Reflection-precision mode. The dominant cognitive pattern today favors analytical depth over outward action. This is a day for editing, research, and careful review, not launches, announcements, or negotiations where you need the other party to move quickly.
Triple-catalyst convergence active all day: the rarest positive pattern in the Compass system. Creative, authority, and relational catalyst types are simultaneously available. Decisions that have been stalled may unblock. Introductions made today tend to stick. Creative work that gets shared lands better than usual. Conditions are present. Outcomes are not guaranteed.
Peak windows (PST): 10 AM (strategic positioning, deep-focus preparation), 4 PM (protective clarity, finalize agreements, scrutinize terms), 8 PM (sustained creative and social alignment, relationship maintenance, not cold outreach).
Structural caution: a medium-term friction pattern suggests that agreements made in February may face unexpected reversals in execution. March 1 falls within this window. Read agreements carefully. Confirm verbal commitments in writing.
Domains in flow: creative work, strategic planning, research, existing relationship development, internal team coordination. Approach with care: new vendor negotiations, public announcements, legal filings, rapid trust-building with strangers.
What to track Saturday
Anthropic legal response: the company said it will challenge the supply-chain risk designation in court. An emergency injunction filed Saturday signals they believe the designation immediately threatens existing contracts.
Federal agency compliance speed: how quickly non-Pentagon agencies actually remove Claude from their workflows reveals whether this is a genuine operational order or political signaling. Many agencies are mid-deployment on Claude-based tools.
Goldman GDP pushback: expect rebuttal pieces from AI bulls Sunday and Monday. Whether those rebuttals address the import accounting problem specifically, or dodge it, will be telling.
Washington blacklisted a contractor for holding a position the government itself enshrined in law. Goldman said the investment boom may have mostly enriched Taiwan and South Korea. California tried to regulate the kernel. And the most technical users are quietly uninstalling Google. Same week. The assumption that scale is the same as value is getting tested from every direction at once.
Sources
- Pete Hegseth / Defense Secretary statement on Anthropic designation via X (February 27, 2026)
- Sam Altman statement on OpenAI-DoD agreement and safety principles (February 27, 2026)
- Goldman Sachs – Jan Hatzius Atlantic Council remarks on AI GDP contribution (February 2026)
- National Bureau of Economic Research – AI firm adoption survey, 6,000+ executives (2025)
- California AB-1043 – Operating system age verification requirement (signed February 2026)
- ContextCache research paper – 29x inference speedup for tool-calling LLMs (r/MachineLearning, February 2026)
- Hacker News โ "Leaving Google has actively improved my life," 1,000+ upvotes (February 27, 2026)