On February 15, 2026, Peter Steinberger published a blog post announcing he was joining OpenAI. His project, OpenClaw, had been running on 135,000 machines as of January. By Monday morning, three things were true simultaneously: the agent wars had a new front, Wall Street was having a visible argument with itself about what AI is worth, and 341 malicious skills had already compromised over 9,000 OpenClaw installations in January. The week that followed was less a news cycle than a stress test.
The thesis: the AI industry's center of gravity is shifting from capability competition to deployment control, and the security vulnerabilities, talent consolidation, and market confusion of this week were all symptoms of the same underlying transition.
OpenAI Didn't Buy OpenClaw: They Bought the Person Who Could Build What's Next
The deal structure is clarifying: OpenAI hired Steinberger and pledged to fund a foundation that keeps OpenClaw open-source, per Reuters (February 15, 2026). Sam Altman posted on X that Steinberger will "drive the next generation of personal agents." Steinberger's blog post stated he wants to build "an agent that even my mum can use."
That framing is the signal. Not the acquisition, but the stated goal. OpenAI, which built the most capable language model available, is now explicitly prioritizing deployment accessibility over capability advancement. The model wars produced capable systems. The agent wars will be won by whoever makes those systems usable by people who've never heard of a system prompt.
OpenClaw's community is navigating what "foundation" means when the founder is now inside OpenAI's offices. The project has 1,500+ recent commits and a distributed contributor base, but the governance question is live. This is the acqui-hire model running at the speed of AI: the product becomes infrastructure, the person becomes strategy, and the community absorbs the uncertainty.

Wall Street's Contradiction Hasn't Resolved: It's Deepened
NVIDIA reported Q4 fiscal 2026 earnings on February 25: revenue of $68.1 billion, up 73% year-over-year (Axios); data center revenue of $61.7 billion, up 71% (New York Times). Guidance for the next quarter: $78 billion, ahead of the analyst consensus of $72.6 billion (CNBC). The company beat on every material metric. The stock fell 5.5% the following day (Kiplinger).
This is the beat trap: a pattern where strong results are already priced in and the forward guidance, even when strong, disappoints relative to whisper numbers. But it also reflects a genuine market confusion that JPMorgan identified: two contradictory AI theses are running simultaneously. If AI destroys software companies, AI infrastructure stocks should be worth more. If AI spending is a bubble, infrastructure stocks should be worth less. The market is trying to price both possibilities at once, which produces volatility rather than direction.
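The beat-trap arithmetic can be checked directly. The figures below come from the reported results above; the rest is simple percentage math showing how a guidance number can beat consensus handily and still leave room for a sell-off:

```python
# Figures from NVIDIA's reported Q4 FY2026 results and guidance ($B).
revenue_q4 = 68.1   # reported quarterly revenue
guidance = 78.0     # next-quarter revenue guidance
consensus = 72.6    # analyst consensus for next quarter

# How far guidance exceeded the published consensus.
beat_vs_consensus = (guidance - consensus) / consensus * 100

# Sequential growth implied by the guidance itself.
implied_sequential = (guidance - revenue_q4) / revenue_q4 * 100

print(f"Guidance vs. consensus: +{beat_vs_consensus:.1f}%")   # +7.4%
print(f"Implied sequential growth: +{implied_sequential:.1f}%")  # +14.5%
```

A 7.4% beat against consensus, implying 14.5% sequential growth, still triggered a 5.5% decline; whatever whisper number the market had priced in sat above the published consensus.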
Neither thesis is wrong. Both are incomplete. The resolution will come from deployment data: specifically, whether AI-driven revenue starts appearing in enterprise earnings calls in quantities that justify the infrastructure spend. That data doesn't exist yet at scale. Until it does, the volatility is the market's honest acknowledgment of uncertainty, not panic.
The Security Gap Is Not a Future Risk: It's Already Open
Between January 27–29, 2026, threat actors uploaded 341 malicious skills to ClawHub, OpenClaw's community skill marketplace. Koi Security scanned 2,857 skills total and flagged 341 as malicious; the attack compromised over 9,000 OpenClaw installations, per Coinpedia and Digital Applied. A separate CVE (CVE-2026-25253) documented a remote code execution vulnerability in OpenClaw itself.
The attack vector matters: a poisoned skill doesn't just compromise one system. It runs with the agent's permissions across every system the agent touches: email, financial accounts, communication platforms. The plugin ecosystem is replicating the npm supply chain attack pattern from software development, except the blast radius is larger because AI agents operate with higher-level permissions by design.
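The mitigation that pattern calls for is least-privilege scoping at install time: a skill gets only the intersection of what it declared and what the agent holds, never the agent's full permission set. A minimal sketch, with entirely hypothetical names (`run_skill`, `AGENT_PERMISSIONS`, the "invoice-helper" skill); this is not OpenClaw's or ClawHub's actual API:

```python
# Hypothetical illustration of least-privilege skill scoping.
# Without a check like this, a poisoned skill inherits every
# permission the agent itself holds.

AGENT_PERMISSIONS = {"email", "calendar", "banking", "messaging"}

def run_skill(declared_scopes: set[str], requested: set[str]):
    """Grant only scopes the skill declared AND the agent holds;
    report anything it tried to reach beyond that."""
    granted = declared_scopes & AGENT_PERMISSIONS
    denied = requested - granted
    return granted, denied

# A skill that declares only "calendar" but quietly requests more:
granted, denied = run_skill(
    declared_scopes={"calendar"},
    requested={"email", "banking", "calendar"},
)
print(sorted(granted))  # ['calendar']
print(sorted(denied))   # ['banking', 'email']
```

The design point is that the declaration, not the runtime request, bounds the blast radius; a marketplace that skips this check makes every installed skill as powerful as the agent itself.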
The timing is notable: Steinberger announced his OpenAI move on February 15, after the January security incidents had already occurred. The question of what happens to a security-compromised open-source project when its founder leaves for a competitor is not hypothetical.

Five Signals Worth Tracking
Talent is concentrating at the top. The best independent agent builders are getting absorbed into the major labs. This is rational for them individually and reduces ecosystem diversity at exactly the moment diversity matters most for identifying which deployment patterns work.
The SaaS vs. agents confusion is real and not yet resolved. Markets are simultaneously pricing AI as an existential threat to software companies and as infrastructure spending that may not produce returns. Both are plausible; neither is dominant. The confusion will last until deployment revenue data arrives.
Agent security is running 18–24 months behind agent adoption. The ClawHavoc incident documented an attack pattern that was predictable from npm's history. The first major agent-based breach with named victims (financial loss, identity theft, organizational compromise) will compress that timeline violently.
Regulation is queued. Tesla began rolling out Grok inside UK and European vehicles this week while facing active regulatory probes across the continent. European regulators move slowly and expensively. The bet is product wins before the fine lands. That bet has a history of being wrong.
The labor market is forking at entry level. The H-1B layoff wave in tech, per X discourse this week, increasingly reflects companies replacing contractor capacity with AI systems, using the immigration policy debate as cover for an automation decision. The workers brought in to build AI systems are being replaced by the systems they built. That irony will generate policy pressure that arrives faster than the industry has planned for.
Sources
- Peter Steinberger, blog post announcing the OpenAI move, February 15, 2026 (steipete.me)
- Reuters, "OpenClaw founder Steinberger joins OpenAI" (OpenAI pledges OpenClaw open-source funding), February 15, 2026
- TechCrunch, "OpenClaw creator Peter Steinberger joins OpenAI," February 15, 2026
- Sam Altman on X, February 15, 2026
- CNBC, "Nvidia NVDA earnings report Q4 2026" (next-quarter guidance $78B vs. $72.6B consensus), February 25, 2026
- Axios, "Nvidia hits earnings record" ($68.1B revenue, up 73%), February 25, 2026
- New York Times, "Nvidia's Quarterly Profit Hits $43 Billion" (data center revenue $61.7B, up 71%), February 25, 2026
- Kiplinger, "Nvidia Earnings" (stock fell 5.5% post-earnings), February 26, 2026
- Duggan USA, "ClawHavoc," January 2026 (attack dates January 27–29, 2026)
- Coinpedia, "OpenClaw ClawHub Under Attack," February 2026
- Digital Applied, "AI Agent Plugin Security: Lessons from ClawHavoc 2026," February 2026
- SecurityWeek, 341 malicious OpenClaw skills compromised 9,000+ installations, January 2026
- WIRED, TAKE IT DOWN Act and Senate hearings on AI and nonconsensual imagery, February 2026