Somebody at Goldman Sachs just said the quiet part out loud
A verified Goldman employee opened Blind last week and described the firm's AI rollout. The post collected 2,831 upvotes and 2,329 comments. The description was specific: the OneGS internal AI platform isn't ready, hiring has slowed, and people are using unapproved tools to keep up with workloads sized for a larger headcount.
The reaction treated this as a security problem. It isn't. Shadow AI at Goldman is rational behavior in an irrational system. The firm expects AI-level productivity while providing pre-AI governance. Employees are solving the equation the only way the math allows.
Ninety percent usage, forty percent visibility

MIT's Project NANDA found that workers at over 90% of companies use AI chatbots. Only 40% of those companies track LLM subscriptions. The majority of enterprise AI use is invisible to the people responsible for governing it.

Anagram, a security training firm, quantified the consequences. Fifty-eight percent of employees have posted sensitive data into AI tools. Not by accident. Deliberately. Forty percent said they would knowingly violate company policy to finish a task faster. Harley Sugarman, Anagram's CEO, reduced the finding to one sentence: "Employees are willing to trade compliance for convenience."
Microsoft's regional AI usage survey landed in the same place. Seventy-one percent of employees use unapproved AI tools at work. Twenty-two percent use them for financial tasks, the category where regulatory exposure is highest.
What it looks like on a trading floor
A junior analyst has a quarterly earnings model to build. The official workflow: pull data from Bloomberg, enter it into an Excel template, run calculations, format output for a partner's review. Four hours. The shadow workflow: paste the raw data into Claude or ChatGPT, ask for the model, copy the output into the template, spot-check the numbers, submit. Forty-five minutes.
The problem isn't output quality. Modern AI builds competent financial models. The problem is the input. When that analyst pastes unreported quarterly earnings data into a third-party AI tool, they've potentially exported material non-public information to a system they don't control, stored on servers they can't audit, processed by a model whose training pipeline they can't inspect.
Samsung learned this in 2023 when employees uploaded proprietary semiconductor code to ChatGPT. The response was a blanket ban. Goldman hasn't banned external AI. Instead, the firm is building OneGS, an internal platform designed to provide the same capability inside a governed perimeter. But OneGS isn't ready. The employees aren't waiting.
The timing trap

Building an enterprise AI platform that meets Goldman's compliance requirements takes 12 to 18 months from proof-of-concept to production. Data residency, audit trails, access controls, output verification, regulatory reporting. During that window, employees keep using shadow tools because the work doesn't pause for procurement.

The Blind posts reveal employees asking for three things: clear protocols for what's approved, quality control for AI-generated work, and protection from blame when AI tools produce errors. These are reasonable requests. They're also unaddressed.
The hiring slowdown compounds everything. When headcount drops but workload doesn't, remaining employees face a choice: work longer hours with approved-but-slower tools, or work normal hours with faster-but-unapproved AI. Forty percent of employees across industries have already told researchers which option they take.
The regulations already exist
Financial regulators haven't addressed shadow AI by name. They don't need to; the existing rules already cover the conduct. GDPR penalties for major infringements reach €20 million or 4% of global annual turnover, whichever is higher. Goldman's 2025 revenue was approximately $51 billion. Four percent of that is over $2 billion.
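The arithmetic behind that exposure figure is simple enough to check directly. A minimal sketch, using the article's approximate $51 billion revenue figure; the EUR/USD conversion rate for the fixed floor is an assumption for illustration only, not a sourced number:

```python
# GDPR Article 83(5) caps fines at the GREATER of a fixed floor
# (EUR 20 million) or 4% of global annual turnover.

FLOOR_EUR = 20_000_000
ASSUMED_EUR_USD = 1.05          # illustrative exchange rate, not a source figure

revenue_usd = 51_000_000_000    # Goldman's approximate 2025 revenue (per the article)
four_pct_usd = 0.04 * revenue_usd
floor_usd = FLOOR_EUR * ASSUMED_EUR_USD

# At this revenue scale, the 4%-of-turnover branch dominates the floor.
max_fine_usd = max(floor_usd, four_pct_usd)

print(f"4% of turnover:  ${four_pct_usd:,.0f}")   # $2,040,000,000
print(f"Applicable cap:  ${max_fine_usd:,.0f}")
```

For any firm with turnover above roughly €500 million, the percentage branch is the binding one, which is why the €20 million floor rarely matters for bulge-bracket banks.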
SEC rules on information barriers and material non-public information don't mention AI. But if an employee processes MNPI through an unapproved tool and that data influences a trading decision, the violation is identical to emailing it to an outsider. FINRA requires supervision of business-related communications. When those communications happen through AI tools the firm doesn't monitor, the supervision obligation isn't waived. It's violated.
No major financial institution has faced enforcement action specifically for shadow AI. That absence isn't safety. It's regulators who haven't caught up. The 90% usage rate guarantees they will. When they do, enforcement will be retrospective. Every interaction before governance was established becomes fair game.
The strongest defense of moving slowly
A premature governance framework that's too restrictive drives shadow AI usage higher. Employees interpret strict rules as evidence that leadership doesn't understand the technology. A framework that's too permissive fails when regulators arrive. The calibration between governed and flexible requires understanding how employees actually use AI. That requires disclosure. Employees won't disclose if disclosure means punishment.
Goldman's approach is architecturally correct. Build an approved alternative. Hope employees migrate voluntarily. The criticism should target speed, not direction.
The rebuttal is timing. Financial institutions have an affirmative obligation to supervise employee data handling. The MIT, Anagram, and Microsoft data show supervision isn't happening. Every month of delay adds retroactive exposure that no post-facto governance framework erases. The question isn't whether Goldman should build OneGS. It's whether they can finish before the first enforcement action arrives.
The cost nobody is counting
The Goldman employees on Blind aren't worried about GDPR penalties. They're worried about their jobs. Headcount declines through attrition. Workloads increase. Remaining employees depend on tools that could get them fired in order to meet performance expectations that implicitly assume they're using those tools.
This is the labor displacement story that doesn't make headlines because nobody gets laid off in a single event. The dependency compounds. The risk compounds. Employees carry both while the firm captures the productivity gains. The employees asking for "clear protocols" aren't asking for compliance guidance. They're asking someone to acknowledge that the current arrangement isn't sustainable. The first institution to face enforcement will provide a very expensive lesson in what happens when acknowledgment comes too late.
Sources: Prism News (February 2026), MIT Project NANDA State of AI in Business 2025, Anagram Security Research (Harley Sugarman, CEO), Microsoft Regional AI Usage Survey, Goldman Sachs 2025 Annual Report, GDPR Article 83, SEC/FINRA supervision requirements