Goldman’s Claude cutoff in Hong Kong is a warning for every global AI rollout
Goldman Sachs just gave enterprise AI buyers a preview of a problem that will keep spreading: you can standardize on one internal AI toolset and still lose a model overnight when geography and contract terms collide. Reuters reported that Goldman removed Anthropic's Claude from its Hong Kong bankers while keeping OpenAI's ChatGPT and Google's Gemini available on the same internal platform.
That is more revealing than a simple vendor ban. Just two months ago, Reuters reported that Goldman had Anthropic engineers embedded with its teams to build AI agents, software that carries out multi-step tasks, for internal work including transaction accounting and client due diligence. A bank that close to Anthropic still had to carve Claude out of one major office. The pressure is shifting from model quality to whether a vendor's regional rules line up with how global companies actually operate.
The immediate trigger appears to be Anthropic's own geography. Reuters said Anthropic told the Financial Times that Claude was never officially supported in Hong Kong. Anthropic's public supported countries page does not list Hong Kong for either commercial API access or Claude.ai, the company's chatbot product.
Anthropic tightened that stance this month. In a company policy post, Anthropic said its terms bar use of its services in some regions because of legal, regulatory, and security risks, and that it is strengthening restrictions on organizations tied to unsupported jurisdictions including China. Hong Kong is not treated as part of China in every business or legal context, but for compliance teams trying to keep an internal AI platform live across borders, that distinction can stop mattering once a vendor writes the restriction into its terms.
That is why Goldman's internal setup matters. Reuters reported that Hong Kong staff had previously used Claude through the same in-house platform and lost access only in recent weeks, while Gemini and ChatGPT stayed online. Inside one bank, the model market did not break on benchmark scores or employee preference. It broke on which vendor's contract could survive a particular jurisdiction.
There is still a caveat here. We do not have the Anthropic contract Goldman was working from, and Anthropic's statement to the FT leaves open the possibility that Hong Kong had been unsupported all along and Goldman only recently enforced the boundary. If so, the novel fact is not a sudden policy change from Anthropic. It is that large enterprises are now under enough real-world compliance pressure to make those dormant boundaries visible to employees.
That still leaves a hard lesson for buyers. If one bank can work closely with Anthropic at headquarters while losing Claude in Hong Kong, multinational AI rollouts start to look less like a clean software procurement decision and more like a patchwork of regional permissions. The next thing to watch is whether more global companies end up designing internal AI stacks around the vendors that can stay legally available across every office, even when another model is the one they would rather use.