David Bennett spent years at Lenovo Japan, ran NEC Personal Computers, and was most recently chief customer officer at Tenstorrent. Shimpei Hara handled GTM and business operations at Tenstorrent, with earlier product and corporate strategy stints at Lenovo. The two have now launched AI&, a vertically integrated Japanese AI company that runs its own data centers, builds orchestration software for heterogeneous compute clusters, and plans to develop its own AI models and agents. They raised $50 million in seed funding and say they have $2 billion in infrastructure capital committed. Those two numbers sit at a 40:1 ratio that deserves scrutiny, which I will get to.
AI& is not starting from scratch. The infrastructure and staff of Unsung Fields, a Japanese AI cloud provider, have been folded into AI&, and Unsung Fields has ceased operations. That means AI& already has two Japanese data centers up and running, with more than 1,000 GPUs, a cluster of Tenstorrent hardware, and 80 existing customers, and it expects to open a third Japanese data center within a month. This is not a company with a PowerPoint roadmap. This is a company that inherited something real and is now dressing it up.
The $2 billion is where the skepticism starts. $50 million in seed funding buys you a seed-stage company. $2 billion in infrastructure capital commitment is a different order of magnitude entirely — it is forty times the seed round, and it is not the same thing as $2 billion in the bank. Bennett and Hara are describing an intent, a facility, or a pipeline, not a locked-in deployment. In Japan right now that is not implausible on its face. GMI Cloud announced a $12 billion, one-gigawatt buildout for Kagoshima roughly five days before this announcement. If you are a Japanese infrastructure entity looking at AI demand projections, writing a term sheet for a $2 billion data center facility is not crazy. But it is not $50 million in the door, and readers should understand the difference before they draw conclusions about what this company can actually build.
The heterogeneous compute angle is the most technically coherent part of the pitch. Hara laid out the logic at the company's launch event in Japan this week: route the right model to the right hardware, disaggregate serving across AMD and Nvidia silicon, and target a 1.5 to 2 times improvement in token throughput as the gain worth optimizing for. The company has a significant Nvidia GPU fleet, is planning to incorporate AMD hardware, and already runs Tenstorrent silicon, which Bennett says is probably the largest Tenstorrent installation anywhere. "We are not creating complexity in our tech stack for the fun of it," Bennett said. "It is quite simple today, we can route the right model to the right place, that is super easy." That is a reasonable answer to the obvious objection, even if it sidesteps the harder question of whether heterogeneous orchestration at scale actually delivers that throughput gain reliably across customer workloads.
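The routing idea can be sketched as a tiered placement policy: send each model to the smallest hardware tier that can serve it, keeping the biggest accelerators free for the biggest models. Everything below — the backend names, capacity cutoffs, and throughput figures — is an illustrative assumption, not AI&'s actual fleet or scheduler.

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str                 # hardware tier label (illustrative)
    max_model_params_b: int   # largest model, in billions of params, it serves well
    tokens_per_sec: int       # rough per-instance throughput (made-up numbers)

# A hypothetical three-tier fleet, loosely mirroring the mix described
# in the article: Tenstorrent, AMD, and Nvidia hardware.
BACKENDS = [
    Backend("tenstorrent", max_model_params_b=13,  tokens_per_sec=900),
    Backend("amd-mi300",   max_model_params_b=70,  tokens_per_sec=1200),
    Backend("nvidia-h100", max_model_params_b=405, tokens_per_sec=1500),
]

def route(model_params_b: int) -> Backend:
    """Pick the smallest capable tier for a model, so scarce large-GPU
    capacity is not burned serving small models."""
    capable = [b for b in BACKENDS if b.max_model_params_b >= model_params_b]
    if not capable:
        raise ValueError(f"no backend fits a {model_params_b}B-param model")
    return min(capable, key=lambda b: b.max_model_params_b)

print(route(7).name)    # a 7B model lands on the smallest capable tier
print(route(70).name)   # a 70B model skips past it
```

The throughput claim is about exactly this kind of placement: if small models stop occupying large GPUs, aggregate tokens per second across the fleet goes up without adding hardware. Whether that yields 1.5 to 2x on real customer traffic is the part the sketch cannot answer.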
The Tenstorrent connection deserves its own mention. Unsung Fields announced a Tenstorrent Galaxy Wormhole cloud service in April 2025 and a capital alliance with Tenstorrent in July 2025, a strategic technology partnership according to IT Business Today. That infrastructure is now inside AI&. This means Bennett and Hara did not walk into a greenfield opportunity — they absorbed something that already existed, with existing customers and an existing hardware relationship. That is a legitimate building block. It also means the 80-customer figure and the data center footprint are not new growth; they are inherited. The growth story is whether AI& can hold and expand that base.
Japan as a data sovereignty play is the frame Bennett is leaning into hardest. The Japanese market, he told EE Times, is driven by AWS and the big hyperscalers, and there are complexity and price issues dealing with them. "People want to own their data, they want open models, and they also want to know that if they are putting their data into online AI chatbots where that data is going," he said. "We will be able to say it is staying in our data centers in Japan." That is a real pain point in a market where hyperscaler dominance creates legitimate concerns about data residency, latency, and pricing leverage. Whether a 1,000-GPU operation with 80 customers can credibly address that at scale is a different question.
Hara made a more arresting argument at the GTC conference last week, echoing a point Jensen Huang made at the same event: that all employees of all companies will be running 100 AI agents each soon. Scaled to Japan's population, with each agent using one million tokens per day, total power demand would reach a terawatt. "If this is where we are heading, who is going to be the server of AI in Japan?" Hara said. "And how can you serve Japan without breaking the bank and without needing a terawatt of power, because that is definitely not the direction our government is heading towards." He is right that a terawatt is not the direction any government is heading towards. He is also right that this calculation depends entirely on whether the assumptions hold — 100 agents per employee, one million tokens per day per agent — and those assumptions are not settled.
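The terawatt figure also rests on a parameter the quote does not state: energy per token. A quick sanity check makes the sensitivity visible. The population, agent count, and token volume below follow Hara's framing; the joules-per-token values are guesses for illustration, not measured figures.

```python
# Back-of-envelope check of the terawatt claim. All constants are
# assumptions for illustration, not figures from the announcement.
POPULATION = 124_000_000            # Japan, approximate
AGENTS_PER_PERSON = 100             # Hara's assumption
TOKENS_PER_AGENT_PER_DAY = 1_000_000
SECONDS_PER_DAY = 86_400

def implied_power_watts(joules_per_token: float) -> float:
    """Continuous power needed to serve the assumed daily token volume."""
    tokens_per_day = POPULATION * AGENTS_PER_PERSON * TOKENS_PER_AGENT_PER_DAY
    return tokens_per_day * joules_per_token / SECONDS_PER_DAY

# Sweep a few guessed inference-energy costs per token.
for jpt in (0.5, 2.0, 7.0):
    gw = implied_power_watts(jpt) / 1e9
    print(f"{jpt} J/token -> {gw:,.0f} GW")
```

Under these assumptions the daily volume is about 1.24e16 tokens, and the terawatt only appears at roughly 7 joules per token; at sub-joule costs the same arithmetic lands in the tens of gigawatts. Enormous either way, but an order of magnitude apart, which is exactly why the assumptions matter.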
AI& also plans to be Japan's premier AI lab, developing application-specific models for the Japanese market and acting as an incubator for Japanese AI startups that need compute. Bennett frames it as a talent problem: Japan has researchers who want to work on Japan-specific AI and need a place to do that. "We want to be that place to go," he said. That part of the vision is the most speculative and the hardest to evaluate from outside. Model development is expensive, competitive, and dominated by entities with much larger compute budgets. A 1,000-GPU cloud with 80 customers is not obviously the foundation for a foundation model lab, however well-intentioned.
What AI& has going for it: two executives with direct hardware and channel experience in the Japanese market, an existing data center footprint, an inherited customer base, and a technical thesis — heterogeneous compute routing — that is coherent and defensible. What it has working against it: the seed-to-infrastructure ratio is a real question mark, the inherited customer base and footprint make the growth claims harder to evaluate, and the AI lab ambition requires resources and talent that a $50 million seed does not obviously provide. The Japan data sovereignty frame is real. Whether AI& is big enough to act on it is the open question.