Nvidia Bet Billions on Glass. The Performance Specs Are Missing.
Copper is being evicted from AI data centers. Glass is moving in.
Nvidia and Corning announced May 6 that they are building three advanced manufacturing plants in North Carolina and Texas to supply the optical fiber that next-generation AI racks will run on. The deal will expand Corning's U.S. optical connectivity manufacturing capacity tenfold, add more than 50 percent to domestic fiber production, and create over 3,000 high-paying American jobs. Corning shares climbed 12 percent on the news. Nvidia, which received warrants to purchase up to 18 million Corning shares at $180 per share, climbed nearly 3 percent.
That is the supply chain story. Here is the one the press release does not tell.
The announcement described what Nvidia and Corning plan to build and by when. It did not describe how well it will work. Specifically, the companies disclosed peak bandwidth and throughput metrics for their optical connectivity products but omitted the latency figures and power consumption benchmarks that data center engineers say determine whether co-packaged optics actually solves the AI interconnect bottleneck.
The distinction matters more than it might seem. Latency is the time it takes for a signal to travel between chips inside a rack. In a training run across thousands of GPUs, any mismatch in pipeline timing costs compute cycles while chips wait for data. Power consumption determines whether a hyperscaler can fit its target GPU count inside a facility's existing cooling envelope or needs a new one. Both are load-bearing parameters for the actual workload, not secondary concerns.
Corning CEO Wendell Weeks told CNBC in January that moving data as photons consumes five to twenty times less power than moving it as electrons, and that co-packaged optics brings the light conversion process directly next to the compute chip. Vlad Galabov at Omdia described the physics to CNBC: less energy is wasted traveling a few millimeters across a chip package than traveling across a circuit board. These are genuine improvements. Whether the specific numbers in Nvidia's and Corning's products meet the thresholds hyperscalers need remains undisclosed.
This creates an information asymmetry with competitive consequences. Broadcom, Marvell, and Intel are all developing co-packaged optics products of their own. If Nvidia and Corning cannot or will not publish latency and power specifications, competitors who do publish them gain credibility with the infrastructure architects making buying decisions. The silence is not neutral. It is an opening.
The numbers behind the optics race are worth sitting with. A traditional data center rack required roughly 32 optical fibers. A next-generation AI backend rack is being designed for 20,000 or more, a 625-fold increase, according to industry analysis. A single Meta hyperscaler campus now under construction will consume 8 million miles of optical fiber, roughly 0.6 percent of everything Corning has produced in its entire corporate history. The U.S. currently has about 160 million fiber miles deployed. Supporting the planned data center buildout through 2029 requires adding 213 million more miles, more than doubling the country's entire installed base in under five years, according to Fiber Broadband Association projections.
Corning, founded in 1851 and inventor of low-loss optical fiber in 1970, has signed deals with Meta and Nvidia within five months of each other. The Meta deal, announced in January, was the largest single fiber contract in Corning's history. The Nvidia deal dwarfs it in manufacturing scale. The sequencing signals that hyperscalers are not merely buying fiber. They are locking in supply years ahead of when they will need it, because they see the optical bottleneck arriving before most of Wall Street does.
Weeks put it plainly in the press release: "As power becomes a bigger and bigger issue, fiber inevitably gets closer and closer to the compute." Nvidia CEO Jensen Huang called co-packaged optics indispensable at GTC 2025. These are not speculative claims. Nvidia's Quantum-X Photonic and Spectrum-X Photonic switches, unveiled at that same conference, are built from the ground up for the technology. Corning's glass is the plumbing.
What the press release did not include was any specification for how the plumbing performs under actual AI workloads: the latency penalty of the optical-electrical conversion at scale, the power draw of the transceivers, the switching reconfiguration times. These are not secrets. They are product specifications that would normally accompany a platform announcement of this magnitude. Their absence is the actual story.
Nvidia has invested $4 billion in laser component makers Coherent and Lumentum since March, part of a broader strategy of vertical integration through partnership that extends from silicon to glass. This is a company that has decided the bottleneck for AI is not compute but interconnect, and is spending accordingly. That judgment may be entirely correct. The proof will be in the performance numbers that have not been released yet.
Analysts are divided on the deal's financial merits. Wolfe Research raised Corning's price target to $230, calling it a once-in-a-generation opportunity. Barclays set it at $180, noting the stock trades at 99 times trailing earnings. Both can be right about the structural demand and wrong about whether this announcement resolves the technical questions that will determine whether co-packaged optics ships at scale on schedule or gets pushed another year.
The factories will be built. The fiber will be made. Whether it works well enough to justify the buildout pace hyperscalers have committed to is a question the May 6 announcement leaves open. The specs will come, probably at GTC 2027 or in earnings call disclosures as the products move toward volume deployment. Until then, the real measure of this deal is not the stock jump or the job creation or the 10x manufacturing capacity. It is whether the latency and power numbers close the gap between what AI racks need and what copper can no longer provide.