When EnCharge AI says its analog chip delivers 30 times better AI efficiency than digital GPUs, there is a line item missing from the spec sheet. The company claims 150 trillion operations per second per watt. That number is real. But it excludes the energy cost of converting the results back into digital form — a tax the chip must pay before anything else can use the output.
According to EnCharge's co-founder Naveen Verma in a recent podcast interview, analog-to-digital converters consume roughly 15 to 18 percent of the chip's energy budget. The fully loaded efficiency is therefore closer to 123–127 TOPS/W — still a significant advantage over digital silicon, but not the 30x headline on the press release.
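The overhead arithmetic is simple enough to check. A minimal sketch, using only the figures quoted above and assuming the ADC overhead applies multiplicatively to the headline efficiency:

```python
# Back-of-envelope check of the fully loaded efficiency claim.
# Figures come from the article; the multiplicative model is an assumption.
HEADLINE_TOPS_PER_W = 150.0
ADC_OVERHEADS = (0.15, 0.18)  # fraction of energy budget spent on ADCs

for overhead in ADC_OVERHEADS:
    loaded = HEADLINE_TOPS_PER_W * (1 - overhead)
    print(f"{overhead:.0%} ADC overhead -> {loaded:.1f} TOPS/W")
# The same discount applies to the headline multiple: 30 * 0.82 ~= 24.6x.
```

At the high end of the disclosed range, the 30x headline lands at roughly 25x.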
EnCharge, founded in 2022 as a Princeton University spinout, has silicon to show for it. The EN100 chip delivers 200 TOPS at 8.25 watts, according to IEEE Spectrum — roughly what a small LED bulb draws. The company says strategic customers will receive samples later in 2025, per EE Times, with volume production targeting 2026–2027. For context, a data center GPU burns 400 watts or more on the same workloads.
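The published chip figures can be compared directly. A sketch only — real inference benchmarks depend on model, batch size, and utilization, and the whole-chip ratio below is lower than the 150 TOPS/W figure quoted earlier, which presumably describes the compute array rather than the full package (our inference, not a company statement):

```python
# Whole-chip arithmetic from the article's reported numbers.
EN100_TOPS = 200.0
EN100_WATTS = 8.25
GPU_WATTS = 400.0  # "400 watts or more" for a data center GPU

chip_level_efficiency = EN100_TOPS / EN100_WATTS  # TOPS/W at the package level
power_ratio = GPU_WATTS / EN100_WATTS             # power draw ratio at the wall
print(f"{chip_level_efficiency:.1f} TOPS/W, {power_ratio:.0f}x less power")
```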
The technology, called switched-capacitor in-memory computing, stores neural network weights as electric charge on capacitors fabricated between metal interconnect layers on a standard CMOS chip. Verma described the precision verification on the same podcast: "We've measured these things in a lot of detail. It turns out for the kinds of capacitors we use, you see variations that are on the order of 10 parts per million." That works out to more than 16 bits of effective precision, well beyond what AI inference actually requires — 8-bit arithmetic is standard.
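The ppm-to-bits conversion follows from counting distinguishable levels. A sketch, assuming the standard rule of thumb that effective bits are the base-2 log of one over the relative variation:

```python
import math

# Convert a relative component variation to effective bits of precision.
# Rule of thumb: resolvable levels ~= 1 / variation (an assumption, not
# EnCharge's published methodology).
variation = 10 * 1e-6                 # 10 parts per million
levels = 1 / variation                # ~100,000 distinguishable levels
effective_bits = math.log2(levels)    # ~16.6 bits
print(f"{effective_bits:.1f} effective bits")
```

Either way, the margin over the 8 bits that standard inference arithmetic needs is large.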
The ADC overhead is not a secret. Verma disclosed it in an EE Times interview. It is on the record. What the company did not do is put the fully loaded number next to the headline figure, which means the spec sheet shows the best-case scenario rather than the real-world one. A 25x advantage over digital silicon is still a step change — enough to make always-on AI assistants on laptops economically viable without a cloud subscription, and enough to make on-device processing a genuine privacy option rather than a marketing bullet. The math survives, just barely.
The bigger risk is timeline. By the time EN100 reaches volume production in 2026–2027, Nvidia and Qualcomm may have narrowed the efficiency gap with next-generation digital inference chips. Qualcomm's AI Hub already runs 10-billion-parameter models on reference hardware at similar power envelopes. The analog advantage is real if the numbers hold. Whether that advantage lasts long enough to matter before digital catches up is the open question.
Independent verification is the other open question. The 150 TOPS/W figure is a company claim — no third party has publicly replicated it. EnCharge has disclosed its benchmarks; the reproducibility question remains unanswered.
AWS generates billions annually from AI inference in the cloud. If laptops can run large models locally at 8 watts, the economic case for routing every query through a hyperscale data center weakens. Privacy-conscious deployments that currently send sensitive data to remote servers could process it locally instead. Always-on AI assistants that now require a cloud subscription become a built-in feature. These scenarios sit on the current product roadmaps of every major PC OEM. EnCharge is offering the silicon to back that up.
EnCharge raised $100 million in Series B funding from Samsung Venture, Foxconn, and others. That is serious industrial money funding a production ramp, not a research project. The investment thesis is that analog compute has crossed the line from interesting lab result to shippable silicon — and that the efficiency gap will matter long enough to build a business on.