Mastercard keeps tabs on fraud with new foundation model
Mastercard is building a foundation model for commerce — and it does not need your name to do it

Mastercard has developed a large tabular model — an LTM, as opposed to the large language models powering chatbots — trained on billions of anonymized transaction records. The company plans to scale that training set to hundreds of billions of payment events, eventually incorporating merchant location data, authorization flows, chargebacks, and loyalty activity. The model learns behavioral patterns, not individual identities. Personal identifiers are removed before training.
This is the bet Mastercard is making with a new foundation model it is building in collaboration with Nvidia and Databricks, and which it plans to highlight at the Nvidia GTC 2026 conference. The goal is not a chatbot. It is an insights engine — a single model that teams across Mastercard can fine-tune for different tasks, replacing what is currently a sprawling portfolio of separately trained and maintained AI systems.
"We currently need to build, train and maintain thousands of AI models, each for different markets, use cases or customers," Steve Flinter, Mastercard's senior vice president of AI, Machine Learning and Blockchain, wrote in a blog post published March 17. The LTM is designed to compress that.
The technical distinction matters. LLMs are trained on unstructured data — text, images, video — and learn to predict the next token in a sequence. LTMs operate on structured tabular data: multi-dimensional tables where the relationships between fields carry the signal. Mastercard's model learns which relationships in transaction data are predictable, enabling it to identify anomalous patterns that predefined rules miss.
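Mastercard has not published the LTM's architecture, but the distinction can be made concrete with a toy analogue: instead of predicting the next token in a sentence, a tabular model can be trained to predict a masked column of a transaction row from the remaining columns. The sketch below uses invented field names and a simple co-occurrence count in place of a neural network; it is an illustration of the objective, not Mastercard's model.

```python
# Toy "masked field" objective over tabular rows, loosely analogous to
# next-token prediction in an LLM. Field names and the counting model
# are illustrative assumptions, not Mastercard's LTM.
from collections import Counter, defaultdict

# Toy anonymized transactions: behavioral fields only, no identities.
rows = [
    {"hour": "morning", "channel": "card_present", "mcc": "grocery"},
    {"hour": "morning", "channel": "card_present", "mcc": "grocery"},
    {"hour": "evening", "channel": "online",       "mcc": "streaming"},
    {"hour": "evening", "channel": "online",       "mcc": "streaming"},
    {"hour": "morning", "channel": "online",       "mcc": "grocery"},
]

def train(rows, target):
    """Count how often each (field, value) pair co-occurs with each
    value of the masked target column."""
    counts = defaultdict(Counter)
    for r in rows:
        for field, value in r.items():
            if field != target:
                counts[(field, value)][r[target]] += 1
    return counts

def predict(counts, row, target):
    """Score candidate target values by summed co-occurrence with the
    visible fields, and return the best-supported value."""
    score = Counter()
    for field, value in row.items():
        if field != target:
            score.update(counts.get((field, value), Counter()))
    return score.most_common(1)[0][0] if score else None

counts = train(rows, target="mcc")
print(predict(counts, {"hour": "morning", "channel": "card_present"}, "mcc"))
# -> grocery
```

The point of the exercise is the one the article makes: the model learns which relationships between fields are predictable, so a row that breaks those relationships stands out as anomalous even when no handwritten rule describes it.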
One concrete example the company cites: high-value, low-frequency purchases like a wedding ring. Traditional fraud models often flag these as suspicious — a sudden expensive purchase from a jeweler looks like a compromised card — and generate false positives that create friction for legitimate customers. In Mastercard's internal experiments, the LTM was better able to distinguish genuine high-value purchases from fraud, learning from weaker signals in the data that a human-defined feature set would not capture.
Mastercard is careful about how it frames this. The model is being deployed initially in cybersecurity — augmenting existing detection systems rather than replacing them. The company is building hybrid workflows that layer the LTM's outputs alongside established rule-based and ML-based fraud tools. This reflects both technical prudence and regulatory reality: a payments network operating under PCI DSS and other compliance regimes cannot swap out its decision infrastructure overnight.
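Mastercard has not described these hybrid workflows in detail. One common pattern for layering a learned score over legacy rules can be sketched as follows; the function names, thresholds, and blending weights here are assumptions for illustration, not Mastercard's pipeline.

```python
# Hypothetical hybrid decision layer: a learned anomaly score is blended
# with an existing rule rather than replacing it. All names, thresholds,
# and weights are illustrative assumptions.

def rule_score(txn):
    """Legacy-style rule: a large amount at a never-visited merchant
    looks risky on its own."""
    risky = txn["amount"] > 2000 and txn["merchant_visits"] == 0
    return 0.9 if risky else 0.1

def decide(txn, model_score, block_at=0.8, review_at=0.5):
    """Blend the rule score with the model's anomaly score so the model
    augments the rule instead of overriding it outright."""
    combined = 0.5 * rule_score(txn) + 0.5 * model_score
    if combined >= block_at:
        return "block"
    if combined >= review_at:
        return "manual_review"
    return "approve"

# A wedding-ring-style purchase: the rule alone would push toward a block,
# but a low model anomaly score softens the outcome to a manual review.
txn = {"amount": 3500, "merchant_visits": 0}
print(decide(txn, model_score=0.2))
# -> manual_review
```

When the model also flags the transaction (a high `model_score`), the blended score crosses the block threshold, so the layered design tightens decisions as well as loosening them.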
The infrastructure story is also significant. The company is running the model on Nvidia's accelerated computing platform, with Databricks handling data engineering and model development. The Nvidia GTC mention is not incidental — it signals that Mastercard is positioning itself as an enterprise reference customer for Nvidia's financial services AI stack, a relationship that gives both companies marketing value in a crowded enterprise AI market.
What is less clear is how the model performs under adversarial conditions — sophisticated fraud rings that adapt to model behavior, edge cases where the training distribution doesn't apply, long-term data drift as consumer behavior evolves. These are the failure modes that vendor blog posts don't address. Mastercard acknowledges that no single model will perform well in all scenarios, which is why the hybrid approach matters — but it also means the LTM's contribution to fraud detection is additive, not transformative, in the near term.
The broader aspiration — a single foundation model across loyalty programs, portfolio optimization, personalization, and analytics — is plausible but unproven. The economics are attractive: one model architecture, one training pipeline, shared infrastructure. The risk is concentration: a single model failure could propagate across multiple product lines in ways that isolated models would not.
This is the less glamorous edge of the foundation model wave. While the industry has focused on text and image generation, tabular foundation models are quietly being built by financial institutions, insurers, and retailers who have spent years accumulating structured data they own outright. That ownership is the whole point — Mastercard doesn't need to license transaction records from anyone. The data is the moat.
Whether the LTM approach works at scale, survives regulatory scrutiny, and delivers the promised efficiency gains will take years to answer. But the direction of travel is clear: the same infrastructure that powers chatbots is being repurposed for the structured world of commerce, and the companies with the most transaction data are in the best position to use it.