Kaseya’s Agentic IT Claims Haven’t Been Independently Verified
Kaseya says its new IT management platform achieves better than 99.9% accuracy on backup screenshot verification and cuts ticket categorization errors by 80%. Neither figure has been independently verified.
The platform, announced April 28 at Kaseya Connect Global in Las Vegas, is architecturally distinct from a recommendation engine: it opens tickets, isolates compromised endpoints, and verifies backup integrity without a human in the loop. Kaseya CEO Rania Succar called it "AI as an operating system, not a feature" — meaning the system runs the operation rather than advising the operator. The training corpus behind it is substantial by MSP standards: more than 1 billion help desk tickets, 3 exabytes of backup data, and 17 million managed endpoints.
That is also the verification problem. The 80% error-reduction figure comes from Koos Ligtenberg, business unit director at Advisor ICT, a Kaseya customer quoted in Kaseya's own press release. The 99.9% accuracy figure appears only in Kaseya's announcement. No independent MSP, analyst, or third-party test has published corroborating benchmarks. Kaseya had not responded to questions about its measurement methodology prior to publication.
The pattern is not unique to Kaseya. Vendors across the AI infrastructure market routinely announce accuracy, speed, and error-reduction figures at launch — then point to early adopters as evidence. Whether those numbers hold in another shop, with different ticket mix, different endpoint diversity, or less intensive onboarding, is rarely tested before the next announcement cycle.
This matters beyond the MSP channel. As autonomous systems expand into procurement, logistics, and operations, buyers face the same information gap: vendor claims built on vendor-selected data, with no independent benchmark a buyer can audit. MSPs evaluating whether to restructure service delivery around Kaseya's figures cannot determine from the public record whether those numbers reflect genuine performance, favorable early-adopter conditions, intensive implementation support, or selection effects that will not transfer to their own environments.
The consent and commercial data question is also unresolved. MSP ticket data at this scale — representing thousands of end-client environments — was presumably generated through commercial service relationships. Whether MSPs have contractual authority to allow a platform vendor to train on that data, and whether end clients have been notified or consented, is not addressed in Kaseya's public materials.
Frank Merino, chief operations officer at Forthright Technology Partners, a Weston, Florida MSP, appears in ChannelPro Network's coverage as an early adopter without a quantified performance claim attached to his name. His financial relationship to Kaseya — volume-based discounts, co-marketing arrangements, or other incentives — is not disclosed in the public record. That is typical for MSP announcement coverage and not inherently suspicious; it is also not sufficient for a buyer trying to assess credibility.
Kaseya's data depth is a legitimate structural advantage in the channel; part of it reflects acquisitions such as Datto, whose backup business Kaseya bought in 2022. Competitors such as ConnectWise have not disclosed comparable training data volumes, which makes the corpus hard to contextualize against industry benchmarks. Whether that advantage translates to the claimed accuracy in production environments — across different client profiles, ticket volumes, and support structures — is a question the public record does not answer.
What to watch: whether Kaseya publishes a methodology for those numbers, whether a customer with no financial relationship to Kaseya independently reproduces them in production, and whether the consent question for commercial ticket data in AI training surfaces as a competitive or legal issue for the channel.