Netskope Ran 14 Million Alerts Through an AI. Here Is What Actually Hit Human Review.
Security teams are drowning. On any given day, a large enterprise might generate millions of potential data-loss events — someone uploaded a file to an unauthorized service, a sensitive document sat in a public folder too long, an unusual data transfer crossed a network boundary. Most of it is noise. Almost all of it gets ignored anyway.
Netskope wants to change the economics of that triage. The company launched Netskope One AgentSkope on May 5, 2026, its architectural foundation for deploying AI agents across its platform. The anchor product is the DLP AISecOps Agent — what Netskope calls the first agentic tool purpose-built for data loss prevention analysis, designed to mimic a junior security analyst in sorting through the noise to surface the cases that actually need human attention. In a beta with a major global consulting firm, the numbers were stark: 14 million daily alerts compressed to roughly 100 cases per day after the AI filtered and ranked what mattered.
The twist, and the part that separates a press release from a real story, is what happened after the compression. Of those 100 daily cases reviewed by human analysts, less than 1 percent were subsequently scored at a critical risk level. The AI had successfully narrowed 14 million alerts to the one or two per day that warranted a senior analyst's attention. But it had also created a new queue of human work that the previous workflow never produced.
That figure, per beta data, means more than 99 percent of the cases the AI surfaced for human review were ultimately scored non-critical. It deserves scrutiny before it migrates into every vendor pitch deck this year. The beta customer was already a Netskope customer with tuned DLP policies, which likely inflated precision: an AI working with well-calibrated policy rules has an easier time distinguishing real violations from noise than one starting cold. What's genuine is the narrowing function: instead of burning analyst hours chasing false positives across millions of alerts, the team escalated roughly one or two critical cases per day that actually warranted senior attention.
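For a sense of scale, the compression the beta describes can be expressed as a back-of-envelope funnel. Only the 14 million daily alerts, the 100 surfaced cases, and the under-1-percent critical rate come from the reported figures; every derived rate below is simple arithmetic, not Netskope data:

```python
# Back-of-envelope triage funnel from the beta figures reported above.
daily_alerts = 14_000_000   # raw DLP alerts per day in the beta
human_cases = 100           # cases surfaced for analyst review per day
critical_rate = 0.01        # "less than 1 percent" of reviewed cases

surfaced_fraction = human_cases / daily_alerts
filtered_out = daily_alerts - human_cases
critical_per_day = human_cases * critical_rate   # upper bound

print(f"Fraction of alerts reaching a human: {surfaced_fraction:.6%}")
print(f"Alerts filtered out per day: {filtered_out:,}")
print(f"Critical cases per day (at most): {critical_per_day:.0f}")
```

The striking number is the first one: well under a thousandth of a percent of raw alerts ever reach a person, and at most one of those hundred reviewed cases per day is critical.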
Netskope's DLP AISecOps Agent analyzes data movement across networks, endpoints, and cloud services to identify policy violations, unusual data flows, and potential insider threats. The Insider Threat AISecOps Agent, currently in private preview, extends the same approach to detect anomalies in user behavior that might indicate a compromised account or malicious insider. The DLP agent reached general availability with the May 5 launch; the insider threat product has not yet shipped broadly. Netskope says this is the first agentic DLP tool in general availability, a claim no competitor has publicly contested. The beta data does not document what the AI missed — the false negative rate, which would complete the triage accuracy picture, is not yet public.

Industry analysts see the category gaining momentum. By 2028, AI agents will autonomously manage a quarter of incident response workflows for data security events, according to a Gartner prediction cited in Netskope's announcement. Pete Finalle, an analyst at IDC, told the company that CIOs and CISOs must invest in agentic security automation as a force multiplier for skilled human resources — a framing that treats the technology as augmenting analysts rather than replacing them.
The beta data supports a hybrid model, not full automation. After the AI narrows the alert volume to 100 cases per day, human analysts still review each one. The efficiency gain is in what the AI removes from the queue: the false positives, the borderline policy violations, the noise that burned analyst time without producing signal. The remaining work is smaller in volume but not smaller in consequence. Currently, 40 percent of security alerts industry-wide go entirely uninvestigated due to lack of capacity to review them.
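The hybrid workflow amounts to a two-stage pipeline: a model scores and ranks every alert, and only the top of the ranked queue reaches a human. The sketch below is a minimal illustration of that shape, not Netskope's implementation — the `risk_score` heuristic, the `Alert` fields, and the budget numbers are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    user: str
    bytes_moved: int
    destination_sanctioned: bool

def risk_score(a: Alert) -> float:
    """Hypothetical stand-in for the AI triage layer; a real agent
    would reason over policy context, not a two-term heuristic."""
    score = a.bytes_moved / 1_000_000        # volume signal
    if not a.destination_sanctioned:
        score *= 10                          # unsanctioned destination
    return score

def triage(alerts, human_budget=100):
    """Rank all alerts, forward only the top `human_budget` to analysts."""
    ranked = sorted(alerts, key=risk_score, reverse=True)
    return ranked[:human_budget]

alerts = [
    Alert("alice", 50_000_000, False),   # large transfer, unsanctioned
    Alert("bob", 1_000, True),           # routine, sanctioned
    Alert("carol", 5_000_000, True),
]
queue = triage(alerts, human_budget=2)
print([a.user for a in queue])  # → ['alice', 'carol']
```

The design point the beta illustrates is the fixed `human_budget`: the AI does not decide outcomes, it decides what a capacity-constrained human team looks at first.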
If that hybrid model holds at scale, the implications for security operations run deeper than the efficiency numbers suggest. The traditional Security Operations Center has long relied on a tiered model: junior analysts handle alert triage, senior analysts investigate confirmed incidents. AI that reliably surfaces two critical cases per day from a 14-million-alert stream does not eliminate the junior analyst role — it changes what the job looks like. The alert triage that once trained new analysts to recognize patterns gets replaced by a pre-screened queue of edge cases. Junior analysts either level up into genuine incident investigation faster, or find their traditional entry-point function diminished. For managed security service providers, whose economics depend on billing per alert or per incident processed, the compression math is less favorable than the headline efficiency numbers suggest: fewer alerts processed at the same headcount means unit economics that need renegotiating.
Anthropic's Claude model powers the reasoning layer under AgentSkope. Michael Moore, head of cybersecurity products at Anthropic, said in a Netskope blog post that the model is built for work requiring sustained reasoning and consistent context, while Netskope brings the platform, data, and SecOps expertise to apply it across security workflows. The partnership reflects a broader pattern in enterprise AI: foundation model companies supply reasoning, platform vendors supply data and domain context.
What remains untested is whether that greater-than-99 percent non-critical rate holds outside a pre-existing Netskope deployment, and whether the 40 percent of alerts currently going uninvestigated industry-wide represents a queue that shrinks under AI triage or one that simply gets routed differently. The Insider Threat Agent's move from private preview to general availability, whenever that arrives, will be the more rigorous test of whether the first-of-its-kind claim holds up under production conditions.