Intent Modeling

Intent modeling goes beyond access control — evaluating sequences of permitted actions to detect when legitimate access drifts into misuse before data leaves the environment.

What is Intent Modeling in Security?

Intent modeling in security is the capability to evaluate whether a sequence of actions by an identity, human or non-human, is consistent with legitimate business purpose. It addresses the hardest detection problem in data security: not whether access was authorised, but whether the purpose behind that access aligns with what the access was granted for.

Access control answers: should this identity be able to do this? Intent modeling answers: given that they can, does what they're actually doing make sense?

Those are completely different questions. And most security tooling only asks the first one.

Why access control isn't enough

Nearly every major data incident of the last five years shares one trait: the credentials used were valid. The access was permitted. The tools used were sanctioned. The destination channels were approved.

An insider who exports a customer database to a staging folder, compresses it, and uploads it to a cloud storage service their organisation already uses for legitimate work hasn't violated a single access control rule. Each step was technically within policy. The access was granted. The tool was allowed. The destination was on the allowlist.

The threat is invisible to any control that only evaluates whether access is permitted. It only becomes visible when you evaluate the sequence as a whole and ask whether the combination of access, selection, transformation, staging, and transfer is consistent with the business purpose for which that access was originally granted.

That's the question intent modeling is built to answer.

How intent modeling works

Intent modeling evaluates the gap between what access allows and what it's actually being used for. It operates across four dimensions simultaneously.

Historical behaviour baseline. For every identity, the system builds a model of what normal looks like. Which systems does this account typically access? What data does it query? In what volumes, at what hours, from which source IPs, in what workflow patterns? Deviations from that baseline are the first signal.
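
One simple way to represent that baseline is a rolling window of recent activity scored as a z-score, a minimal sketch assuming per-session row counts as the metric (the function and data here are illustrative, not a product API):

```python
from statistics import mean, stdev

def volume_zscore(history: list[float], observed: float) -> float:
    """Score how far an observed query volume deviates from an
    identity's historical baseline, as a z-score. `history` is a
    rolling window of recent per-session row counts."""
    if len(history) < 2:
        return 0.0  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if observed == mu else float("inf")
    return (observed - mu) / sigma

# A 15x export stands far outside a stable baseline:
baseline = [1000, 1200, 900, 1100, 1050]
print(volume_zscore(baseline, 15_000))
```

In practice a real model would track many such metrics per identity (hours, source IPs, systems touched), but each reduces to the same question: how far is this observation from this identity's normal?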

Role and workflow context. Not every deviation is suspicious. An engineer asked to investigate a production issue will access systems outside their normal scope. That's explainable. The intent model factors in role context: is this behaviour consistent with what this role legitimately does, even if it's outside this specific account's historical pattern? A finance analyst accessing HR data is anomalous for their role and history. A senior security engineer accessing the same data during an active investigation is explicable.
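
The role check can be sketched as a lookup of legitimate scope per role, with an escape hatch for sanctioned exceptions. The role names, data categories, and investigation flag below are all hypothetical:

```python
# Hypothetical mapping from role to the data categories that role
# legitimately touches, regardless of one account's own history.
ROLE_SCOPE = {
    "finance_analyst": {"billing", "ledger"},
    "security_engineer": {"billing", "auth_logs"},
}

def is_explainable(role: str, data_category: str,
                   active_investigation: bool = False) -> bool:
    """A deviation is explainable if the data category is in the
    role's legitimate scope, or if a security role is operating
    under an active investigation."""
    in_scope = data_category in ROLE_SCOPE.get(role, set())
    return in_scope or (active_investigation and role == "security_engineer")

print(is_explainable("finance_analyst", "hr"))                            # anomalous
print(is_explainable("security_engineer", "hr", active_investigation=True))  # explicable
```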

Data sensitivity weighting. The same behavioural deviation carries different risk depending on what data is involved. Access to a table containing 3 million customer payment records is categorically different from access to a table of anonymised test data, even if the access pattern looks similar. Intent modeling factors in the sensitivity of the data being accessed when scoring the risk of any observed deviation.
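
As a sketch, sensitivity weighting can be a multiplier applied to the behavioural deviation score; the categories and weights below are illustrative assumptions, not the product's actual scale:

```python
# Hypothetical sensitivity weights: the same behavioural deviation
# is scored more aggressively when the data involved is more sensitive.
SENSITIVITY_WEIGHT = {
    "regulated_pii": 5.0,
    "internal": 2.0,
    "anonymised_test": 0.5,
}

def weighted_risk(deviation_score: float, sensitivity: str) -> float:
    """Scale a deviation score by the sensitivity of the data accessed."""
    return deviation_score * SENSITIVITY_WEIGHT.get(sensitivity, 1.0)

# Identical deviation, categorically different risk:
print(weighted_risk(3.0, "regulated_pii"))    # 15.0
print(weighted_risk(3.0, "anonymised_test"))  # 1.5
```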

Downstream sequence evaluation. This is where intent modeling departs from generic anomaly detection. A single unusual access is a weak signal. That access, followed by an unusually large export, followed by local file staging, followed by compression, followed by upload to a permitted destination: the sequence is a strong signal. Intent modeling evaluates whether the chain of actions following access is consistent with legitimate use of that access, or whether it describes a pattern characteristic of data staging and exfiltration.
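
The sequence check can be sketched as ordered-subsequence matching: the staging-and-exfiltration chain fires only when its steps occur in order, even with unrelated events interleaved. Event names here are illustrative:

```python
# Illustrative staging-and-exfiltration chain: each step alone is a
# weak signal; the ordered chain is a strong one.
EXFIL_PATTERN = ["access", "export", "stage", "compress", "upload"]

def matches_pattern(events: list[str], pattern: list[str]) -> bool:
    """True if `pattern` occurs as an ordered subsequence of `events`
    (other events may be interleaved between the steps)."""
    it = iter(events)
    return all(step in it for step in pattern)

session = ["login", "access", "export", "stage", "compress", "upload"]
print(matches_pattern(session, EXFIL_PATTERN))            # True
print(matches_pattern(["login", "access"], EXFIL_PATTERN))  # False
```

A production model would score partial matches and timing rather than require the full chain, but the principle is the same: evaluate the sequence, not the events.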

Together, these four dimensions answer the core question: was this access used in a way that aligns with business purpose, or has the behaviour drifted from what legitimate use looks like?

Intent drift: the specific concept that matters

The practitioner term that captures what intent modeling detects is intent drift. It's the moment when behaviour begins to diverge from what is reasonable for a given identity, role, and business context, even though each individual action remains technically permitted.

One query is not drift. That same query, from a new source IP, against a table the identity hasn't touched in 90 days, returning volume 15x their rolling average, followed immediately by a local export operation: that's drift. The individual steps are explainable in isolation. The sequence is not.
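
That accumulation can be sketched as additive signal scoring: each weak signal contributes, and only the combination crosses the threshold. The signal names, weights, and threshold below are illustrative assumptions:

```python
# Weights for the weak signals described above; none alone crosses
# the drift threshold, but the combined sequence does.
SIGNAL_WEIGHTS = {
    "new_source_ip": 1.0,
    "dormant_table": 1.5,        # untouched for 90+ days
    "volume_15x_baseline": 2.0,
    "local_export_follows": 2.5,
}
DRIFT_THRESHOLD = 5.0

def drift_score(signals: set[str]) -> float:
    """Sum the weights of the signals observed in one sequence."""
    return sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals)

# One unusual query alone is not drift...
print(drift_score({"new_source_ip"}) >= DRIFT_THRESHOLD)    # False
# ...but the full sequence is.
print(drift_score(set(SIGNAL_WEIGHTS)) >= DRIFT_THRESHOLD)  # True
```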

Why does this distinction matter? Because every tool that evaluates individual events independently will see each step and find it acceptable. The query didn't exceed the threshold. The export was to a local folder, not an external destination. The upload was to an approved service. No single control fires. The intent model, evaluating the sequence across all four dimensions, identifies the chain as inconsistent with business purpose and raises a high-confidence signal.

That's what intent drift detection catches that event-based detection misses. Not individual violations. Sequences that collectively describe behaviour inconsistent with legitimate access use.

What intent modeling requires to work

Intent modeling is computationally and architecturally demanding. It requires four things to operate effectively.

Continuous data classification. The intent model needs to know what data is involved in every access event, at a semantic level. A table name isn't sufficient. The model needs to know whether that table contains regulated PII, anonymised data, or operational metadata, because the sensitivity of what's being accessed determines how aggressively the intent model should score any deviation.

Per-identity behavioural baselines. The model can't evaluate deviation without knowing what normal looks like for each specific identity. Those baselines need to be maintained continuously and updated as behaviour evolves with role changes, new responsibilities, and seasonal workflow variation.
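
One common way to keep such a baseline current, sketched here as an assumption rather than the product's actual method, is an exponentially weighted moving average: recent behaviour counts more, so the baseline adapts as roles and workflows evolve. The smoothing factor is illustrative:

```python
def update_baseline(current: float, observed: float,
                    alpha: float = 0.1) -> float:
    """Blend a new observation into the running baseline; higher
    alpha means the baseline adapts faster to recent behaviour."""
    return (1 - alpha) * current + alpha * observed

baseline = 1000.0
for volume in [1100, 1200, 1150]:  # behaviour gradually shifts upward
    baseline = update_baseline(baseline, volume)
print(round(baseline, 1))  # → 1041.1
```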

Cross-system event correlation. Intent emerges across systems, not within them. An access event in DAM, a file creation on the endpoint, a compression operation, an upload through a network channel: these events live in different telemetry streams. The intent model needs to correlate them into a single timeline per identity to evaluate the chain.
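
That correlation step can be sketched as merging events from separate streams into one time-ordered timeline per identity; the field names and event records here are illustrative:

```python
from operator import itemgetter

# Illustrative events from three telemetry streams (DAM, endpoint,
# network), keyed by the same identity but arriving out of order.
dam_events = [{"identity": "svc-etl", "ts": 100, "action": "query"}]
endpoint_events = [{"identity": "svc-etl", "ts": 140, "action": "compress"},
                   {"identity": "svc-etl", "ts": 120, "action": "file_create"}]
network_events = [{"identity": "svc-etl", "ts": 160, "action": "upload"}]

def per_identity_timeline(*streams):
    """Merge events from any number of streams into one
    time-ordered timeline per identity."""
    timelines = {}
    for stream in streams:
        for event in stream:
            timelines.setdefault(event["identity"], []).append(event)
    for events in timelines.values():
        events.sort(key=itemgetter("ts"))
    return timelines

timeline = per_identity_timeline(dam_events, endpoint_events, network_events)
print([e["action"] for e in timeline["svc-etl"]])
# → ['query', 'file_create', 'compress', 'upload']
```

Only after this merge can the sequence evaluation above see the chain as one story rather than three unrelated events.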

Endpoint ground truth. The link between a database access and an external upload is the local operations on the endpoint that connected them. Without kernel-level endpoint telemetry capturing what happened on the device between the query and the transmission, the chain has a gap. Intent modeling is only as complete as the telemetry that feeds it.

Without all four inputs, intent modeling produces either incomplete signals, because the data context is absent, or high false positive rates, because the baselines aren't precise enough to distinguish drift from legitimate variation.


Intent modeling vs behavioural analytics

The terms are related but not identical. Behavioural analytics is the broader discipline that includes baseline modeling, anomaly detection, and risk scoring. Intent modeling is specifically the capability within behavioural analytics that evaluates sequences against business purpose, rather than simply flagging statistical deviations.

A UEBA system that flags an unusual login from a new IP is doing behavioural analytics. A system that evaluates whether the sequence of actions following that login is consistent with the business purpose of the account logging in is doing intent modeling. The second question is harder to answer but produces more actionable signals with fewer false positives.

That's the operational difference. Behavioural analytics identifies unusual behaviour. Intent modeling determines whether unusual behaviour indicates misuse. Not the same question.

Frequently asked questions

What is intent modeling in security?

Intent modeling in security is the capability to evaluate whether an identity's sequence of actions aligns with legitimate business purpose. It evaluates access events, data selection, downstream transformations, and egress behaviour in context of the identity's role, historical patterns, and the sensitivity of the data involved, to distinguish authorised access used legitimately from authorised access used for misuse, exfiltration, or purposes outside business intent.

What is intent drift?

Intent drift is the point at which an identity's behaviour begins to diverge from what is reasonable for their role and business context, even though each individual action remains technically permitted. It's detected not by any single event crossing a threshold, but by a sequence of events that collectively describe a pattern inconsistent with legitimate use of the access granted.

What is the difference between intent modeling and behavioural analytics?

Behavioural analytics identifies unusual behaviour by comparing observed activity against established baselines. Intent modeling goes further, evaluating whether unusual behaviour indicates a departure from legitimate business purpose by examining sequences of actions in context of role, data sensitivity, and downstream movement. Behavioural analytics flags anomalies. Intent modeling assesses whether those anomalies represent misuse.

Why can't access control alone detect insider threats?

Access control determines whether an action is permitted. It cannot determine whether a permitted action is consistent with the purpose for which the permission was granted. An insider exfiltrating data using their own valid credentials, through approved tools, to permitted destinations, violates no access control rule. Intent modeling detects the misuse by evaluating whether the sequence of permitted actions is consistent with legitimate business purpose.

Published May 1, 2026

Ready to see Matters in Action?

Join a specialized 30-minute walkthrough. No sales fluff, just pure visibility and security intelligence.