Database Activity Monitoring

Database Activity Monitoring (DAM) tracks and analyzes database access in real time to detect misuse, insider threats, and anomalies. Learn how it works.

What is DAM (Database Activity Monitoring)?

Database Activity Monitoring (DAM) is a security control that captures and analyzes activity against databases in real time, providing a continuous, tamper-resistant record of who accessed what data, when, from where, and through which query patterns. Its primary function is answering the question that database-level access logs alone can’t answer with confidence: was this access legitimate, or does the pattern of activity indicate misuse?

DAM sits at the boundary of the database itself. That boundary is both its value and its constraint.

How DAM works

At the collection layer, DAM ingests database events: every query executed, every login attempt, every schema change, every export, every row returned. The telemetry sources depend on the deployment model. Agent-based DAM installs a lightweight sensor on the database host, capturing events at the OS or memory level before they reach the network. Agentless DAM uses database audit APIs, native logging facilities, or network-tap methods to collect events without touching the host directly.

Volume is the immediate challenge. A production database in a mid-size enterprise might process millions of queries per day. Most of them are entirely expected: application service accounts running the same parametrized queries they’ve run every day for two years. Logging everything and presenting everything as a flat event stream is technically complete and operationally useless.

That’s why the real work in DAM happens at the behavioral baseline layer, not the collection layer.

A well-configured DAM system builds a dynamic model of normal activity for every identity that touches the database. Not just human users. Service accounts, application identities, automation scripts, ETL pipelines, and reporting jobs each develop their own behavioral fingerprint: typical query patterns, typical query volumes, typical hours of operation, typical source IPs, typical tables accessed, typical rows returned per session.
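
As a sketch, each identity's fingerprint can be modeled as a small rolling profile built from observed events. The Python below is illustrative only; the field names, identities, and training events are assumptions, not a real DAM engine's data model.

```python
from collections import defaultdict

class IdentityBaseline:
    """Rolling behavioral fingerprint for one database identity (illustrative)."""

    def __init__(self):
        self.tables = set()        # tables this identity has touched
        self.source_ips = set()    # source addresses seen
        self.active_hours = set()  # hours-of-day with observed activity
        self.row_counts = []       # rows returned per session (history)

    def observe(self, event):
        """Fold one activity event into the baseline."""
        self.tables.add(event["table"])
        self.source_ips.add(event["source_ip"])
        self.active_hours.add(event["hour"])
        self.row_counts.append(event["rows"])

baselines = defaultdict(IdentityBaseline)

# Train on historical events, keyed by identity. Service accounts and ETL jobs
# get profiles exactly the same way human users do.
history = [
    {"identity": "analyst_kim", "table": "customers",
     "source_ip": "10.0.1.5", "hour": 9, "rows": 1200},
    {"identity": "svc_billing", "table": "invoices",
     "source_ip": "10.0.2.9", "hour": 3, "rows": 50},
]
for ev in history:
    baselines[ev["identity"]].observe(ev)
```

In a real system the profile would decay or window old observations; a set-and-list model is the simplest form that still captures the idea.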

Deviation from that baseline is the signal. Not any single deviation in isolation. The context of deviation.

A single large SELECT against a customer table at 9am on a Tuesday from a business analyst who runs similar queries every week is baseline. That same query at 2am from a service account that hasn’t touched that table in 90 days, originating from an IP outside the known application subnet, is not. The query is syntactically identical. The context is entirely different. That’s the detection the baseline model enables and that raw log review misses.
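
That context check can be expressed as a handful of comparisons against the stored baseline. A minimal, illustrative sketch (the volume threshold and field names are assumptions):

```python
def context_score(baseline, event):
    """Count contextual deviations; the query text alone is not the signal."""
    deviations = []
    if event["table"] not in baseline["tables"]:
        deviations.append("new_table")
    if event["source_ip"] not in baseline["source_ips"]:
        deviations.append("new_source_ip")
    if event["hour"] not in baseline["active_hours"]:
        deviations.append("unusual_hour")
    avg = sum(baseline["row_counts"]) / len(baseline["row_counts"])
    if event["rows"] > 10 * avg:  # 10x is an illustrative threshold
        deviations.append("volume_spike")
    return deviations

# The weekly-report analyst vs. a service account that never touches this table.
analyst = {"tables": {"customers"}, "source_ips": {"10.0.1.5"},
           "active_hours": {9, 10, 11}, "row_counts": [1000, 1500]}
svc = {"tables": {"invoices"}, "source_ips": {"10.0.2.9"},
       "active_hours": {3}, "row_counts": [50, 60]}
```

The same SELECT scores zero deviations for the analyst and four for the service account, which is exactly the distinction raw log review misses.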


What DAM detects

Privilege misuse. Users and service accounts accumulate access over time. A database administrator given broad read access during an incident response three years ago may still have that access, now dormant, now potentially being used. DAM identifies dormant privileged accounts that suddenly activate, DBA-level access being exercised outside of change windows, and access patterns inconsistent with the assigned role.
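
A minimal illustration of the dormant-activation check, assuming a 90-day dormancy threshold (the threshold is a policy choice, not a standard):

```python
from datetime import datetime, timedelta

DORMANCY = timedelta(days=90)  # illustrative policy threshold

def dormant_activation(last_activity, event_time, is_privileged):
    """Flag privileged access exercised after a long quiet period."""
    return is_privileged and (event_time - last_activity) > DORMANCY

# A DBA grant from an old incident response, untouched for months, suddenly used.
last_seen = datetime(2026, 1, 2)
now = datetime(2026, 5, 1)
```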

Mass data exports. An employee authorized to query customer records for a support ticket doesn’t need to return 500,000 rows. A single session that selects substantially more data than the user’s historical average is a signal. Particularly when combined with off-hours timing, an unusual source IP, or a sequence of progressively broader queries across multiple tables. DAM flags the volume, not just the presence of access.
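
One common way to express "substantially more data than the user's historical average" is a z-score against the identity's own history. A hedged sketch; the threshold is an assumption:

```python
import statistics

def volume_anomaly(history_rows, session_rows, z_threshold=3.0):
    """Flag sessions returning far more rows than this identity's history."""
    mean = statistics.mean(history_rows)
    stdev = statistics.stdev(history_rows) or 1.0  # avoid divide-by-zero on flat history
    z = (session_rows - mean) / stdev
    return z > z_threshold
```

A support user whose sessions normally return a few hundred rows trips the check at 500,000; a session within the normal band does not.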

Off-hours access. Legitimate database access follows business rhythms. Most users and applications have predictable activity windows. Access outside those windows isn’t automatically malicious, but it warrants scrutiny. A finance user querying payment records at 3am is not the same as an automated reporting job running a scheduled batch at 3am. DAM separates these cases when the behavioral model is calibrated correctly.
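
The calibration point is that "off-hours" is relative to each identity's own baseline, not a fixed business-hours window. Sketched, with illustrative identities:

```python
def off_hours(event_hour, baseline_hours):
    """An hour is 'off' relative to this identity's own history, not the clock."""
    return event_hour not in baseline_hours

# A finance user active 08:00-18:00 vs. a reporting job that always runs at 03:00.
finance_hours = set(range(8, 19))
report_job_hours = {3}
```

The same 3am timestamp warrants scrutiny for one identity and is baseline for the other.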

Service account anomalies. Service accounts don’t take holidays, get promoted, or change jobs. Their query patterns should be stable, predictable, and narrow. When a service account starts accessing tables it’s never queried before, running ad-hoc queries rather than parametrized application queries, or connecting from an IP outside the known application tier, that’s a meaningful signal. Compromised service account credentials are a common lateral movement technique. DAM catches the behavioral deviation even when the credential itself is valid.
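
Those three signals, a never-before-seen table, an ad-hoc rather than parametrized query, and a source outside the application tier, can each be checked independently. An illustrative sketch; the subnet, table names, and templates are assumptions:

```python
import ipaddress

APP_SUBNET = ipaddress.ip_network("10.0.2.0/24")  # known application tier (assumed)

def service_account_drift(baseline_tables, known_templates, event):
    """Three independent signals of a possibly compromised service credential."""
    flags = []
    if event["table"] not in baseline_tables:
        flags.append("never_before_seen_table")
    if event["template"] not in known_templates:
        flags.append("ad_hoc_query")
    if ipaddress.ip_address(event["source_ip"]) not in APP_SUBNET:
        flags.append("source_outside_app_tier")
    return flags

# The pipeline account suddenly reads a credentials table from an unknown address.
event = {
    "table": "user_credentials",
    "template": "SELECT password_hash FROM user_credentials",
    "source_ip": "198.51.100.4",
}
```

The credential itself is valid throughout; only the behavior gives the compromise away.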

Schema changes and DDL operations. Unauthorized schema changes, new stored procedures, modified views, dropped audit tables. These are low-frequency events that carry significant risk. DAM logs every DDL operation with full attribution and can alert immediately on changes outside defined maintenance windows.
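
A maintenance-window check over DDL is simple enough to sketch directly; the window hours and verb list below are illustrative, not exhaustive:

```python
from datetime import datetime

DDL_VERBS = ("CREATE", "ALTER", "DROP")  # simplified; real DDL grammar is broader
MAINT_WINDOW = (2, 4)                    # 02:00-04:00, an assumed change window

def ddl_alert(statement, ts):
    """Alert on any DDL executed outside the defined maintenance window."""
    is_ddl = statement.lstrip().upper().startswith(DDL_VERBS)
    in_window = MAINT_WINDOW[0] <= ts.hour < MAINT_WINDOW[1]
    return is_ddl and not in_window
```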

SQL injection patterns. Malformed or structurally anomalous queries arriving from application tiers can indicate injection attempts against the application layer. DAM sees the query as it arrives at the database, not as the application intended to send it.
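
One simplified way to spot structurally anomalous queries is to normalize literals out of each statement and compare the resulting template against the set the application is known to send. This is a naive regex sketch, not a production detector; real parsers work on the SQL grammar:

```python
import re

def normalize(query):
    """Reduce a query to its structural template by stripping literals."""
    q = re.sub(r"'[^']*'", "?", query)  # string literals -> placeholder
    q = re.sub(r"\b\d+\b", "?", q)      # numeric literals -> placeholder
    return re.sub(r"\s+", " ", q).strip().upper()

# Templates the application is known to send (illustrative).
known = {normalize("SELECT * FROM users WHERE id = 42")}
```

A parametrized variant of a known query collapses to the same template; a tautology injected into the WHERE clause produces a template the application has never sent.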


DAM vs SIEM: a distinction practitioners get wrong

DAM and SIEM are complementary, not interchangeable. But teams routinely try to use one as a substitute for the other at the database layer, and it doesn’t work cleanly in either direction.

A SIEM aggregates events from across the environment: network devices, identity providers, endpoints, cloud infrastructure, and yes, databases. Its value is cross-environment correlation. It can connect a failed login attempt at the identity provider with an unusual query pattern at the database tier and a file transfer at the endpoint. That cross-system view is what incident investigations need at scoping time.

But SIEM doesn’t understand database semantics. It receives a log entry. It doesn’t know that the query returning 50,000 rows is unusual for this specific user against this specific table, or that the stored procedure that just ran has never been executed in the 18 months of history it has for this account. That behavioral baseline, built against database-specific activity patterns, is what DAM contributes.

So: DAM provides the database-layer behavioral intelligence. SIEM provides the cross-environment correlation. The right architecture sends DAM alerts and events into the SIEM, where they’re correlated with signals from other layers. Neither replaces the other.
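
In practice that integration usually means shaping each DAM alert into a flat event the SIEM can ingest and join on shared keys. An illustrative sketch; the field names are assumptions, not any specific SIEM's schema:

```python
import json

def to_siem_event(alert):
    """Flatten a DAM alert into JSON a SIEM can ingest and correlate on."""
    return json.dumps({
        "source": "dam",
        "event_type": alert["type"],
        "identity": alert["identity"],    # join key against identity-provider logs
        "source_ip": alert["source_ip"],  # join key against network telemetry
        "db_instance": alert["db"],
        "timestamp": alert["ts"],
        "severity": alert["severity"],
    })

alert = {"type": "mass_export", "identity": "svc_billing",
         "source_ip": "203.0.113.7", "db": "rds-prod-1",
         "ts": "2026-05-01T02:14:00Z", "severity": "high"}
```

The identity and source-IP fields are what let the SIEM connect this alert to the identity-provider and endpoint signals described above.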

The database boundary problem: where DAM’s coverage ends

Here’s the constraint that every DAM deployment eventually surfaces in a real incident.

DAM sees everything up to the point where data leaves the database. The SELECT executes. The rows are returned to the client. The session closes. DAM’s visibility ends there.

What happens next is invisible to DAM entirely. The client application exports the result set to a local CSV. The CSV is copied to a staging folder. The staging folder is compressed. The archive is uploaded to a cloud storage service. All of that happens outside the database, on the endpoint, through the operating system, and out through an approved network channel. None of it appears in database activity logs.

Consider a realistic audit scenario. A regulator asks whether a specific customer’s payment data was exposed during a particular window. The DAM logs confirm the data was queried. They show the query, the account that ran it, the rows returned, the timestamp. They don’t show whether those rows were exported, where they went, whether they were further processed or shared, or whether they ever left the organization.

That’s not a failure of DAM as a control. That’s the scope of what DAM was designed to cover. The database boundary is the visibility boundary. Incidents, however, don’t respect that boundary. They start at the database and continue across systems, applications, endpoints, and channels that DAM was never designed to watch.

Visibility that ends at the database boundary is better than no visibility. It’s not the same as knowing what happened.

Why legacy DAM fails at cloud scale

Traditional DAM tools were built for on-premises data centers. Fixed database instances, predictable topology, network taps at defined perimeter points. The deployment model assumed you knew where all your databases were and could place a sensor near each one.

That assumption doesn’t survive contact with modern cloud architecture.

An enterprise running on AWS has Amazon RDS instances, Aurora clusters, Redshift data warehouses, DynamoDB tables, and potentially Athena query layers over S3, all in the same environment. Add Azure SQL Database and Google Cloud SQL for acquired subsidiaries or product teams that made independent technology choices. Layer in on-prem Oracle and SQL Server instances that haven’t been migrated. The database inventory is neither fixed nor fully known, and it’s constantly changing as engineering teams spin up new instances.

Legacy DAM tools handle this environment badly. They require manual registration of each database instance. They run agents on hosts that may be ephemeral, auto-scaling, or containerized. They produce monitoring gaps when new databases appear before someone has registered them. They generate separate consoles per environment. And they carry performance overhead that database administrators resist in production systems.

Modern DAM operates through agentless, API-first integrations against cloud-native database services. New databases are discovered and covered automatically as they’re provisioned, without manual registration. Telemetry processing is distributed and elastic, handling high event volumes without adding latency to production query paths. One control plane monitors across RDS, Redshift, DynamoDB, Azure SQL, Google Cloud SQL, and on-prem systems simultaneously, producing a unified activity view rather than per-environment silos.
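
The discovery step itself calls cloud inventory APIs (for example, listing RDS instances); the reconciliation between what exists and what DAM is actually monitoring is the part worth sketching. Instance identifiers below are illustrative:

```python
def coverage_gaps(discovered, monitored):
    """Instances present in the cloud inventory but absent from DAM coverage."""
    return sorted(set(discovered) - set(monitored))

# Inventory as returned by cloud APIs vs. what the DAM control plane covers.
discovered = ["rds-prod-1", "rds-prod-2", "aurora-analytics", "redshift-dw"]
monitored = ["rds-prod-1", "redshift-dw"]
```

Run continuously, this diff is what turns discovery into coverage: any nonempty result is a monitoring gap that appeared faster than a manual registration process would have caught it.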

That architectural shift matters for coverage. A DAM tool that doesn’t discover new databases automatically will always have gaps in environments where infrastructure changes faster than the security team’s manual registration process.

DAM use cases

Detecting insider data theft before exfiltration

An employee planning to leave takes a job with a competitor. In the two weeks before their departure, they begin querying customer tables they rarely touched before, returning larger result sets each session, at progressively later hours. DAM detects the behavioral drift: new tables, higher volumes, timing shift. The alert fires before the data reaches their laptop. Not after.

Compliance evidence for PCI DSS and HIPAA audits

Both frameworks require demonstrable access controls and audit trails for cardholder data and PHI respectively. DAM produces the tamper-resistant access logs that auditors require, mapped to the specific data assets in scope, with full attribution. The evidence isn’t assembled manually at audit time. It exists continuously as a byproduct of normal monitoring.

Service account compromise detection

A service account used by a data pipeline application connects from an IP outside the application server range and runs an ad-hoc query against a user credentials table. The application that owns this account never queries that table and always connects from a specific private subnet. DAM flags the anomaly immediately. The credential has been compromised. The detection happens at the database layer, before the attacker completes their reconnaissance.

Monitoring DBA activity in regulated environments

Database administrators have privileged access to everything. Monitoring their activity is both a compliance requirement in many regulated industries and a practical security control. DAM provides a complete, tamper-resistant record of DBA operations, including schema changes, privilege grants, and direct data access that bypasses application-layer controls.

Identifying over-privileged access in practice

A DSPM tool identifies that 40 users have read access to a financial transactions table. DAM tells you which of those 40 actually accessed it in the past 90 days. In practice, it’s usually five or six. The rest hold access they don’t use, which represents unnecessary risk. DAM usage data directly informs privilege reduction decisions.
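
The privilege-reduction comparison is a straightforward set difference between what DSPM reports as granted and what DAM observed being used. The user names and counts below are illustrative:

```python
def unused_grants(granted, accessed):
    """Grants never exercised in the lookback window: reduction candidates."""
    return sorted(set(granted) - set(accessed))

granted = {f"user{i:02d}" for i in range(40)}  # DSPM: 40 users hold read access
accessed = {"user03", "user11", "user17", "user22", "user31"}  # DAM: actually used it
```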

DAM as part of a unified security model

DAM answers who touched the data at the database layer. It doesn’t answer what happened to the data after that. It doesn’t answer whether the access was consistent with business intent when viewed across systems and over time. It doesn’t produce the downstream lineage that regulators ask for when they want to know where personal data propagated after a query returned it.

That’s the structural argument for treating DAM as one intelligence layer inside a broader platform, not as a standalone answer.

When DAM is the only layer, investigations stall at the database boundary. The SOC sees that data was accessed. They can’t determine whether it was exported, where it went, or whether the overall sequence, from access to staging to exfiltration through an approved channel, represents a material incident or normal business activity.

When DAM feeds into a unified intelligence model alongside DSPM, behavioral analytics, data lineage tracking, and endpoint telemetry, the picture changes. The query event from DAM connects to the export event logged at the endpoint, which connects to the file staging activity, which connects to the upload destination. The sequence becomes visible. Intent becomes assessable. The blast radius can be scoped in minutes rather than days.
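
The cross-layer join is conceptually simple: match a DAM query event to a later endpoint event for the same identity within a short window. A minimal sketch; the window size and event shapes are assumptions:

```python
from datetime import datetime, timedelta

def correlate(dam_events, endpoint_events, window=timedelta(minutes=30)):
    """Link a database read to a later endpoint export by the same identity."""
    chains = []
    for q in dam_events:
        for e in endpoint_events:
            same_identity = q["identity"] == e["identity"]
            soon_after = timedelta(0) <= e["ts"] - q["ts"] <= window
            if same_identity and soon_after:
                chains.append((q["id"], e["id"]))
    return chains

# A 2am query followed twelve minutes later by an export on the same user's endpoint.
dam_events = [{"id": "q1", "identity": "analyst_kim",
               "ts": datetime(2026, 5, 1, 2, 0)}]
endpoint_events = [
    {"id": "exp7", "identity": "analyst_kim", "ts": datetime(2026, 5, 1, 2, 12)},
    {"id": "exp9", "identity": "svc_other", "ts": datetime(2026, 5, 1, 2, 5)},
]
```

Each matched pair is one link in the access-to-staging-to-exfiltration chain that a database-only view cannot assemble.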

That’s what DAM was always meant to contribute: the database-layer ground truth that feeds a broader investigation narrative. Not the narrative itself.

Frequently asked questions

What is Database Activity Monitoring (DAM)?

DAM is a security control that captures and analyzes database access activity in real time, building behavioral baselines for users, service accounts, and applications, and detecting deviations that indicate misuse, compromise, or policy violation. It produces tamper-resistant audit logs and generates alerts on anomalous query patterns, privilege misuse, off-hours access, and mass data exports.

What is the difference between DAM and SIEM?

DAM monitors database-specific activity and builds behavioral baselines for database access patterns. SIEM aggregates events from across the entire environment for cross-system correlation and investigation. DAM provides the database-layer behavioral intelligence. SIEM provides the broader context. The right architecture sends DAM alerts into the SIEM so that database anomalies can be correlated with activity at the identity, network, and endpoint layers.

Is DAM agent-based or agentless?

Both deployment models exist. Agent-based DAM installs a sensor on the database host, capturing events at the OS or memory level. Agentless DAM uses database audit APIs and native logging facilities. Agent-based deployment provides deeper visibility but carries performance overhead and operational complexity. Agentless deployment is easier to scale across cloud environments where database instances are ephemeral or managed services, and is the dominant approach for modern cloud-native deployments.

What databases does DAM cover?

Coverage depends on the specific tool, but a modern DAM solution should cover major cloud-native databases including Amazon RDS, Amazon Aurora, Amazon Redshift, Amazon DynamoDB, Azure SQL Database, and Google Cloud SQL, as well as on-premises systems including Oracle, SQL Server, MySQL, and PostgreSQL. A gap in any of those environments is a gap in your database visibility.

Can DAM detect insider threats?

DAM detects behavioral anomalies at the database layer that may indicate insider threat activity: unusual query volumes, access to tables outside normal job function, off-hours activity, and dormant accounts that suddenly activate. It doesn’t detect what happens after data leaves the database, which is where most of the insider threat exfiltration story unfolds. Connecting database-layer DAM signals to endpoint and network activity is required to detect and scope the full insider threat scenario.

Is DAM required for compliance?

DAM, or functionally equivalent database access logging with anomaly detection, is required or strongly implied by several major compliance frameworks. PCI DSS requires detailed logging of all access to cardholder data and automated audit trails. HIPAA requires audit controls for systems containing PHI. GDPR and DPDP create accountability requirements that effectively demand knowing who accessed personal data and when. SOC 2 requires monitoring of logical access to production systems. DAM is the most direct technical mechanism for satisfying these requirements at the database layer.

How does DAM fit with DSPM?

DSPM tells you what sensitive data exists in your databases and what the access and configuration risk looks like at a posture level. DAM tells you who is actually accessing that data in real time and whether their behavior is consistent with the baseline for their identity and role. DSPM identifies that a table contains high-sensitivity PII and has 40 users with read access. DAM tells you which five actually queried it this month, and whether any of those five behaved anomalously when they did. Both views are necessary.

Published May 1, 2026

Ready to see Matters in Action?

Join a specialized 30-minute walkthrough. No sales fluff, just pure visibility and security intelligence.