TL;DR
For the time-constrained CISO:
- Agentless approaches are valuable but structurally incomplete for runtime security
- Agents introduce risk, but well-engineered agents reduce net organizational risk
- Five capabilities are irreplaceable: runtime observation, in-memory detection, inline enforcement, encrypted path control, and defensible attribution
- The question isn’t “agents vs. no agents” – it’s “which agents, under what controls?”
From Agentless Advocacy to Runtime Realism
For most of my career in cybersecurity, one question has followed me into almost every customer meeting, architecture review, and hallway conversation:
Can we secure the enterprise without installing agents everywhere?
Sometimes it is asked out of optimism. More often, it is asked out of fear: fear of performance impact, of operational fragility, or of being responsible for the next production outage. And for a long time, I was on the side that wanted the answer to be yes.
I come from an agentless CNAPP background. At PingSafe, where I was deeply involved in building and scaling an agentless security platform, the belief was clear: APIs, cloud-native integrations, and control-plane visibility could give security teams what they needed without touching workloads or endpoints.
I genuinely believed that agentless was not just safer, but smarter.
That belief began to evolve later in my career, particularly after PingSafe was acquired by SentinelOne. Suddenly, I was exposed to the other side of the equation: Cloud Workload Protection Platforms (CWPP) and Endpoint Detection & Response (EDR), domains where agents are not a design choice but a prerequisite.
What I saw there was not marketing; it was reality. Entire classes of runtime threats, in-memory attacks, and last-mile data exfiltration simply do not exist from an API’s point of view.
Who Can Survive Without Agents?
That forced a harder, more uncomfortable comparison.
I started mapping different types of organizations against different security models: those that could survive with a predominantly agentless approach, and those that could not.
The conclusion was consistent: the risk of not deploying agents was often far higher than the operational challenges of deploying them.
The more sensitive the data, the more regulated the environment, and the higher the security needs during runtime, the more unavoidable agents became.
At that point, my position shifted from preference to pragmatism.
The industry, in its current form, cannot function without agents. The effectiveness of predicting, detecting, and responding to cyberattacks is directly proportional to the depth and quality of visibility available.
That visibility comes from two places: APIs that describe state, and agents that observe behavior. Each is valuable. Neither is sufficient on its own.
From Avoidance to Governance
Once I accepted that agent deployment is unavoidable, the question changed entirely.
It was no longer “Can we avoid agents?”
It became: “How do we deploy agents without turning them into operational or systemic risks?”
To answer that, I began speaking with CISOs and security leaders who make these decisions under real-world constraints. The goal was simple: if agents are here to stay, can we create a practical, experience-backed guide that helps enterprises evaluate, deploy, and govern them in a way that minimizes both security and operational risk?
I am writing this three-part blog series in collaboration with a respected industry leader to cover agents end to end: their applications, how to secure them, and their role in data security use cases.
Five Capabilities Only Agents Can Deliver
1. Observe reality at execution time
APIs tell you what already happened. Agents tell you what is happening right now.
Process lineage, user session context, memory usage, file access, and application behavior converge only at runtime. If you are not present at that moment, you are guessing.
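To make the contrast concrete, here is a minimal, hypothetical sketch in Python of what runtime observation looks like from inside the workload. The `runtime_snapshot` helper and its field names are illustrative assumptions for this post, not any vendor's API:

```python
import getpass
import os
import time

def runtime_snapshot() -> dict:
    """Capture a point-in-time view of the current process.

    An agent observes data like this continuously at execution time;
    an API or audit log can only describe the state after the fact.
    """
    return {
        "pid": os.getpid(),            # the running process itself
        "parent_pid": os.getppid(),    # process lineage: who spawned it
        "user": getpass.getuser(),     # user session context
        "cwd": os.getcwd(),            # where the process is operating
        "observed_at": time.time(),    # the exact moment of observation
    }

snapshot = runtime_snapshot()
print(snapshot)
```

A real agent would stream snapshots like this for every process and correlate them over time; the point is that none of these fields exist in a cloud provider's control-plane API.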
2. Detect attacks that never touch disk
Modern breaches increasingly exist in-memory. Credentials, tokens, and sensitive data are processed, moved, and exfiltrated without ever being written to storage.
If your control layer starts at logs or network events, you are already late.
3. Enforce before damage occurs
Controls that operate asynchronously are advisory by nature.
Agents can terminate a process, block a file operation, or stop a data transfer before it leaves the system. In regulated and high-risk environments, alerting is not an acceptable outcome.
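As a hedged illustration of the inline-versus-asynchronous distinction, the sketch below gates a transfer before it executes. `TransferRequest`, `BLOCKED_DESTINATIONS`, and the size cap are hypothetical policy elements invented for this example:

```python
from dataclasses import dataclass

# Hypothetical policy: destinations and limits an inline agent might enforce.
BLOCKED_DESTINATIONS = {"personal-drive.example.com"}
MAX_TRANSFER_BYTES = 100 * 1024 * 1024  # illustrative 100 MB cap

@dataclass
class TransferRequest:
    user: str
    process: str
    destination: str
    size_bytes: int

def enforce(request: TransferRequest) -> bool:
    """Decide inline, before any bytes move, whether a transfer may proceed.

    Asynchronous controls analyze logs after the fact and can only alert;
    an inline agent sits in the path and can block the operation itself.
    """
    if request.destination in BLOCKED_DESTINATIONS:
        return False  # stop the transfer before it leaves the system
    if request.size_bytes > MAX_TRANSFER_BYTES:
        return False  # bulk movement: block, don't just alert
    return True

print(enforce(TransferRequest("alice", "rsync", "backup.internal", 1024)))
```

The design choice that matters is the return-before-action contract: the operation cannot proceed until the decision is made, which is exactly what log-based controls cannot guarantee.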
4. Control encrypted and local-only paths
Encrypted sync tools, developer workflows, proprietary clients, and offline activity routinely bypass network inspection.
If your strategy assumes decryption somewhere else, you have already accepted blind spots.
5. Provide defensible attribution
When something goes wrong, “data was accessed” is not enough. Enterprises need to know who, using which process, at what moment, and under what context.
Only endpoint agents can produce evidence that stands up to audits, regulators, and incident reviews.
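The evidence an agent captures might be modeled like the minimal record below. The `AttributionEvent` name and its fields are assumptions for this sketch, not a standard schema:

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AttributionEvent:
    """One defensible answer to: who, which process, when, what context."""
    user: str         # who accessed the data
    process: str      # using which executable
    pid: int          # the exact process instance
    action: str       # what was done
    resource: str     # what it was done to
    timestamp: float  # at what moment
    session: str      # under what context (e.g. an interactive SSH session)

event = AttributionEvent(
    user="alice",
    process="/usr/bin/python3",
    pid=4242,
    action="read",
    resource="/data/customers.csv",
    timestamp=time.time(),
    session="ssh:pts/0",
)
print(json.dumps(asdict(event)))
```

An audit log built from the control plane can usually answer only the first question; the rest of the fields are visible only to something running where the access happened.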
The Real Choice Enterprises Are Making
This is not a trade-off between risk and control. It is a choice between partial observability and runtime authority. The uncomfortable truth is that some security guarantees, especially around data protection, insider risk, and last-mile exfiltration, cannot be delivered without agents. Pretending otherwise only moves risk out of sight, not out of the system.
What’s Next
You can read Part 2 of the series here.