Inside Our 15-Day Engineering Hackathon At Matters.AI



Sony Gupta

FEBRUARY 2026

At Matters.AI, we recently wrapped up a 15-day internal hackathon that looked very different from the traditional idea of one. The goal was never to ship flashy prototypes or experiment with speculative features, but to step back and rigorously strengthen the core of the platform that everything else depends on.

As a data security company building for large enterprises, we operate in environments where scale is unforgiving, failure is highly visible, and reliability is not a “nice to have” but a baseline expectation. This hackathon was a deliberate pause to confront those realities head-on, giving the team space to build working, production-grade prototypes that directly addressed known weaknesses in the system rather than abstract future problems.

Harsh Sahu, CTO of Matters.AI, speaking during an internal engineering session


Why this hackathon had to exist

The motivation behind this hackathon traces back to a difficult but formative experience in 2025, when we lost a high-stakes proof of concept despite having a clear technical advantage over competing solutions.

At the time, the company was deeply focused on expanding coverage, rapidly adding integrations and onboarding multiple POCs in parallel to accelerate go-to-market momentum. From a capabilities standpoint, the platform was strong, and by most surface-level metrics we were delivering what customers asked for. However, during one particularly important POC, the system crashed twice, and that single failure outweighed every technical edge we believed we had.

When we conducted a deeper internal review, several uncomfortable truths surfaced. The customer was operating at petabyte-scale, which pushed our platform well beyond the conditions it had been optimized for. There were file types that we did not yet support, which introduced blind spots at exactly the scale where customers expect completeness. The underlying data structures were deeply complex, and at that level of volume and variability, our backend architecture struggled to cope in ways that were not immediately obvious during smaller deployments.

That loss fundamentally reshaped how we think about building Matters.AI. It became clear that coverage alone does not win trust in enterprise data security. Reliability, scalability, platform robustness, and operational confidence are non-negotiable, because when systems fail at scale, the cost is not just technical; it is reputational.

This hackathon was born directly out of that realization.

What we asked teams to focus on

The brief for the hackathon was intentionally narrow and unapologetically unglamorous, because the objective was not innovation at the edges but resilience at the core.

Teams were asked to focus on resolving known platform issues that were impacting reliability and performance, reducing high resource consumption and unnecessary operational costs, and introducing deeper observability so that failures could be detected internally long before customers ever experienced them. In addition, we wanted to improve integration stability under real-world load and bring greater consistency across the UI so that the platform feels cohesive and intuitive rather than fragmented.

In essence, this was a hackathon about foundations, not features, and about doing the hard engineering work that rarely makes it into release announcements but determines whether everything else succeeds.

What stood out across the hackathon

What stood out most across the fifteen days was not just the quality of execution, but the level of ownership teams demonstrated over long-standing pain points that are typically deferred because they are complex, risky, or deeply embedded in the system.

One of the most impactful outcomes was the work on Classification V2, which delivered a twenty-five-fold reduction in compute and resource utilization compared to the previous version. This was not a superficial optimization, but a fundamental rethink of how sensitive data scanning should operate at scale, with performance, reliability, and observability treated as first-class design requirements rather than afterthoughts.

More broadly, teams consistently chose to tackle problems that directly reduced operational friction, on-call burden, and system unpredictability, which are often invisible from the outside but profoundly shape how fast and confidently a company can move.

🥉 Kishan | Rethinking Enterprise Reporting from First Principles


Kishan joined Matters.AI as an intern and was recently converted to a full-time engineer, leading a project that addressed a practical, widely felt internal bottleneck: PDF report generation.

Reports play a critical role in audits, compliance workflows, customer reviews, and internal analysis, yet our existing approach made report development slow and cumbersome. Each new report typically took more than fifteen days to build, followed by another fifteen days for iterations, largely because of tightly coupled HTML and Jinja templates that required backend engineers to manually wire every variable. This created heavy cross-team dependencies, slowed delivery, and made scaling the reporting system increasingly difficult.

During the hackathon, Kishan and his team fundamentally reimagined this workflow by building a React-based, standalone PDF generation service that renders reports using reusable UI components. The service is stateless, accepts structured JSON input, and can be orchestrated using Temporal for both scheduled and on-demand report generation. This design fully decouples frontend and backend responsibilities, allowing backend teams to focus solely on data while frontend engineers build and iterate on reports independently.

One of the hardest challenges they faced was designing an accurate indexing and table-of-contents system, since page numbers are only known after rendering. This classic chicken-and-egg problem was solved through a two-pass rendering approach, where the first pass embeds markers into the document and the second pass parses the output to generate a precise index without manual intervention.
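The two-pass idea can be sketched in a few lines. The following is a minimal, hypothetical illustration in Python (not the team's actual React/Temporal service): pass one paginates the document with markers embedded in the body, and pass two scans the rendered pages to resolve each marker to a page number. Pagination here is deliberately simplified to a fixed line count per page.

```python
def render(sections, lines_per_page=40):
    """Two-pass rendering: pass 1 embeds markers, pass 2 resolves page numbers.

    `sections` is a list of (title, body_lines) pairs. Returns (toc, pages),
    where `toc` maps each section title to the page it starts on.
    Pagination is simplified: a page is just a fixed number of lines.
    """
    pages, current, marker_pages = [], [], {}

    # Pass 1: render the body, embedding a marker line at each section start.
    for title, body_lines in sections:
        current.append(f"<<marker:{title}>>")
        for line in [title] + list(body_lines):
            current.append(line)
            if len(current) >= lines_per_page:
                pages.append(current)
                current = []
    if current:
        pages.append(current)

    # Pass 2: parse the rendered output to find which page each marker landed on.
    for page_no, page in enumerate(pages, start=1):
        for line in page:
            if line.startswith("<<marker:"):
                marker_pages[line[len("<<marker:"):-2]] = page_no

    toc = [(title, marker_pages[title]) for title, _ in sections]
    return toc, pages
```

A real implementation would render actual layout in the first pass and strip the markers from the final output, but the structure is the same: page numbers only exist after rendering, so the index is derived from the render rather than predicted ahead of it.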

The result was a system that reduced report development time from weeks to one or two days, removed cross-team bottlenecks, and delivered a scalable, production-ready reporting pipeline built for enterprise needs.

🥈 Yash & Anish | Building Classification Engine 2.0


Yash Raj, a founding backend engineer on the platform team, and Anish Pawar, an ML engineer on the ML team, took on one of the most critical and high-impact systems at Matters.AI by rebuilding the classification engine from the ground up.

Classification is central to both customer trust and business outcomes, as it directly affects POC success, scan reliability, turnaround time, and operational costs. During the hackathon, Yash & Anish architected, designed, and developed Classification Engine 2.0 with a clear focus on reliability, stability, and observability, ensuring that retryability and error reporting were considered from the very beginning rather than bolted on later.

While some inspiration came from past experience, this was largely a new problem statement that required extensive brainstorming and careful system design to balance immediate impact with long-term extensibility. Early iterations surfaced issues where certain errors were not being reported correctly and jobs failed silently, which prompted a redesign of the low-level architecture to ensure that no failure could go unnoticed.
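The "no failure goes unnoticed" principle can be illustrated with a small sketch. This is a hypothetical Python wrapper, not the engine's actual code: every job records an explicit final outcome, transient errors are retried, and an exhausted retry budget is recorded before the exception propagates, so a job can never fail silently.

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("classifier")


def run_job(job_id, task, retries=3, results=None):
    """Run `task`, retrying on error; always record an explicit final outcome.

    `results` maps job_id -> ("succeeded", value) or ("failed", reason),
    so a monitoring loop can detect every terminal state.
    """
    results = results if results is not None else {}
    for attempt in range(1, retries + 1):
        try:
            value = task()
            results[job_id] = ("succeeded", value)
            return value
        except Exception as exc:
            log.warning("job %s attempt %d failed: %s", job_id, attempt, exc)
            if attempt == retries:
                results[job_id] = ("failed", str(exc))  # recorded, never silent
                raise
            time.sleep(0)  # placeholder for real backoff
```

The design point is that the failure record is written before the exception is re-raised: observability is not conditional on the caller handling the error correctly.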

The outcome was a classification system that consistently completes its work, delivers more than twenty-four times greater cost efficiency, achieves scan success rates above eighty-five percent, and dramatically reduces resource footprint while improving scan completion time. Beyond the metrics, the system restored internal confidence, enabling teams across the company to build and sell with greater assurance in the platform’s ability to perform at scale.

🥇 Shubham | Eliminating Operational Drag at the Core


Shubham, a founding backend engineer on the platform team, focused on one of the most persistent and costly problems facing the organization: operational overhead during POCs and the resulting on-call burden.

The goal was to make POCs faster and largely self-sufficient, reducing the need for constant engineering intervention and firefighting. Drawing from real-world experience at Matters.AI and previous organizations, Shubham identified recurring failure patterns and built practical, production-ready guardrails rather than theoretical solutions.

His work resulted in two key deliverables. The first was a scan-limiting mechanism that allocates resource limits across offerings, ensuring compute and cloud costs do not spiral out of control during POCs, even while crawlers are actively discovering new files. The second was a revamped CSM dashboard built on Appsmith, with stronger validations and clearer visibility, enabling L1 engineers and support teams to manage configurations and on-prem deployments without relying on core engineers.
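A scan-limiting mechanism of this kind might look like the following minimal sketch (hypothetical names and an even split; the post does not describe the real allocation policy): each offering gets a share of a total scan budget, and scans that would exceed their share are rejected even while crawlers continue discovering files.

```python
class ScanBudget:
    """Allocate a total POC scan budget across offerings; reject overruns."""

    def __init__(self, total_files, offerings):
        # Split the budget evenly; a real system might weight by offering
        # size or priority.
        share = total_files // len(offerings)
        self.remaining = {name: share for name in offerings}

    def try_consume(self, offering, n_files):
        """Reserve n_files for a scan; return False once the limit is hit.

        Discovery can keep running, but scanning (and its compute cost)
        stops at the cap instead of following every newly found file.
        """
        if self.remaining.get(offering, 0) < n_files:
            return False
        self.remaining[offering] -= n_files
        return True
```

For example, with a 1,000-file budget across two offerings, each offering can scan 500 files before `try_consume` starts returning `False`, which is what keeps compute and cloud costs bounded during a POC.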

These changes had immediate impact, enabling POC mode to automatically apply limits, surface usage clearly, and prevent surprise cloud bills. More importantly, they reduced operational friction, lowered on-call burden, and allowed engineering teams to focus on building rather than reacting.

Why this MATTERS

In our space, many competitors operate with billions of dollars in funding, but this hackathon reinforced that our advantage comes not from matching their spend, but from being sharper, more disciplined, and deeply customer-obsessed.

The team demonstrated a shared understanding that delivering value quickly requires a platform that is stable, scalable, and reliable, and that real speed comes from fixing root causes rather than building around limitations. Instead of avoiding hard problems, engineers took full ownership of them, which is what allows us to compete effectively and move with confidence.

Our product roadmap remains well defined, and the direction we are building toward has not changed. What this hackathon accomplished was strengthening the foundation that the roadmap depends on, making it significantly easier to ship faster, scale with confidence, and deliver value earlier in the customer journey.

In many ways, this hackathon was not about what we build next, but about ensuring that everything we build next matters.
