CASE STUDY 01

Winning the Bid for a National Biometric Platform

Before automation, fingerprint identification was slow.

Cards were collected, processed, and eventually compared against a database. The response might take weeks. In some cases, that meant a person could be released before identification was complete.

The goal was to change that—to move to a system that could respond in near real time, at national scale.

That meant tens of thousands of transactions per day, dozens of different workflows, and a massive database. But the real challenge was not just performance. It was how the work moved through the system.

At the time, transactions were treated as single units. A request would come in and be processed end to end. That approach breaks down when the workload is mixed—urgent and non-urgent, simple and complex—all competing for the same resources.

The system could not meet its requirements that way.

My role was to build a simulation of the system—something that could model how it would behave under real workloads before it was fully built. That meant modeling compute, storage, and the behavior of the matching algorithms themselves.
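The kind of model described above can be reduced to a small discrete-event simulation. This is only an illustrative sketch, not the actual tool: the names, rates, and parameters (`arrival_rate`, `service_time`, `servers`) are assumptions, and real workloads are far less uniform than exponential draws.

```python
import heapq
import random

def simulate(arrival_rate, service_time, servers, n_transactions, seed=0):
    """Toy discrete-event model: transactions arrive at random, wait for
    the earliest free matcher, and are processed end to end.
    Returns the mean response time (wait + service)."""
    rng = random.Random(seed)
    free_at = [0.0] * servers          # time at which each matcher frees up
    heapq.heapify(free_at)
    t = 0.0
    total_response = 0.0
    for _ in range(n_transactions):
        t += rng.expovariate(arrival_rate)      # next arrival time
        start = max(t, heapq.heappop(free_at))  # wait for earliest free matcher
        finish = start + rng.expovariate(1.0 / service_time)
        heapq.heappush(free_at, finish)
        total_response += finish - t
    return total_response / n_transactions

# e.g. 10 requests/sec, 0.5 sec mean match time, 8 matchers
print(simulate(arrival_rate=10.0, service_time=0.5, servers=8,
               n_transactions=50_000))
```

Even a model this crude exposes the core sizing question: how response time degrades as utilization rises, and how many matchers a given load actually needs.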

Working through that model, it became clear that the problem was not the algorithms or the hardware alone. It was the definition of the work.

We changed the unit of work.

Instead of treating each transaction as a single block, we broke it into smaller pieces—atomic units that could be scheduled independently. That allowed the system to prioritize urgent requests, keep resources fully utilized, and meet very different response-time requirements at the same time.
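The scheduling idea can be sketched in a few lines. This is a minimal illustration of the principle, not the production design: the class names, the pipeline stages (`segment`, `match`, `verify`), and the priority values are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WorkUnit:
    priority: int                   # lower value = more urgent
    seq: int                        # FIFO tie-break within a priority
    task: str = field(compare=False)

class Scheduler:
    """Toy scheduler: each transaction is split into atomic units that
    queue independently by priority, so an urgent request is never
    stuck behind a long-running batch transaction."""
    def __init__(self):
        self._q, self._seq = [], 0

    def submit(self, txn_id, units, priority):
        for u in units:
            heapq.heappush(self._q, WorkUnit(priority, self._seq, f"{txn_id}:{u}"))
            self._seq += 1

    def drain(self):
        while self._q:
            yield heapq.heappop(self._q).task

s = Scheduler()
s.submit("batch-42", ["segment", "match", "verify"], priority=5)
s.submit("urgent-7", ["segment", "match"], priority=1)
# urgent-7's units run first, even though batch-42 arrived earlier
print(list(s.drain()))
```

Because the unit of scheduling is now small, a worker is committed for one unit at a time rather than a whole transaction, which is what lets urgent and batch work share the same hardware without idle capacity reserved for either.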

The simulation showed that this approach would work. More importantly, it showed that it would work within a smaller hardware footprint than originally assumed.

That mattered.

This was a competitive federal program. There was a defined budget environment, an expected range of bids, and a concept known as “price to win”—the level at which a compliant solution could realistically succeed against competitors.

The system was hardware-intensive. The initial deployment represented a large capital cost, and that cost dominated the proposal. The ongoing operational team was relatively fixed. The primary lever available was the size of the hardware solution.

If the system required more hardware, the bid would be higher. If the bid was higher, it might not win.

The architecture we developed—and validated through simulation—reduced the required hardware footprint while still meeting performance requirements. That directly affected the price that could be proposed.

Without that, the team would have been forced to bid a larger system.

And a larger system might not have been competitive.

The final proposal was not the result of a single idea. It was the work of a strong team across engineering, capture, and program leadership. But this piece—the ability to meet the requirements with a smaller, more efficient architecture—was part of what made the bid viable.

The system was built along those lines. It performed as expected.

And we won.