CASE STUDY 03
Restructuring: TSA Transportation Worker Identification Credential (TWIC) Operations
When I took over the program, it wasn't a performance problem. It was a structural failure.
This was a federal credentialing system tied to port security. Roughly 750,000 workers needed to enroll in the first phase, with ongoing volume that would push total enrollments to around a million over a five-year period. The work spanned more than a hundred ports across the United States and its territories. It was visible, politically sensitive, and not optional.
The prime contractor had already stood the system up. On paper, it existed. In reality, it was collapsing.
The numbers told the story. At steady state, the program was generating roughly half a million dollars a month in revenue. It cost around three million dollars a month to operate. That gap, roughly two and a half million dollars a month, wasn't going to close on its own. The underlying model made it impossible.
They had built a fixed-cost network: leased offices across the country, staffed with hourly workers whether or not there was demand. In the early surge, that approach worked well enough. Once volumes normalized, it became catastrophic. Costs stayed high. Throughput dropped. Every month made the problem worse.
And the program couldn't be paused. Workers still needed credentials. Ports still needed to function. Congress was watching. The unions were watching. The Transportation Security Administration was watching. If the system failed, it wouldn't fail quietly.
The job wasn't to optimize the system. It was to replace it — without shutting it down.
The first realization was that this wasn't a staffing problem or a scheduling problem. It was an economic model problem. The cost structure was upside down. You couldn't fix it with better management. You had to change how the work was delivered.
We moved from a fixed infrastructure model to a distributed partner model.
Instead of owning and staffing every location, we began building a network of partners who already had facilities and people in place. They would operate on a per-transaction basis. That aligned cost directly with demand. If volume dropped, cost dropped. If volume increased, the system could scale without adding fixed overhead.
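To make the cost mechanics concrete, here is a minimal sketch of the two models. Only the rough figures above come from the program (the ~$3M monthly burn, the ~$0.5M steady-state revenue); the per-transaction rate, per-enrollment revenue, and volume levels are hypothetical, invented purely for illustration.

```python
# Minimal sketch of the two cost models. The ~$3M fixed burn and the
# ~$0.5M steady-state revenue echo the figures in the case study; the
# per-transaction rate, per-enrollment revenue, and volumes are hypothetical.

FIXED_OVERHEAD = 3_000_000        # legacy network: leases plus hourly staff
RATE_PER_TRANSACTION = 40.0       # hypothetical partner fee per enrollment
REVENUE_PER_ENROLLMENT = 132.50   # hypothetical; 4,000/month is ~$530K revenue

def fixed_model_cost(volume: int) -> float:
    """Legacy model: cost is flat no matter how many workers show up."""
    return FIXED_OVERHEAD

def partner_model_cost(volume: int) -> float:
    """Partner model: cost tracks demand, paid per completed enrollment."""
    return volume * RATE_PER_TRANSACTION

for volume in (1_000, 4_000, 10_000):
    revenue = volume * REVENUE_PER_ENROLLMENT
    print(f"{volume:>6}/mo  "
          f"fixed model: {revenue - fixed_model_cost(volume):>+12,.0f}   "
          f"partner model: {revenue - partner_model_cost(volume):>+12,.0f}")
```

The sketch makes the point the program lived: at steady-state volume the fixed model runs a deep monthly loss that only a multiple of the demand could erase, while under the per-transaction model every volume level clears a positive margin. That is the alignment the restructuring was after.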
That sounds straightforward. It wasn't.
Every location had constraints. They needed to be within a defined distance of a port. They needed to meet accessibility requirements. They needed to pass government certification. Many locations didn't have obvious partners nearby. In some cases, we were negotiating exceptions — five miles versus six — just to make the geography work.
At the same time, we were unwinding the existing system.
There were leases that couldn't be terminated overnight. Labor contracts that had to be honored. Equipment that had to be redeployed. Every decision had financial consequences, and every delay extended the losses.
So we ran the transition as a parallel system.
We built the new network while the old one continued operating. As soon as a new partner location was ready — contracted, trained, certified — we would cut that site over. That required coordination across multiple layers: customer communications, scheduling systems, call center scripts, local readiness. Multiply that by roughly 150 to 200 locations, each with its own timeline, and you get a sense of the operational load.
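A rough sketch of how that per-site gate can be tracked, with hypothetical field names standing in for the real contracting, training, certification, and readiness checklist:

```python
# Rough sketch of a per-site cutover gate. Field names are hypothetical
# stand-ins for the actual checklist, which was longer and government-defined.

from dataclasses import dataclass

@dataclass
class Site:
    name: str
    contracted: bool = False
    trained: bool = False
    certified: bool = False          # government certification
    comms_updated: bool = False      # customer communications
    scheduling_live: bool = False    # scheduling systems
    scripts_updated: bool = False    # call center scripts

    def ready_to_cut_over(self) -> bool:
        """A site flips to the partner network only when every gate clears."""
        return all((self.contracted, self.trained, self.certified,
                    self.comms_updated, self.scheduling_live,
                    self.scripts_updated))

network = [Site("Tampa", True, True, True, True, True, True),
           Site("Cocoa", True, True, False)]   # blocked on certification

for site in network:
    status = "cut over" if site.ready_to_cut_over() else "still on legacy"
    print(f"{site.name}: {status}")
```

The design point is the parallel run: every site stays on the legacy network until its entire gate clears, so a stalled item at one location delays that location without pausing the program.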
Every week, we were walking through the entire network, location by location, with both the contractor and the government. Here's where we stand in Tampa. Here's what's blocking Cocoa. Here's what changes next week in Louisville.
Internally, there was pressure. The financial picture kept shifting. I had to explain why we were still writing large checks to legacy providers while, at the same time, making sure we didn't rush transitions that would break service.
There was no clean moment where the system flipped from broken to fixed. It was a continuous reconfiguration, with risk at every step.
But the economics started to turn.
As more locations moved to the partner model, the cost curve bent. Within about six months, the program reached break-even. Within nine, it was profitable. Not dramatically, but sustainably. The bleeding stopped. The system stabilized.
Just as important, the external pressure disappeared. The government was no longer dealing with a failing program. The contractor was no longer facing escalating losses. The network functioned as a normal, predictable operation.
When the next major contract came up for recompete, we weren't trying to defend a broken system. We were the de facto incumbent of a working one. We won that work.
At one point, the original prime contractor approached me about participating under our structure.
That's how far the system had shifted.