Case study · Telecom · 5G infrastructure
How I helped a regional cell-tower operator cut more than 10,000 labor hours a year out of their 5G rollout — by measuring the waste, killing the bottlenecks, and moving the slow work off-site.
At a glance
Installing 5G equipment on-site was time-intensive and regularly stalled by unpredictable software updates. A typical site carried three to four modules, each waiting on the last. The fix wasn't new hardware — it was field analytics, parallel work, and moving the slow part off-tower entirely.
Headline stats
10K+
Labor hours saved annually across the client roster.
19%
Reduction in update failure rate from console-signature monitoring.
<1%
Software-update share of total integration time, down from ~20%.
$26.8K
Combined first-year savings from Phases 02 + 03 alone.
01
The challenge
Installing new 5G equipment on-site was a time-intensive process, regularly stalled by unpredictable software updates. A typical site carried three to four modules, and each one needed updates that often had to be chained in phases — legacy first, then latest.
They also failed at random. When that happened, the technician started over.
The ripple effects
Symptoms
Each one on its own timer, each one waiting on the last.
Typical time to push updates to a single module, before any failure or retry.
Tower techs and foremen paid to wait on a software update that might fail anyway.
02
Why I got the call
The operators I was working with didn't want another consultant to describe their problem back to them in a slide deck. They wanted someone on the ground who would measure the waste, call it out, and fix it.
That's what I do. I work fast, I communicate directly, and I bring the numbers.
Three reasons I got the call
Same-day replies. Clear updates from the field — not monthly readouts.
I measure before I suggest. Every process change had to show up in the numbers.
I offered to instrument the work itself — so waste stopped being a story and started being a number.
Most consultants spend more time making presentations about the work than doing it. I cut the bloat and execute on day one.
03
Phase 01 — Measure
Before changing anything, I needed to see where the time was actually going. I set up milestone-based timestamping in Asana with built-in time tracking, so every integration logged the moments that mattered.
Two things dominated. Troubleshooting was bigger, but it varied with site config, hardware, and technician skill — too many confounds to fix cleanly. Software updates were different. Consistent, repeated, solvable with process alone. That's where I went first.
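To make that concrete, here is a minimal sketch of the kind of rollup those timestamps feed, assuming the milestones get exported to a CSV with one row per milestone per integration. The file name, column names, and categories are illustrative, not the client's actual export.

```python
# Illustrative rollup of exported milestone timestamps (hypothetical CSV layout):
# integration_id, milestone, started_at, finished_at
# Sums elapsed time per milestone so the biggest waste categories surface first.
import csv
from collections import defaultdict
from datetime import datetime

def summarize(path="integration_timestamps.csv"):
    totals = defaultdict(float)  # milestone -> total hours
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            start = datetime.fromisoformat(row["started_at"])
            end = datetime.fromisoformat(row["finished_at"])
            totals[row["milestone"]] += (end - start).total_seconds() / 3600
    for milestone, hours in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{milestone:<22} {hours:8.1f} h")

if __name__ == "__main__":
    summarize()
```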
What got timestamped
04
Phase 02 — The obvious win
The easiest bottleneck to kill was also the most embarrassing: the integrators were running updates one-to-one because they each had one laptop. I bought every integrator an extra laptop.
That's it. That was the first change. It cut roughly 45 minutes of waste per integration.
What it cost. What it returned.
$1,200 × 3 integrators — additional laptops so updates run in parallel.
Based on 2.5 integrations per week across 3 integrators, ~45 min saved each.
Midwest telecom integrator salary basis. Does not include tower-crew time saved.
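For anyone who wants to check the payback math, the sketch below reruns it with the figures quoted above. The working-weeks count and the loaded hourly rate are placeholders, not the client's actual salary basis, and the $26.8K figure elsewhere on this page also folds in Phase 03.

```python
# Back-of-the-envelope payback on the extra laptops. Each input is either quoted
# in the case study or an explicitly labeled placeholder.
integrators = 3
laptop_cost = 1_200 * integrators                  # $3,600 one-time outlay
integrations_per_week = 2.5                        # per integrator (from the study)
minutes_saved_per_integration = 45                 # from the study
working_weeks_per_year = 50                        # assumption
loaded_hourly_rate = 45.0                          # $/hr placeholder, not the real basis

hours_saved_per_year = (
    integrators * integrations_per_week
    * (minutes_saved_per_integration / 60) * working_weeks_per_year
)
annual_savings = hours_saved_per_year * loaded_hourly_rate
payback_months = laptop_cost / annual_savings * 12

print(f"{hours_saved_per_year:.0f} h/yr saved, ${annual_savings:,.0f}/yr, "
      f"payback in about {payback_months:.1f} months")
```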
05
Phase 03 — Catch failures early
Random failures were the next item in the Asana data. They usually surfaced past the fifteen-minute mark, which meant the technician had already burned most of an update cycle by the time the thing gave up.
I opened Chrome's developer tools and started reading the console during updates. The pattern came out fast. Specific console errors consistently predicted a failing update — same error signatures on every failed run, surfaced in the first 3–5 minutes.
Short process change: when those errors appear, interrupt and restart. No more twenty-minute dead runs.
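The actual fix was a checklist item: the technician watches the console for the first few minutes and kills the run if the known signatures show up. If you wanted to script that same watch, a rough sketch using Selenium to read Chrome's console log could look like the following. The error strings and the module's update URL are placeholders, not the real signatures.

```python
# Rough sketch: poll Chrome's console log during an update and bail out early
# when a known failure signature appears. The signatures and URL below are
# placeholders; the real ones came out of watching failed runs in DevTools.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

FAILURE_SIGNATURES = [
    "firmware checksum mismatch",    # placeholder, not the actual error text
    "update session token expired",  # placeholder
]

options = Options()
options.set_capability("goog:loggingPrefs", {"browser": "ALL"})
driver = webdriver.Chrome(options=options)
driver.get("http://module.local/update")  # hypothetical module update page

def doomed(window_minutes=5, poll_seconds=15):
    """Return True if a known failure signature shows up in the first few minutes."""
    deadline = time.time() + window_minutes * 60
    while time.time() < deadline:
        for entry in driver.get_log("browser"):
            if any(sig in entry.get("message", "") for sig in FAILURE_SIGNATURES):
                return True   # interrupt now and restart the update
        time.sleep(poll_seconds)
    return False              # no early warning; let the update run out

if __name__ == "__main__":
    print("Restart the update" if doomed() else "Let it run")
    driver.quit()
```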
Result
Continued Asana tracking confirmed the drop.
The waste category that started the engagement effectively disappeared.
06
Phase 04 — The biggest lever
The largest savings weren't on-site at all. Once I understood the waste, the question was obvious: why are we doing updates on a tower in the first place?
We didn't have to. With the right power and shelving in a warehouse, I could stage modules, update them in bulk, and ship them to the site already configured.
What changed
Bulk runs in a controlled environment, not on a tower in the wind.
One person managing multiple updates at once (see the sketch below).
No more waiting around on the tower for new-module installs.
Not every site needs new equipment — reused gear still has to be updated on-tower. The parallel-update and error-monitoring wins from Phases 02 and 03 still apply to those sites.
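For the staged sites, the one-person-many-modules point above is the whole win. A minimal sketch of what that looks like in practice, assuming each module exposes some scriptable update step; run_update() and the module IDs are stand-ins, not real tooling.

```python
# Sketch of warehouse bulk staging: one operator kicks off updates for a shelf of
# modules concurrently instead of one at a time on a tower. run_update() is a
# stand-in; swap in whatever the vendor's real update step is.
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

STAGED_MODULES = ["rack1-slot1", "rack1-slot2", "rack1-slot3", "rack2-slot1"]  # placeholders

def run_update(module_id: str) -> bool:
    """Placeholder for the vendor's update process on one staged module."""
    time.sleep(1)   # stands in for the real update run
    return True     # stands in for a pass/fail check on the module

def update_batch(modules, max_parallel=8):
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {pool.submit(run_update, m): m for m in modules}
        for future in as_completed(futures):
            module = futures[future]
            print(f"{module}: {'ok' if future.result() else 'failed, re-queue'}")

if __name__ == "__main__":
    update_batch(STAGED_MODULES)
```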
Final impact
Labor saved per integration
1–2 hours, which works out to 5–10 hours per crew, per week.
Tower-crew wait time
3–8 hours less waiting at every site.
Annual labor hours saved
10,000+ across the client roster.
Phase 02 + 03 savings
$26,879 combined per year (integrator labor only).
What was bought to enable it
Laptops. Asana time tracking. A warehouse staging setup. No new vendors.
Crew efficiency up. Field safety up. Delivery timelines tighter. Operational cost down. All from process innovation and real-time field analytics — no new vendors, no re-platforming, no six-month transformation program.
Operational insight plus technical hands-on work. At the scale of a 5G rollout, that's millions in labor savings over time.
Have a similar problem?
A short note is fine. I reply the same day, usually within an hour.
Email direct
Browse more work