The "Old Bob" Problem: Why Field Tech Fails
How to translate tribal knowledge into rules that automation can actually understand.
On paper, the new Field Service Management (FSM) software looks perfect. The demo showed optimized routes, instant parts allocation, and AI-driven dispatching.
In reality, you deploy it, and three weeks later:
Technicians are ignoring the tablets.
Dispatchers are manually overriding every single “optimized” route.
Data quality is worse than when you used clipboards.
Why does this happen?
It happens because your operation runs on “Old Bob.”
Old Bob knows you can’t send a 20-foot truck to the downtown loading dock after 8:00 AM. Bob knows that the customer at 12 Main St. has a gate code that isn’t in the CRM. Bob knows which machines need a wrench and which ones need a kick.
When you automate, you are trying to replace Bob’s intuition with binary code. But if you don’t extract Bob’s knowledge first, your shiny new automation will blindly execute logic that fails in the real world.
Automation is a force multiplier. If your current process is chaotic—relying on tribal knowledge and “pencil-whipping” forms—automation will just execute that chaos at light speed.
Here is the practical, boots-on-the-ground guide to fixing your governance before you turn the robots on.
Phase 1: The Teardown (Before You Automate)
You can’t code your way out of a process problem. You have to fix the workflow in the mud before you fix it in the cloud.
1. Exorcise the “Tribal Knowledge”
Every field org relies on shadow processes to get the job done.
The Trap: Automation doesn’t know what Old Bob knows. If you replace him with an algorithm without extracting his constraints, your fleet hits a wall.
The Fix: Map the workflow as it actually happens, not as it’s written in the handbook. Interview the veterans. Find the “shadow rules” they follow.
The Rule: If it’s in a head, it can’t be automated. Get it on paper.
2. Kill the “Use Your Best Judgment” SOPs
Field techs survive on judgment. Software runs on binary logic (0s and 1s). You cannot feed “common sense” into an algorithm.
The Problem: Your safety manual says “Do not operate in unsafe wind conditions.” A human knows what that means. A machine does not.
The Fix: Translate “unsafe” into data. “If wind speed > 30mph, Lockout Boom Operation.”
The Action: Go through your SOPs. Find every vague adjective ("urgent," "safe," "clean") and turn it into a measurable threshold.
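To make the translation concrete, here is a minimal sketch of the wind-speed rule above as a threshold check. The 30 mph limit comes from the example in the text; the function and variable names are illustrative, not from any particular FSM product:

```python
# Sketch: turning "do not operate in unsafe wind conditions" into a
# measurable rule a machine can evaluate. Names here are hypothetical.

WIND_LIMIT_MPH = 30  # the measurable threshold that replaces "unsafe"

def boom_operation_allowed(wind_speed_mph: float) -> bool:
    """Return False (lockout) when wind speed exceeds the limit."""
    return wind_speed_mph <= WIND_LIMIT_MPH

print(boom_operation_allowed(12.0))  # True  -> operation permitted
print(boom_operation_allowed(34.5))  # False -> lockout boom operation
```

The point isn't the code itself; it's that "unsafe" became a number a sensor can supply and an algorithm can act on.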
3. Define the “Golden Rule” of Routing
Every algorithm encodes a series of trade-offs. If you don’t tell the machine what to value, it will guess—and it will guess wrong.
The Trap: You ask for “efficient routing.” The AI interprets that as “fewest miles.” It sends your heavy haulers through residential school zones to save 0.4 miles.
The Fix: Explicit constraints.
Constraint A: No left turns across 4-lane highways.
Constraint B: Home by 5:00 PM is more important than fuel savings.
Constraint C: VIP customers get a 2-hour window, everyone else gets 4.
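One way to make those trade-offs explicit is to split them into hard constraints (a violating route is rejected) and weighted soft costs (being home by 5:00 PM dominates fuel savings). This is a hypothetical sketch, not a real routing engine; all field names and weights are illustrative:

```python
# Sketch: explicit routing constraints instead of "efficient routing."
# Hard constraints reject routes; soft costs rank the survivors.

def route_is_allowed(route: dict) -> bool:
    """Constraint A (hard): no left turns across 4-lane highways."""
    return route.get("left_turns_across_4_lane", 0) == 0

def route_cost(route: dict) -> float:
    """Soft costs: lower is better. Constraint B makes lateness
    vastly more expensive than a few extra miles of fuel."""
    LATE_PENALTY = 1000.0  # per hour past 5:00 PM
    FUEL_WEIGHT = 1.0      # per mile
    cost = FUEL_WEIGHT * route["miles"]
    if route["finish_hour"] > 17:  # past 5:00 PM
        cost += LATE_PENALTY * (route["finish_hour"] - 17)
    return cost

candidates = [
    {"miles": 42.0, "finish_hour": 16.5, "left_turns_across_4_lane": 0},
    {"miles": 38.0, "finish_hour": 18.0, "left_turns_across_4_lane": 0},
]
best = min((r for r in candidates if route_is_allowed(r)), key=route_cost)
print(best["miles"])  # 42.0 -- the shorter route loses because it runs late
```

Without the explicit penalty, "fewest miles" would have picked the 38-mile route and kept the tech out past quitting time.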
Phase 2: The Rollout (Boots on the Ground)
Don’t deploy from the boardroom. Deploy from the passenger seat.
4. Stop Incentivizing Workarounds
The fastest way to kill a new system is to pay people to ignore it.
The Reality: You give techs a 50-step digital safety checklist. It takes 10 minutes to sync. You also pay them a bonus for completing 6 jobs a day.
The Result: They will “pencil-whip” (fake) the checklist to get the bonus. Your data becomes garbage.
The Fix: Adjust the KPIs. If you want high-quality data input, you have to allow “wrench time” for it. Supervisors must reward the tech who flagged the safety hazard in the app, not just the one who raced through the day.
5. The “Ride-Along” Stress Test
Do not trust the “Success” metrics on your dashboard in the first month.
The Action: Send your operations managers on ride-alongs.
The Test: Watch the tech’s thumbs. Are they fighting the screen? Are they rebooting the device? Are they writing things on their hand because the UI is too slow?
The Insight: If the tool is harder to use than the problem it solves, the field will reject it. Fix the friction before you scale.
Phase 3: The Reality Check (Handling Exceptions)
The map is not the territory. The GPS doesn’t know the road is flooded.
6. The “Big Red Button” (Authorized Deviation)
Field ops is unpredictable. A rigid system that allows zero deviation is dangerous.
The Rule: Automation executes; humans navigate exceptions.
The Protocol: Build a clear “Override” path. If the algorithm says “Go,” but the driver sees ice, the driver wins.
The Catch: The driver must tag the override with a reason code (e.g., “Weather Hold”). This turns a failure into a data point you can use to improve the model.
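A minimal sketch of what that override record might look like, assuming a small fixed set of reason codes (the codes, field names, and job ID below are hypothetical, not from any specific FSM system):

```python
# Sketch: an authorized-deviation record. Requiring a coded reason
# (not just free text) turns each override into a reusable data point.
from dataclasses import dataclass
from datetime import datetime, timezone

REASON_CODES = {"WEATHER_HOLD", "ROAD_CLOSED", "CUSTOMER_REQUEST", "SAFETY_STOP"}

@dataclass
class Override:
    job_id: str
    reason_code: str
    note: str
    timestamp: datetime

def record_override(job_id: str, reason_code: str, note: str = "") -> Override:
    """Reject untagged overrides so every deviation can feed the model."""
    if reason_code not in REASON_CODES:
        raise ValueError(f"Unknown reason code: {reason_code!r}")
    return Override(job_id, reason_code, note, datetime.now(timezone.utc))

entry = record_override("JOB-1042", "WEATHER_HOLD", "Ice on bridge deck")
print(entry.reason_code)  # WEATHER_HOLD
```

The driver still wins the argument with the algorithm; the reason code just makes sure the system learns why.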
7. Post-Mortems on “The Ghost in the Machine”
When things break, don’t just blame the glitch.
The Action: When a route fails or a part is missing, trace the decision chain.
The Question: Did the AI fail? Did the tech fail? Or did the Governance fail (i.e., we fed the system a bad rule)?
The Mindset: Treat your governance rules like your physical equipment. They need maintenance, lubrication, and occasional replacement.
The Bottom Line: Governance is Your Chassis
Think of your operation like a service truck.
Automation is the engine (speed).
AI is the GPS (direction).
Governance is the chassis and the brakes.
If you drop a jet engine (AI) into a rusted-out chassis (bad governance) and hit the throttle, you won't break a record. You’ll just tear the truck apart.