Safety in the System We Actually Run
“The system will always blame the hand closest to the lever.” — James Reason
Author’s Note
My understanding of safety doesn’t come from policy manuals or audits alone. I’ve conducted hundreds of root cause analyses after accidents and sat in too many debriefs after real people got hurt at work. I’ve seen good leaders wrestle with impossible tradeoffs and frontline people carry risk they didn’t create.
At the same time, I’ve witnessed organizations explain outcomes in meticulous detail while deliberately avoiding the decisions that made those outcomes inevitable. This piece is written for those who recognize that experience immediately.
If you manage people in operations, you already know the gap. What’s harder to admit is that the system depends on you to absorb the disconnect.
When a serious injury occurs on the shop or warehouse floor, senior leaders usually descend quickly and act decisively. Their discussion centers on local conditions: housekeeping, maintenance, procedure, and the site’s adherence to standards.
The questions are sharp. Accountability snaps downward immediately. The implication is clear: this should not have happened. Senior leaders are quick to assign accountability to site leadership and close the loop. What does not enter the room is just as clear, and far more damning.
Too often, the questions never revisit the maintenance budget that was slashed to save expenses, or the aging equipment kept in service well past its intended useful life. In one incident I witnessed professionally, senior leaders failed to connect the oil persistently leaking across the shop floor to capital decisions approved far from the site. Those decisions are spread across time and committees, insulated from the moment of injury. Rather than acknowledge the extent or root cause of the problem, senior leaders made sure the leaking oil stuck to the local team when someone got hurt.
Corporate safety programs, meanwhile, run exactly as designed. Training is completed. Audits are done. Incident rates are tracked. From a corporate standpoint, the system was compliant. And yet someone still got hurt. That disconnect is not abstract. It is structural. Managers live inside it every day with scorecards, monthly targets, utilization rates, and cost-per-unit expectations. As a manager, you are accountable for outcomes produced by systems you did not design and often do not fully understand, operating within constraints that are presented as fixed rather than chosen.
You are expected to deliver results inside constraints you did not fully set—staffing levels, maintenance cycles, equipment condition, annual volume. You may technically control how those constraints are managed day to day, but you did not control the level at which they were funded. That distinction matters.
Senior leaders will say sites have autonomy: to set schedules, to prioritize maintenance, to decide how staffing is allocated within budget. What goes unspoken is that those choices occur after the most consequential decisions have already been made. Budgets establish the ceiling. Capital plans establish the horizon. Headcount targets establish the margin. Autonomy exists only inside those bounds. You can manage tradeoffs you didn’t choose, but you cannot manage safety you were never funded to create.
When you’re told “Safety First,” what’s actually being communicated is usually far more precise: manage safety without disrupting throughput. You are asked to decide when to interrupt production without knowing whether the risk you’re seeing will materialize, and without the option to materially change the conditions that made the decision necessary. This is the part almost no one says out loud.
Safety effort lives locally. Safety capacity is set elsewhere. And the distance between the two is where managers are asked to make judgment calls that carry real risk: operationally, professionally, and even legally.
In theory, stopping work for safety is celebrated. You will be praised for being an advocate. In practice, it is often a professionally risky decision, especially when the feared outcome does not occur. Slowing production for a risk later judged “unfounded” is remembered, questioned, quietly counted against you and never fully forgiven. The production numbers still miss. The deadlines still slip. The explanation still has to be given.
Everyone understands why this happens. We are here to get product out the door. That is the real constraint. The language of safety exists alongside that reality, but it does not override it. Instead, it provides cover while the burden of deciding when to invoke it is pushed onto individuals.
You recognize it because you’ve felt it in your body. It shows up as a pause. A hesitation. A moment where something doesn’t feel right, but nothing is obviously wrong. Stopping feels premature. Continuing feels uncomfortable. Either way, you know you will own the outcome.
If you slow things down and nothing happens, the delay is visible and questioned.
If you do not and nothing happens, the decision disappears. When something does happen, the moment is replayed later as if the risk had been obvious all along.
You know this. It shapes how you lead, whether you say it or not.
Managers are not separate from the frontline experience. You are simply positioned where the contradiction becomes unavoidable. You often see conditions forming before anyone gets hurt. You know when maintenance was deferred too long, when staffing is too thin for the volume, when equipment is being asked to do more than is realistic. You also know exactly how those conditions land: as pressure, as shortcuts and workarounds, and as quiet judgment calls made by people who had no say in the constraints.
When an operator hesitates and then keeps going, that decision did not start with them. It passed through you. And through the decisions you were handed. And through the ones you were not allowed to make.
That is the part safety language rarely admits. Frontline judgment is not independent. It is shaped, narrowed, and sometimes cornered by conditions managers are expected to normalize. So when something finally breaks and attention snaps to a missed step, a failure to stop, or a rule not followed, everyone is only seeing the last move in a chain that was already set in motion before the day began.
This is why post-incident conversations feel hollow. Not because anyone is lying, but because the story starts too late. It starts at the decision instead of at the conditions that made the decision inevitable.
Managers feel this because they are the last place where awareness exists before risk disappears again. You know when the operation is being held together by experience and luck. You know the difference between acceptable and merely tolerable. And you know that most days end safely not because the system is safe, but because people compensate.
When that compensation works, nothing is recorded. The system learns the wrong lesson—that the pressure is fine, that the risk was acceptable, that the design does not need to change. When it fails, the system looks for a decision to explain the outcome.
One of the most dangerous side effects of this system is the normalization of deviance. When pressure becomes constant and nothing bad happens, the abnormal starts to feel acceptable. Deferred maintenance becomes routine. Thin staffing becomes “how we operate.” Equipment running past its intended life becomes background noise. Each successful day under strain teaches that the risk is manageable, the margin sufficient, and yesterday’s workaround is today’s standard.
It isn’t recklessness; it’s adaptation. People adjust their behavior to match the system they’re actually operating in, not the one described in procedures. Over time, the gap between “how work is imagined” and “how work is done” widens, and the organization slowly drifts into danger without any single moment that feels like a clear violation. When something finally fails, the response treats the last deviation as the cause, rather than recognizing that the deviation had been normalized long before the day of the incident.
This is not a failure of care, but the result of a system that rewards uninterrupted production, penalizes visible disruption, and examines safety decisions primarily after harm has occurred.
Until organizations are willing to treat capital allocation, maintenance discipline, staffing levels, equipment lifecycle decisions, and schedule pressure as safety decisions rather than merely financial ones, this pattern will repeat. Managers will keep absorbing the uncertainty created elsewhere. Frontline workers will continue to carry risk they did not design.
And when something goes wrong, the story will still begin at the wrong place.
Safety does not fail because people do not care. It fails because systems ask people to absorb risk quietly so organizations can protect margins they are unwilling to expose, and then act surprised when that bargain finally comes due.
Once you see that clearly, ignorance is no longer an excuse. And silence becomes a choice. And leadership is measured by which choice you make next.
Post Author’s Note
This way of seeing didn’t start with safety. It comes from years of watching systems do exactly what they were built to do, and watching people absorb the consequences.
Bad systems hurt people. Not because anyone intends harm, but because incentives, constraints, and silence compound over time. I’ve applied this same lens across operations and leadership: follow what’s funded, follow what’s measured, follow what’s tolerated. Safety is simply the place where the cost of poor system design can no longer be abstracted away.