When was the last restore test?

Cold open

The backup conversation always sounds reassuring until someone asks the question that turns reassurance into paperwork: "Have we tested restore recently?" A pause follows. It is the kind of pause that suggests the backups are emotionally important but operationally unverified.

Everyone wants to believe recovery is possible. Very few organizations enjoy proving it before an incident forces the matter.

HR-Z0 case note: a backup not restored is only a hopeful file.

The horror

Untested restore processes breed a dangerous form of confidence.

Symptoms

The symptoms are always recognizable:

  • backups exist, but recovery time is unknown
  • admins assume restore will work because it should
  • downtime risk is underestimated
  • runbooks are incomplete or stale
  • incidents reveal gaps at the worst possible moment

Backups are only half the story. The other half is whether the business can restore what matters, fast enough, with enough clarity, under pressure.
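The "fast enough, with enough clarity" part is measurable. A minimal sketch of a restore drill, assuming a tar-based backup for illustration: restore into a scratch directory, checksum what came back, and time the whole thing against a target RTO. Paths, the backup format, and the report shape are all assumptions, not any particular product's behavior.

```python
# Minimal restore-drill sketch (illustrative): restore a backup into a
# scratch directory, record per-file checksums as evidence, and time
# the restore against a target recovery-time objective (RTO).
import hashlib
import tarfile
import tempfile
import time
from pathlib import Path


def restore_drill(backup: Path, rto_seconds: float) -> dict:
    """Restore `backup` to a scratch location and return drill evidence."""
    start = time.monotonic()
    scratch = Path(tempfile.mkdtemp(prefix="restore-drill-"))
    with tarfile.open(backup) as tar:
        tar.extractall(scratch)  # real drills restore to isolated storage
    elapsed = time.monotonic() - start

    files = sorted(p for p in scratch.rglob("*") if p.is_file())
    return {
        "restored_files": len(files),
        "checksums": {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
                      for p in files},
        "elapsed_s": round(elapsed, 2),
        "within_rto": elapsed <= rto_seconds,
    }
```

The point of returning a report rather than a boolean is that a drill should leave evidence behind: what was restored, how it was verified, and how long it took.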

Cost

The cost is not abstract.

  • Time: senior staff lose days to emergency restores, data reconstruction, and incident retros that routine drills should have made unnecessary.
  • Money: emergency response, audit remediation, and avoidable downtime are the most expensive way to run recovery.
  • Trust: once recovery looks like guesswork, leadership assumes every resilience claim is optimistic, including the important ones.

The root cause

The failed restore is the symptom. Exception culture is the disease.

1

Backup posture is confused with recovery posture

Having copies is not the same as being able to restore them in a credible timeframe.

2

Restore drills are avoided

They are inconvenient, cross-functional, and mildly stressful. Which is precisely why they matter.

3

Ownership is vague

If nobody owns recovery readiness end to end, everyone assumes the platform vendor, IT lead, or future version of themselves will handle it.

4

Exceptions became policy through operational inertia

One skipped drill becomes a postponed one, then a tradition. "We'll test next quarter" hardens into the de facto recovery policy without anyone ever deciding it should.

The fix

The fix is not a security memo. The fix is enforced baseline behavior that survives turnover.

1

NorthStar maps recovery expectations against reality

NorthStar identifies which systems and information matter most, what the business assumes about restore, and where the current process lacks evidence.

2

Oort turns backup confidence into recovery discipline

Oort strengthens recovery posture with:

  • restore testing cadence
  • clearer runbooks
  • ownership of backup and recovery checks
  • priority mapping for critical systems and data

The aim is not to become dramatic about disaster. It is to stop being casual about recovery.

3

Oort turns baseline controls into continuous operations

We automate access reviews, exception expiry, backup/restore verification, and sharing enforcement so security does not depend on heroic memory.
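Exception expiry is the simplest of these to automate: give every exception a mandatory end date at creation time, then let a scheduled sweep flag anything past it. A minimal sketch; the record shape and field names are assumptions for illustration.

```python
# Hedged sketch of exception expiry: every exception carries a
# mandatory end date, and a scheduled sweep returns the ids that
# should be revoked today instead of relying on someone's memory.
from datetime import date

exceptions = [
    {"id": "EX-101", "reason": "skip restore drill during migration",
     "expires": date(2024, 3, 1)},
    {"id": "EX-102", "reason": "temporary admin on backup console",
     "expires": date(2024, 1, 15)},
]


def expired(records: list[dict], today: date) -> list[str]:
    """Return the ids of exceptions whose end date has passed."""
    return [r["id"] for r in records if today >= r["expires"]]
```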

Hope is not a recovery strategy, even when it is very sincerely documented.

HR-Z0
Comms Officer

Comms Officer HR-Z0 (a.k.a. “H.R. Zero”) is Galaxie’s deadpan broadcast voice for the Office Horror Stories series — part dispatcher, part incident historian, part morale damage control.
Built from equal parts helpdesk transcripts, post-mortems, and calendar trauma, HR-Z0 doesn’t “tell stories.” It files reports from the front lines of messy operations — where ownership evaporates, folders time-travel, and a “quick change” becomes a six-month saga.
