It was a lightning strike. When I talked to them about backups, it turned out they had omitted the crucial step of keeping a copy off the premises. They had never considered theft, fire, flood, or other issues. Unless a complete backup plan is made and adhered to, these stories will continue.
What's not funny is that these plans are well known to any decent IT staffer. Why the old plans get forgotten is worth a discussion of its own. The surest sign you don't have a good plan is the need to create one, or the urge to keep discussing it. After decades of this: "Are we done yet?"
I have a simple answer for why this happens: "You didn't have a backup."
I've been doing grief counseling over a major data loss this week.
The setup was an APC 1000XL UPS into which were plugged a Mac server and two external FireWire drives. This is a very small office (fewer than four people) with no resident technical expertise, so the emphasis was on keeping things as simple as possible.
The server's internal HD contained the OS; one external was configured as the primary shared volume, with Time Machine doing hourly incremental backups to the other. Yes, this is a Mac setup.
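For anyone running something similar, here's a rough sketch of the kind of backup-freshness check I'd bolt onto a setup like this. It assumes macOS's tmutil is on the PATH and that snapshot folders use the usual YYYY-MM-DD-HHMMSS naming; the two-hour threshold is my own arbitrary choice for hourly backups:

```python
#!/usr/bin/env python3
# Rough sketch: warn if the latest Time Machine backup looks stale.
# Assumes macOS's `tmutil` is available and that snapshot directories
# are named YYYY-MM-DD-HHMMSS (the Backups.backupdb convention).
import subprocess
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=2)  # hourly backups, so ~2h old means trouble

def latest_backup_age():
    # `tmutil latestbackup` prints the path of the newest completed snapshot
    path = subprocess.check_output(
        ["tmutil", "latestbackup"], text=True
    ).strip()
    stamp = path.rstrip("/").split("/")[-1]        # e.g. 2014-05-01-120000
    taken = datetime.strptime(stamp, "%Y-%m-%d-%H%M%S")
    return datetime.now() - taken

if __name__ == "__main__":
    age = latest_backup_age()
    if age > MAX_AGE:
        print(f"WARNING: last backup was {age} ago -- check the drives!")
    else:
        print(f"OK: last backup completed {age} ago")
```

A check like this would have flagged the problem within a couple of hours instead of whenever someone next went looking for a file.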
Remarkably, both external drives, the primary and the backup, failed at EXACTLY the same time. After banging my head against the wall over the astronomical odds of two separate drives in separate enclosures failing within an hour of each other, I sent the primary to a data recovery company. Their diagnosis was that the drive's circuit board had been blown by a power surge.
This would explain a lot: a voltage spike would have had the same effect on the other drive as well.
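For the curious, here's the back-of-envelope math that convinced me this wasn't a coincidence. It's a rough sketch: the ~3% annual failure rate is a typical published figure, not measured from these drives, and the independence assumption is exactly what the surge diagnosis overturns:

```python
# Back-of-envelope odds of two independent drives dying in the same hour.
# Assumptions (mine, not measured): ~3% annual failure rate per drive,
# failures independent and spread uniformly over the year.
HOURS_PER_YEAR = 365 * 24          # 8760
afr = 0.03                         # annual failure rate per drive
p_hour = afr / HOURS_PER_YEAR      # chance one drive fails in a given hour

# Chance both drives fail in the same one-hour window, per year:
p_both_same_hour = HOURS_PER_YEAR * p_hour ** 2
print(f"per-drive hourly failure chance:  {p_hour:.2e}")
print(f"both in the same hour (per year): {p_both_same_hour:.2e}")
# ~1e-7 per year -- a common cause like a surge is the far better explanation.
```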
But here's the thing: due diligence had everything plugged into a UPS that to all appearances was operating normally.
Has anyone ever heard of a power spike blowing through, or being generated by, a UPS?
The only other possibility would be something sent over the FireWire cable connecting the drives to the server, or (even more unlikely) an environmental effect like a motor load being too close to the drives.
Anyone seen anything like this before?