Wednesday, March 11, 2009

Top 10 Data Recovery Bloopers

Truth, as the saying goes, is stranger than fiction. The following horror stories are true. The identities of those involved have been omitted, because what happened to them could happen to anyone. 

1) It's the Simple Things That Matter
The client, a successful business organization, purchased a "killer" UNIX network system and put 300+ workers in place to manage it. Backups were done daily. Unfortunately, no one thought to provide a system to restore the data to.

2) In a Crisis, People Do Silly Things
The primary server in a large urban hospital's system crashed. When minor errors started occurring, system operators, instead of gathering data about the errors, tried anything and everything, including repeatedly invoking a controller function that erased the data on the entire RAID array.

3) When the Crisis Deepens, People Do Sillier Things
When the office of a civil engineering firm was devastated by floods, its owners sent 17 soaked disks from three RAID arrays, packed in plastic bags, to a data recovery lab. For some reason, someone had frozen the bags before shipping them. As the disks thawed, even more damage was done.

4) Buy Cheap, Pay Dearly
The organization bought an IBM system - but not from IBM. Then the system manager decided to configure it in a unique, one-off way rather than following set procedures. When things went wrong, it was next to impossible to recreate the configuration.

5) An Almost Perfect Plan
The company purchased and configured a high-end, expensive, full-featured library for its system backups. Unfortunately, the backup library was placed right beside the primary system. When the primary system got fried, so too did the backup library.

6) The Truth, and Nothing But the Truth
After a data loss crisis, the company's CEO and an IT staffer met with the data recovery team. No progress was made until the CEO was persuaded to leave the room. Then the IT staffer opened up, and solutions were developed.

7) Lights Are On, But No One's Home
A region-wide ambulance monitoring system suffered a serious disk failure, and only then did its operators discover that the automated backup hadn't run for fourteen months. A tape had jammed in the drive, but no one had noticed.

8) When Worlds Collide
The company's high-level IT executives purchased a "Cadillac" system without knowing much about it. System implementation was left to a young, inexperienced IT team. When the crisis came, neither group could talk to the other about the system.

9) Hit Restore and All Will Be Well
After the September 11 attacks on the World Trade Center, the company's IT staff went across town to their backup system. They invoked Restore and proceeded to overwrite the backups from the destroyed main system. Of course, all previous backups were lost.

10) People Are the Problem, Not Technology
Disk drives today are typically reliable - human beings aren't. A recent study found that approximately 15 percent of all unplanned downtime occurs because of human error. 

