This is certainly true. Even if you do everything right--configure the firewall, update antivirus signatures, patch servers--one small mistake can mean an infected server and a full system restore. But can't IT always restore the system from tape? After all, we might assume, we've been backing up to tape for decades, so backup/restore processes and technology must be rock solid.
Bad assumption. Backup/restore remains far from a sure thing.
We recently surveyed more than 200 IT staffers, and the results are certainly cause for concern. For example, while most respondents are confident that their backups work, one-fourth said backups fail about 20 percent of the time. I hate to think that my bank, mortgage company or health insurance provider is among that group. Even when backup jobs were completed, 37 percent of respondents said they had no confidence that the backups were indeed successful. I guess these guys have been burned a few times in the past.
What's the problem? Simply put, backups and recoveries take too long, and the processes consume too many human resources. But as the 1980s rock group Talking Heads once said, it's the "same as it ever was."
Here's an even more unpleasant surprise: In the enterprise market, more than half of IT personnel either worry that critical data is exposed or know that it is. Additionally, almost one-third say this type of exposure could result in a significant revenue loss or adverse business effect.
Industry pundits talk about pie-in-the-sky IT concepts such as virtualization, utility computing and seamless application integration--yet we can't get backup/restore right. This is an unacceptable situation, but as always, the right mix of people, process and technology can help resolve things.
Employees are in the dark because business and IT managers fail to take the time to jointly map out a plan in advance. Often, the subject gets discussed only when an application goes down--and angry business managers declare that a four-hour system restore will alienate customers, grind productivity to a standstill and cost millions of dollars. Maybe it's me, but it's not easy to understand why departmental managers can't hammer these things out long before something goes bump in the night.
Meanwhile, the IT department would be best advised to get its collective head out of the sand. The backup problem partly exists because, in many cases, this critical task gets shunted over to the most junior person on staff. Chief information officers should begin by reviewing backup processes, administrator skills, technology, tools and metrics on an application-by-application basis. After this initial study, assess the problem; then design a solution, implement it, and test and measure the results. E-mail would be a great place to start: Our survey found that 61 percent of respondents believe e-mail is their most critical application, requiring 100 percent uptime. IT managers should scrutinize their processes soon, because a major system outage would cause grief from the boardroom to the shipping dock.
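The "measure it" step above is the one most shops skip. As an illustrative sketch only--the job-record format and field names here are assumptions, not anything from the survey--tracking two numbers per application goes a long way: how often backup jobs fail outright, and how many "successful" jobs were never confirmed by a test restore.

```python
# Hypothetical sketch: per-application backup metrics.
# BackupJob and its fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BackupJob:
    application: str
    succeeded: bool   # did the backup job complete?
    verified: bool    # was success confirmed by a test restore?

def failure_rate(jobs):
    """Fraction of jobs that failed outright."""
    if not jobs:
        return 0.0
    return sum(1 for j in jobs if not j.succeeded) / len(jobs)

def unverified_rate(jobs):
    """Fraction of completed jobs never confirmed by a test restore."""
    completed = [j for j in jobs if j.succeeded]
    if not completed:
        return 0.0
    return sum(1 for j in completed if not j.verified) / len(completed)
```

Run against a week of job records per application, numbers like these make the conversation with business managers concrete instead of anecdotal.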
This arena is dominated by start-ups, but users don't seem to mind. IT has experienced so much heartache in this sphere that it will gladly buy from start-ups today rather than wait around for the big guys to get their acts together.
This most basic IT responsibility is badly broken, and that puts everyone at risk. But the year is young, so there is time to put "Fix the problem" at the top of the 2004 priority list. If not, tune into CNET News.com, where you will certainly read about backup/restore-related business failures and compliance problems--and the liability suits sure to follow.