Anatomy of the VA's IT Meltdown

Lucas123 writes "According to a Computerworld story, a relatively simple breakdown in communications led to a day-long systems outage within the VA's medical centers. The ultimate result of the outage: the cancellation of a project to centralize IT systems at more than 150 medical facilities into four regional data processing centers. The shutdown 'left months of work to recover data to update the medical records of thousands of veterans. The procedural failure also exposed a common problem in IT transformation efforts: Fault lines appear when management reporting shifts from local to regional.'"
  • by corifornia2 ( 1158503 ) on Tuesday November 20, 2007 @02:03PM (#21423199)
    This is funny to me. I was hired by the VA in St. Petersburg, Florida a few years ago, when Windows 2003 first came out, to train all of the NT administrators on the migration to 2003. Of the 60 or so NT administrators, all but three were losing their titles and becoming helpdesk for their site and "physical hands" for the few remaining administrators.

    A lot of the admins were unhappy about that, as I would have been. I'm just curious whether the failure to complete the project had anything to do with the lack of respect for the older employees with NT experience, and with essentially downgrading them.

  • by FBodyJim ( 1136589 ) on Tuesday November 20, 2007 @03:13PM (#21424459) Homepage Journal
    I work for a company that uses the InterSystems Caché database, and I have to say that I imagine Caché is a large part of the problem. The amount of good documentation for Caché lies somewhere between very little and none, and my company has been on a nationwide search for people experienced with Caché; they, too, seem few and far between. Of course, I don't know that Caché really is a "worse" or "better" database than Oracle, SQL Server, or MySQL for that matter. What I do know is that when it comes to experience, common tasks, documentation, examples, and just getting things done, Caché lags far behind the others. Not to mention that universities are still teaching relational DB theory, not object DB theory, at least when I graduated from Rutgers a few short years ago.

    I suspect that, given the task of merging databases, even large databases, there are plenty of experienced and knowledgeable SQL Server, Oracle, and MySQL people out on Monster or some other job site who know how to get the job done efficiently and correctly, and who have done it a few times before. Based on our current and past searches for people capable of even easier tasks within Caché, there aren't many people out there with any Caché experience, never mind good people with Caché experience, and it's easy to fudge a task when you aren't given much good documentation, many examples, or much experience.

    In a past career, I worked for a healthcare company that used SQL Server for electronic medical records (EMRs), and the system worked rather well. There might have been better ways to design the database, stored procs, or application code; however, we never had a problem hiring good staff who understood the database design, SQL queries, and T-SQL/stored procs. As I said, I can't say the same about trying to hire good people who know and understand MUMPS ("M" the language, not the disease) or Caché ObjectScript, or who find the Caché tools easy and intuitive. Just my $.02, and I don't mean to start a DB debate; I'm just saying it might be time for the VA to move on from MUMPS/Caché to a more widely used and documented database and programming language, and find some new blood.
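To make the relational side of this concrete, here is a minimal sketch using Python's built-in sqlite3 module. The table name, columns, and data are hypothetical and purely illustrative; the point is that any SQL-literate hire can read and extend something like this, whereas the Caché equivalent would be ObjectScript traversing hierarchical globals, a model far fewer candidates have seen.

```python
# Minimal, hypothetical relational EMR example (illustrative schema only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE patient_visits (
           patient_id INTEGER,
           visit_date TEXT,
           diagnosis  TEXT
       )"""
)
conn.execute(
    "INSERT INTO patient_visits VALUES (?, ?, ?)",
    (1001, "2007-11-20", "hypertension"),
)

# Any SQL-experienced hire can read and extend this query; the Cache/MUMPS
# equivalent would walk a hierarchical global (e.g. ^VISITS(id, date)),
# which is the skill the parent comment says is hard to find.
for row in conn.execute(
    "SELECT visit_date, diagnosis FROM patient_visits WHERE patient_id = ?",
    (1001,),
):
    print(row)
```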
  • Re:It happens (Score:3, Interesting)

    by Critical Facilities ( 850111 ) on Tuesday November 20, 2007 @03:50PM (#21425187)

    "because some IT staffer changed a port # at one of their hub data centers without following proper procedure -- that's minor."

    I don't know if I agree with that. "Change Control" or "Change Management" is a crucial part of any Data Center. The fact that these ports were changed without being properly "run up the flagpole" is a glaring mistake with very unfortunate results. I'll bet anyone swapping ports in the future will ask permission several times over before trying it again.
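To make the "Change Control" point concrete, below is a minimal sketch, in Python, of the kind of gate being described. The ChangeRequest record and apply_port_change function are hypothetical, invented for illustration and not any real VA or vendor tooling; the idea is simply that nothing touches production gear unless an approved change record for that exact device exists.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeRequest:
    """Hypothetical change record; a real CMDB entry would carry far more."""
    change_id: str
    target_device: str
    description: str
    approved_by: Optional[str] = None  # stays None until a reviewer signs off

def apply_port_change(device: str, port: int, new_vlan: int,
                      request: ChangeRequest) -> None:
    # The gate: refuse to modify a device without an approved, matching record.
    if request.target_device != device or request.approved_by is None:
        raise PermissionError(
            f"{request.change_id} is not approved for {device}; "
            "run it up the flagpole first."
        )
    print(f"[{request.change_id}] {device}: port {port} -> VLAN {new_vlan}")

# An unapproved request is rejected, which is the whole point.
cr = ChangeRequest("CR-1142", "region4-core-sw1", "Move port 12 to VLAN 30")
try:
    apply_port_change("region4-core-sw1", 12, 30, cr)
except PermissionError as err:
    print(err)
```

In a real shop the record would live in a change-management system and the approval would be enforced by process as much as by code, but the failure mode described in the story is exactly what a gate like this is meant to stop.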
  • by Anonymous Coward on Wednesday November 21, 2007 @01:05AM (#21431461)
    And let us not forget: the Congress that pushed the IT reorganization on the DVA following the theft of the researcher's laptop is the same Congress that has not been able to give the DVA a budget on time, at the start of the fiscal year, for several years running. Running on a continuing resolution, where you have no idea of your budget, does not help either. For 25 years or more, DHCP/VistA was designed and built as a distributed system, and its redundancy prevented widespread disasters such as this. The headlong drive to combine data centers has now given us more single points of failure, combined with a complete dependency on a telecommunications network with myriad points of potential failure. I predict that this will not be the last such widespread failure.
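A back-of-the-envelope sketch of the redundancy argument above, in Python. The failure probability is an assumed, illustrative figure, not VA data: with independent site-level systems, a simultaneous outage of every facility is astronomically unlikely, while once 150-plus facilities depend on a shared regional center, one failure there takes them all down at once.

```python
# Illustrative only: p_fail is an assumption, not a measured figure.
facilities = 150
p_fail = 0.01  # hypothetical chance a given system is down on a given day

# Distributed: every facility's own system must fail at the same time.
p_all_down_distributed = p_fail ** facilities
# Centralized: one shared data-center failure is a system-wide outage.
p_all_down_centralized = p_fail

print(f"distributed, all {facilities} sites down at once: {p_all_down_distributed:.3e}")
print(f"centralized, shared data center down:            {p_all_down_centralized:.3e}")
```

Under these assumptions the expected downtime per facility is the same either way; what consolidation changes is that failures become correlated across facilities, which is exactly the single-point-of-failure concern raised here.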
