
IT Infrastructure As a House of Cards

snydeq writes "Deep End's Paul Venezia takes up a topic many IT pros face: 'When you've attached enough Band-Aids to the corpus that it's more bandage than not, isn't it time to start over?' The constant need to apply temporary fixes that end up becoming permanent is fast pushing many IT infrastructures beyond repair. Much of the blame falls on the products IT has to deal with. 'As processors have become faster and RAM cheaper, the software vendors have opted to dress up new versions in eye candy and limited-use features rather than concentrate on the foundation of the application. To their credit, code that was written to run on a Pentium II 300MHz CPU will fly on modern hardware, but that code was also written to interact with a completely different set of OS dependencies, problems, and libraries. Yes, it might function on modern hardware, but not without more than a few Band-Aids to attach it to modern operating systems,' Venezia writes. And yet breaking this 'vicious cycle of bad ideas and worse implementations' by wiping the slate clean is no easy task, especially when the need for kludges isn't apparent until the software is in the process of being implemented. 'Generally it's too late to change course at that point.'"

  • As a dev, what's the problem with a 24-port gigabit switch as the "core" in a medium-sized office? Aside from the fact that 10Gb is becoming popular (has become popular?) in the datacenter? Most desktops are only at the 1Gb level (and most users at below 100Mb), and most inbound internet pipes are much smaller. I don't understand the downside here.

    Can you elaborate?

  • by D4C5CE ( 578304 ) on Monday May 24, 2010 @06:40PM (#32329740)

    Don’t patch bad code – rewrite it.

    Kernighan & Plauger
    The Elements of Programming Style [wordpress.com]
    1st edition, 1974 (exemplified in FORTRAN and PL/I!)

  • by FooAtWFU ( 699187 ) on Monday May 24, 2010 @06:43PM (#32329770) Homepage

    "they simply fail to properly advise the units that are making decisions of the cost and consequence of such a short-sighted approach."

    In the defense of IT, those people they're trying to advise aren't always the best at taking advice. (But then again, neither are IT admins always the best at giving it.)

  • No redundancy is the biggest one. No real layer 3 switching is another.

  • by oatworm ( 969674 ) on Monday May 24, 2010 @06:58PM (#32329902) Homepage
    Ditto this - plus, in a medium-sized office, you're probably not getting 10x24Gb/sec out of your server infrastructure anyway. Your network is only as fast as the slowest component you rely upon; at 10Gb/sec, you're starting to bump into the limits of your hard drives, especially if you have more than a handful of people hitting the same RAID enclosure simultaneously.
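
    A rough back-of-the-envelope sketch of that bottleneck argument, in C; the 10Gb/s link, the 150MB/s-per-spindle rate, and the six-spindle RAID are assumptions for illustration, not measurements from any particular setup:

        #include <stdio.h>

        /* Back-of-the-envelope: does a 10 Gb/s link outrun the storage behind it?
         * Every figure here is an illustrative assumption, not a vendor spec. */
        int main(void)
        {
            const double link_gbps = 10.0;                     /* assumed uplink speed */
            const double link_MBps = link_gbps * 1000.0 / 8.0; /* ~1250 MB/s */
            const double disk_MBps = 150.0;                    /* assumed sequential MB/s per spindle */
            const int    spindles  = 6;                        /* assumed RAID members */
            const double raid_MBps = disk_MBps * spindles;     /* ignores parity and seek overhead */

            printf("Link capacity    : %4.0f MB/s\n", link_MBps);
            printf("RAID, sequential : %4.0f MB/s\n", raid_MBps);
            printf("Bottleneck       : %s\n", raid_MBps < link_MBps ? "storage" : "network");
            return 0;
        }

    Random access from several users at once would drag the storage number down even further, which is the point.
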
  • by JerkBoB ( 7130 ) on Monday May 24, 2010 @06:59PM (#32329914)

    As a dev, what's the problem with a 24-port gigabit switch as the "core" in a medium-sized office?

    If all you've got is 24 hosts (well, 23 and an uplink), then it's fine. I suspect that the reality he's alluding to is something more along the lines of multiple switches chained together off of the "core" switch. The problem is that lower-end switches don't have the fabric (interconnects between ports) to handle all those frames without introducing latency at best and dropped packets at worst. For giggles, hook up a $50 8-port "gigabit" switch to 8 gigabit NICs and try to run them all full tilt. Antics will ensue... The cheap switches have a shared fabric which doesn't have the bandwidth to handle traffic between all the ports simultaneously. True core switches are expensive because they have dedicated connections between all the ports (logically, if not physically... I'm no switch designer), so there's no fabric contention.
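
    A quick sketch of the arithmetic behind that, assuming a hypothetical 4Gb/s shared fabric for the cheap switch (no particular product's spec):

        #include <stdio.h>

        /* Oversubscription sketch: worst-case load offered by every port of a
         * small gigabit switch vs. an assumed shared-fabric capacity.
         * The fabric figure is hypothetical, not from any data sheet. */
        int main(void)
        {
            const int    ports        = 8;                 /* cheap 8-port "gigabit" switch */
            const double port_gbps    = 1.0;
            const double offered_gbps = ports * port_gbps; /* every port receiving at line rate */
            const double fabric_gbps  = 4.0;               /* assumed shared-fabric capacity */

            printf("Offered load : %.0f Gb/s\n", offered_gbps);
            printf("Fabric       : %.0f Gb/s\n", fabric_gbps);
            if (offered_gbps > fabric_gbps)
                printf("Oversubscribed %.1f:1 -- expect added latency, then drops\n",
                       offered_gbps / fabric_gbps);
            else
                printf("Non-blocking at this load\n");
            return 0;
        }

    A true non-blocking core switch sizes its fabric so every port can run at line rate simultaneously, which is where the price difference comes from.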

  • by mlts ( 1038732 ) * on Monday May 24, 2010 @07:01PM (#32329934)

    Isn't this taught to death in the ITIL 101 course every MBA must go through in order to get their certificate from an accredited college? It is sort of sad that the concepts taught there never hit the real world in a lot of organizations. Not all of them; I've seen some companies actually be proactive, but it is easy for firms to fall into the "we'll cross that bridge when we come to it" trap.

  • by seyfarth ( 323827 ) on Monday May 24, 2010 @07:04PM (#32329960) Homepage

    From the original message we read that the "code was also written to interact with a completely different set of OS dependencies, problems, and libraries." This seems to imply that IT organizations are allowing outside interests to dictate the rules of the game. If there were a stable set of operating system calls and libraries to rely on, then the software vendors would have an easier time maintaining software. I recognize that Linux changes, but the operating system calls work well and the API is quite stable. I have used UNIX for a long time, and I have compiled programs from 25 years ago under Linux. There have been some additions since then, but the basics of Linux work like the basics of UNIX from 25 years ago.

    At present there are some applications available only on Windows and some only on Windows or Mac OS X. This might be difficult to change, but going along with someone else's plan for computing that is based on continued obsolescence seems inappropriate. At the least, those who are more or less forced by software availability to use Windows should investigate Linux and negotiate with their vendors to supply Linux solutions.

    Computers are hard to manage and hard to program. It is not helpful to undergo regular major overhauls in operating systems.
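
    To illustrate the stability point, a minimal sketch that sticks to open/read/write/close, interfaces UNIX and Linux have carried for decades; the file name and usage are just an example:

        /* copyfile.c -- copy a file to standard output using only open/read/write/close,
         * system calls whose interfaces have been stable across UNIX and Linux for decades. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(int argc, char *argv[])
        {
            char buf[4096];
            ssize_t n;
            int fd;

            if (argc != 2) {
                fprintf(stderr, "usage: %s file\n", argv[0]);
                return 1;
            }
            fd = open(argv[1], O_RDONLY);
            if (fd < 0) {
                perror("open");
                return 1;
            }
            while ((n = read(fd, buf, sizeof buf)) > 0) {
                if (write(STDOUT_FILENO, buf, (size_t)n) < 0) {
                    perror("write");
                    close(fd);
                    return 1;
                }
            }
            close(fd);
            return 0;
        }

    Built with any C compiler (e.g. cc copyfile.c), it should behave the same on an old UNIX box and a current Linux distribution.
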

  • by Darkness404 ( 1287218 ) on Monday May 24, 2010 @07:20PM (#32330092)
    How is it good? It leaves the entire internet vulnerable. It pushes people not towards Linux but towards outdated versions of Windows, and more or less guarantees the future has 32-bit OSes.

    Look at what is keeping people from adopting Linux: small, niche programs.

    With outdated versions of Windows already online, can we afford to push even more people to old, closed OSes with no prospect of future patches?
  • by Grishnakh ( 216268 ) on Monday May 24, 2010 @07:21PM (#32330108)

    I don't think budget is a problem at all here. The problem as described by the article is with vendor-provided software being crufty and having all kinds of problems. The author even mentions that normal free-market mechanisms don't seem to work, because there's little or no competition: these are applications used by specific industries for very narrow purposes, and they frequently have no competition. In a case like this, it doesn't matter what your budget is; the business requirement is that you must use application X, and that's it. So IT has a mandate to support this app, but doing so is a problem because the app was apparently written for DOS or Windows 95 and has had very little updating since then.

    The author's proposed solution is for Microsoft to jettison all the backwards-compatibility crap. We Linux fans have been saying this for years, but everyone says we're unrealistic and that backwards compatibility is necessary for apps like this. Well, it looks like it's starting to bite people (IT departments in particular) in the ass.

  • Good luck (Score:3, Interesting)

    by PPH ( 736903 ) on Monday May 24, 2010 @09:07PM (#32330906)

    Been there, done that.

    If you've got even a small or medium-sized enterprise application (whatever that buzzword means) at a larger company (Boeing, for example), it might have its hooks into a dozen or more peer systems/hosts/databases/whatever. They are all 'owned' by different departments, installed and upgraded over many years, each on its own schedule and budget. When one group gets the funds together to address their legacy ball of duct tape and rubber bands, they roll the shiny new hardware in and install the spiffy new app. But everyone else is a few years away from affording new systems. And so the inter-system duct tape is simply re-wrapped.

    The IT department tried selling everyone on architecture standardization. But due to the gradual pace of system upgrading, the plan was out of date before everyone got caught up to the old one. And today's 'standard' architecture wouldn't play nicely with what was state of the art a few years ago (thanks Microsoft). The whole architecture standard ploy is a salesman's pitch to get management locked into their system. Unless you've got a small enough shop that you can change out everyone's desktop and the entire contents of the server room over a holiday weekend (another salesman's wet dream), it ain't gonna work.

    The solution is to bite the bullet and admit that your systems are always going to be duct-taped together. And then make a plan for maintaining the bits of duct tape. There's nothing wrong with some inter-system glue as long as you treat it with the same sort of respect and attention to detail that one would use for the individual applications.
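
    For what it's worth, a minimal sketch of what "maintaining the duct tape" could look like: a nightly hand-off between two systems, with a timestamped log and an exit-status check instead of a bare cron one-liner. The export command and paths are hypothetical placeholders:

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        /* Glue treated like a real program: log what happened and check the exit
         * status instead of assuming the transfer worked. The export command and
         * the paths below are hypothetical placeholders. */
        static void log_msg(const char *msg)
        {
            char stamp[32];
            time_t now = time(NULL);
            strftime(stamp, sizeof stamp, "%Y-%m-%d %H:%M:%S", localtime(&now));
            fprintf(stderr, "[%s] %s\n", stamp, msg);
        }

        int main(void)
        {
            log_msg("starting nightly order export");
            int rc = system("/usr/local/bin/export_orders --out /var/spool/glue/orders.csv");
            if (rc != 0) {
                log_msg("export failed; keeping yesterday's file and flagging for review");
                return 1;
            }
            log_msg("export finished");
            return 0;
        }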

  • by TheRaven64 ( 641858 ) on Tuesday May 25, 2010 @07:10AM (#32333918) Journal

    The correct strategy is to get someone whose job it is to take bribes. You pay them a small salary, which they can add to with any extra gifts that they receive. You tell all of the sales reps that they are the ones with the final purchasing authority. Then you let someone competent actually make the decision.

    Alternatively, you can use a downwards delegation strategy, where the people at the top have to justify purchasing decisions to the tier below them (recursively). You're free to take as many kick-backs as you want, as long as the people implementing your decision agree with it.
