Building an IT Infrastructure Today vs. 10 Years Ago

rjupstate sends an article comparing how an IT infrastructure would be built today with one built a decade ago. "Easily the biggest (and most expensive) task was connecting all the facilities together. Most of the residential facilities had just a couple of PCs in the staff office and one PC for clients to use. Larger programs that shared office space also shared network resources and server space. There was, however, no connectivity between each site -- something my team resolved with a mix of solutions including site-to-site VPN. This made centralizing all other resources possible, and it was the foundation for every other project that we took on. While you could argue this is still a core need today, there's also a compelling argument that it isn't. The residential facilities had very modest computing needs -- entering case notes, maintaining log books, documenting medication adherence, and reviewing or updating treatment plans. It's easy to contemplate these tasks being accomplished completely from a smartphone or tablet rather than a desktop PC." How has your approach (or your IT department's approach) changed in the past ten years?
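
The connectivity project described above comes down to making every facility's gateway reachable from every other site over the VPN tunnels. As a minimal sketch of the kind of sanity check that follows such a rollout (the site names, gateway addresses, and Unix-style ping flags below are hypothetical assumptions, not details from the article), a short script can confirm that each tunnel actually passes traffic:

    #!/usr/bin/env python3
    """Quick reachability check across site-to-site VPN tunnels.

    Sketch only: the site names and gateway addresses are hypothetical
    placeholders, not taken from the article.
    """
    import subprocess

    # Hypothetical remote-site gateways reachable only over the VPN tunnels.
    SITE_GATEWAYS = {
        "residential-north": "10.10.1.1",
        "residential-south": "10.20.1.1",
        "main-office": "10.0.0.1",
    }

    def site_is_up(address: str) -> bool:
        """Return True if one ICMP echo gets a reply (Unix ping flags)."""
        result = subprocess.run(
            ["ping", "-c", "1", "-W", "2", address],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        )
        return result.returncode == 0

    if __name__ == "__main__":
        for site, gateway in SITE_GATEWAYS.items():
            print(f"{site:20s} {gateway:15s} {'up' if site_is_up(gateway) else 'DOWN'}")
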
  • by Anonymous Coward on Friday November 22, 2013 @12:46PM (#45492141)

    not much else has changed

  • by Archangel Michael ( 180766 ) on Friday November 22, 2013 @12:55PM (#45492267) Journal

    The cloud is fine and dandy until Microsoft Azure is unreachable for several hours ... again ...

    http://www.theregister.co.uk/2013/11/21/azure_blips_offline_again/ [theregister.co.uk]
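
As a minimal sketch of the kind of external availability probe that catches outages like the one linked above (the endpoint URL and polling interval are hypothetical placeholders; only the Python standard library is used):

    #!/usr/bin/env python3
    """Poll a cloud-hosted endpoint and log when it stops answering."""
    import time
    import urllib.error
    import urllib.request
    from datetime import datetime, timezone

    ENDPOINT = "https://status.example.com/health"  # hypothetical health URL
    INTERVAL_SECONDS = 60

    def endpoint_is_up(url: str, timeout: float = 5.0) -> bool:
        """Return True if the endpoint answers without an HTTP or network error."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status < 400
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        while True:
            stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
            print(f"{stamp} {'OK' if endpoint_is_up(ENDPOINT) else 'UNREACHABLE'}")
            time.sleep(INTERVAL_SECONDS)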

  • by UnknowingFool ( 672806 ) on Friday November 22, 2013 @01:14PM (#45492431)

    And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

    And when all you have is a hammer, everything looks like a nail. Seriously, I don't use a tablet for my work functions, but I do use a smartphone to get my email on the road. But I am not everybody, and different people have different needs; a laptop isn't the answer for everyone.

    That'd double an organization's spending on operating system licenses, because a Terminal Server CAL for Windows Server costs about as much as a retail copy of Windows for the client.

    First of all, who says that the organization requires Terminal Server to use a cloud-based system? Also, "browser-based" means that the solution can be OS-agnostic; SalesForce, for example. In fact, some people might have these things called Macs, or maybe *gasp* a Linux machine. Lastly, are you aware that companies can negotiate enterprise licensing with MS? Not every company pays full retail price for everything.
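
To illustrate the OS-agnostic point, here is a minimal sketch of the same browser-style HTTP call made from a script. The endpoint and token are hypothetical stand-ins for any vendor's REST API (not SalesForce's actual interface), and the code is identical on Windows, macOS, or Linux:

    #!/usr/bin/env python3
    """Fetch records from a hypothetical browser-based (HTTP) service."""
    import json
    import platform
    import urllib.request

    API_URL = "https://crm.example.com/api/v1/cases"  # hypothetical endpoint
    API_TOKEN = "changeme"                            # hypothetical credential

    request = urllib.request.Request(
        API_URL,
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Accept": "application/json",
        },
    )

    # The same call works regardless of the client operating system.
    print(f"Calling {API_URL} from {platform.system()}")
    with urllib.request.urlopen(request, timeout=10) as response:
        cases = json.load(response)  # assumes the service returns a JSON array
        print(f"Fetched {len(cases)} case records")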

  • by Anonymous Coward on Friday November 22, 2013 @01:24PM (#45492537)

    That's good, but reality is more like...

    Determine the deadline; if at all possible, don't consult anyone with experience building infrastructure.

    Force commitment to the deadline, preferably with hints of performance-review impact.

    Ensure purchasing compliance via your internal systems, which eat up at least 30% to 40% of the time left before the deadline.

    Leave the equipment locked in a storage room for a week, just to make sure. Or have an overworked department be responsible for "moving" it; that's about a week anyway.

    Put enormous amounts of pressure on the workers once the equipment arrives. Get your money's worth; make them sweat.

    When it's obvious they can't achieve a working solution in the 30% of the allotted time that remains (due to other blockers), slip the schedule by a month three days before the due date, because it isn't really needed until six months from now.

    That's how it is done today. No wonder people want to rush to the cloud.

  • Virtualization (Score:5, Insightful)

    by Jawnn ( 445279 ) on Friday November 22, 2013 @01:26PM (#45492557)
    For good or bad (and yes, there's some of both), virtualization is the single biggest change. It is central to our infrastructure. It drives many, if not most, of our other infrastructure design decisions. I could write paragraphs on the importance of integration and interoperability when it comes to (for example) storage or networking, but let it suffice to say that it is a markedly different landscape than that of 2003.
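
As a small illustration of how central the hypervisor layer has become, here is a minimal sketch that lists virtual machines and their state. It assumes the libvirt Python bindings and a local qemu:///system KVM host, which is an assumption on my part and not something the comment specifies:

    #!/usr/bin/env python3
    """List the domains on a libvirt/KVM host and whether they are running."""
    import libvirt  # pip install libvirt-python

    conn = libvirt.open("qemu:///system")  # adjust the URI for your hypervisor
    if conn is None:
        raise SystemExit("failed to connect to the hypervisor")

    try:
        for domain in conn.listAllDomains():
            state = "running" if domain.isActive() else "shut off"
            print(f"{domain.name():30s} {state}")
    finally:
        conn.close()
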
  • abstraction (Score:4, Insightful)

    by CAIMLAS ( 41445 ) on Friday November 22, 2013 @03:20PM (#45493701)

    The biggest difference in the past 10 years is that everything has been abstracted, and there's less time spent dealing with trivial, repetitive things for deployments and upkeep. We support far more users per administrator now than we did back then.

    No more click-click-click through installers on Windows, for instance (see the scripted-install sketch after this comment). No more janky bullshit to deal with from proprietary RAID controllers and lengthy offline resilvers. These things have been abstracted away in the name of efficiency and the build requirements of cloud/cluster/virtualization/hosting environments.

    We also have a lot more shit to take care of than we did a decade ago. Many of the same systems running 10 years ago are still running - except they've been upgraded and virtualized.

    Instead of many standalone systems, most (good) environments have had at least a modicum of proper capacity and scaling engineering. Equipment is more reliable, and as a result more complexity is tolerable: we have complex SAN systems and clustered virtualization systems on which many of these legacy applications sit, along with many others.

    This also makes our actual problems, such as those relating to performance, much more difficult to solve. There are fewer outright errors but more vague symptoms. We can't just be familiar with performance in a certain context; we have to know how the whole ecosystem will react when we change the timing on a single Ethernet device.

    Unfortunately, most people are neither broad nor deep enough to handle this kind of sysadmin work, so much of the 'hard work' gets done by support vendors. This is in no small part due to in-house IT staffing budgets being marginal compared to what they were a decade ago, with fewer people at lower overall skill levels. In many locations, chances are that the majority of the people doing the work today are the same ones who did it a decade ago, simply because of the burden of spinning up to the level required to get the work done. In other places, environments limp by simply on the strength of many cheap systems being thrown at a complex problem, overpowering it with processing and storage that was almost unheard of even five years ago.

    The most obnoxious thing that has NOT changed in the past decade is obscenely long boot times. Do I really still need to wait 20 minutes for a system to POST far enough to reach my bootloader? Really, IBM, REALLY?!
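
Picking up the scripted-install point above, here is a minimal sketch of replacing the click-click-click with an unattended MSI install. It relies on msiexec's standard quiet switches; the package path and log path are hypothetical placeholders:

    #!/usr/bin/env python3
    """Run a silent, repeatable install instead of clicking through a wizard."""
    import subprocess
    import sys

    MSI_PATH = r"C:\deploy\agent.msi"       # hypothetical package
    LOG_PATH = r"C:\deploy\logs\agent.log"  # hypothetical log file

    def silent_install(msi: str, log: str) -> int:
        """Run msiexec unattended and return its exit code (0 = success)."""
        command = [
            "msiexec", "/i", msi,
            "/qn",          # quiet, no user interface
            "/norestart",   # do not reboot automatically
            "/l*v", log,    # verbose log for troubleshooting
        ]
        return subprocess.run(command).returncode

    if __name__ == "__main__":
        sys.exit(silent_install(MSI_PATH, LOG_PATH))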

  • Re:Well... (Score:4, Insightful)

    by AK Marc ( 707885 ) on Friday November 22, 2013 @05:00PM (#45494917)
    Windows 2000 had all sorts of problems with hardware. Drivers lagged, so USB support was crap, and blue screens from plugging in a USB device weren't reserved just for press conferences. 2000 was good as long as all you did was Office. The marketing department all went back to Macs, where they had a variety of monitor sizes and commercial editing packages that Just Worked. Ah, making fun of my Slashdot number when you don't even have one. 2000 was "supposed to be" the first converged OS (95/NT), but it failed because it wasn't home-user friendly (and not just for games). XP managed it, and was really a service pack of 2000, but with a new OS name, pricing, and marketing.
