Building an IT Infrastructure Today vs. 10 Years Ago

rjupstate sends an article comparing how an IT infrastructure would be built today with how one was built a decade ago. "Easily the biggest (and most expensive) task was connecting all the facilities together. Most of the residential facilities had just a couple of PCs in the staff office and one PC for clients to use. Larger programs that shared office space also shared network resources and server space. There was, however, no connectivity between each site -- something my team resolved with a mix of solutions including site-to-site VPN. This made centralizing all other resources possible, and it was the foundation for every other project that we took on. While you could argue this is still a core need today, there's also a compelling argument that it isn't. The residential facilities had very modest computing needs -- entering case notes, maintaining log books, documenting medication adherence, and reviewing or updating treatment plans. It's easy to contemplate these tasks being accomplished completely from a smartphone or tablet rather than a desktop PC." How has your approach (or your IT department's approach) changed in the past ten years?
This discussion has been archived. No new comments can be posted.


  • by dyingtolive ( 1393037 ) <brad,arnett&notforhire,org> on Friday November 22, 2013 @11:41AM (#45492075)
    You just put it all in the cloud brah. My boss assured me it'd be okay and he got his MBA from
  • by Anonymous Coward

    not much else has changed

  • Most enterprises rely upon one or more software packages from a vendor, often for critical functions. You can only do what your vendor's software allows. Not everything is tablet friendly or cloud happy.

  • I believe the HIPAA rules came into effect about 10 years ago. So aside from all the advances in "the cloud", I'd ask whether that would be secure enough -- and I mean ask not just a bunch of Slashdotters. Ask the potential cloud providers if they are HIPAA compliant and can provide documentation to that effect.

    Use GMail for transferring medical records and I'll guarantee you'll be swamped with ads for everything from Vi@gr@ to funeral services.

    • If only there were some way to look this up:

      • http://aws.amazon.com/compliance/#hipaa
      • https://support.google.com/a/answer/3407054?hl=en
    • VMware's new cloud is signing BAAs (Business Associate Agreements) with its customers to ensure HIPAA compliance.
      press release [vmware.com]

      How HIPAA works [hhs.gov]
  • The residential facilities had very modest computing needs -- entering case notes, maintaining log books, documenting medication adherence, and reviewing or updating treatment plans. It's easy to contemplate these tasks being accomplished completely from a smartphone or tablet rather than a desktop PC.

    And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

    In addition, some cloud solutions make dedicated desktop application suites or specific configurations unnecessary today. Browser-based options or virtual desktops have added appeal in health organizations because data is less likely to be stored locally on a device.

    That'd double an organization's spending on operating system licenses because a Terminal Server CAL for Windows Server costs about as much as a retail copy of Windows for the client.
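
    To see why that roughly doubles the licensing line item, here is a minimal Python sketch. The prices are hypothetical placeholders, not actual Microsoft list prices, and it assumes one Terminal Server/RDS CAL per client device.

      # Hypothetical prices; the only assumption that matters is that a
      # Terminal Server CAL costs about as much as a client copy of Windows.
      CLIENT_OS_LICENSE = 200     # retail Windows copy per client device (placeholder)
      TERMINAL_SERVER_CAL = 200   # roughly equal, per the comment above (placeholder)

      clients = 100
      desktops_only = clients * CLIENT_OS_LICENSE
      with_terminal_server = clients * (CLIENT_OS_LICENSE + TERMINAL_SERVER_CAL)

      print(desktops_only)                         # 20000
      print(with_terminal_server)                  # 40000
      print(with_terminal_server / desktops_only)  # 2.0 -- the spend doubles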

    • Re: (Score:2, Insightful)

      And by the time you've paired an external keyboard in order to key in all that stuff, you might as well just use a laptop PC.

      And when all you have is a hammer, everything looks like a nail. Seriously, I don't use a tablet for my work functions, but I do use a smartphone to get my emails on the road. But I am not everybody; different people have different needs. A laptop isn't the answer for everyone.

      That'd double an organization's spending on operating system licenses because a Terminal Server CAL for Windows Server costs about as much as a retail copy of Windows for the client

      First of all, who says that the organization requires Terminal Server to use a cloud based system? Also, "browser-based" means that the solution can be OS agnostic. For example, SalesForce. In fact, some people might have these things

  • Not much difference, really. We're using the same OS. We're using the same hardware, usually, and whatever we need to purchase is absurdly cheap (cheaper than it was 10 years ago). We rely on the Internet as much as we did then: it's important, but not mission-critical (because it's unreliable). Our industry-specific applications still suck. Networking is identical, but a bit faster.
    • by mlts ( 1038732 ) * on Friday November 22, 2013 @12:47PM (#45492773)

      Around 2003, Sarbanes-Oxley kicked in, forcing companies to buy SANs just to store e-mail for long-term archiving.

      For the most part, things have been fairly static, except with new buzzwords and somewhat new concepts. A few things that have changed:

      1: Converged SAN fabric. Rather than have an FC switch and a network switch, people are moving to FCoE or just going back to tried-and-true iSCSI, which doesn't require one to fuss around with zoning and such.

      2: Deduplication. We had VMs in '03, but now whole infrastructures are virtualized, so keeping a single copy of a base disk image and storing only the diffs for the other machines saves a lot of space. (A rough sketch of the idea appears at the end of this comment.)

      3: RAID 6 becomes necessary. Disk capacity has grown much faster than I/O throughput, so rebuilding a blown disk takes a long time; RAID 6 is a must so the array can survive another failure while a degraded volume rebuilds.

      4: People stop using tape and go with replication and more piles of hard disks for archiving. Loosely coupled SAN storage in a hot recovery center becomes a common practice to ensure SAN data is backed up... or at least accessible.

      5: VMs use SAN snapshots for virus scanning. A rootkit can hide in memory, but any footprints on the disk will be found by the SAN controller running AV software and can be automatically rolled back.

      6: We went from e-mailed Trojans, macro viruses, and attacks on firewalls and unprotected machines to the Web browser being the main point of attack for malware intrusion. It has been stated on /. that ad servers have become instrumental in widespread infections.

      7: The average desktop computer finally has separate user/admin access contexts. Before Vista, the typical Windows user ran with admin rights all the time, allowing something to pwn a box quite easily.

      8: The OS now has additional safeguards in place, be it SELinux, Windows' low-integrity tokens, or otherwise. This way, something taking over a Web browser may not be able to seize a user's access context as easily.

      9: BYOD has become an issue. Ten years ago, people fawned over RAZR-type devices, and an IT person had a Bat Belt of devices: the digital camera, MP3 player, PDA, pager, cellphone, and the Blackberry for messaging. Around '05, Windows Mobile merged all of this into one device, and '07 brought us the iPhone, which made the masses desire one device, not a belt full.

      10: Tablets went from niche embedded devices to mainstream media-consumption items sitting alongside (or replacing) desktops.

      11: Music piracy was rampant, so one threat was people adding unexpected "functionality" to DMZ servers by having them run P2P services (AudioGalaxy, eMule, etc.).

      12: We did not have to have a Windows activation infrastructure and fabric in place, where machines had to have some internal access to a KMS box to keep running. XP and Windows Server 2003 had volume editions which once handed a key would update and were happy for good.

      13: UNIX sendmail was often used for mail before virtually everyone switched over wholesale to Exchange.

      14: Hard disk encryption was fairly rare. You had to find a utility like SafeBoot or use loopback-encrypted partitions on the Linux side for data protection. This was after the NGSCB/Palladium fiasco, so TPM chips were not mainstream.

      15: One still bought discrete hardware for hosts, because VMs were around for devs but hadn't really earned their bones in production. So you would see plenty of 2-3U servers with SCSI drives in them for drive arrays.

      Things that have stayed the same, ironically enough:

      1: Bandwidth on the WAN. The big changes came and went with the initial offerings of cable and DSL. After that, bandwidth costs have pretty much not changed, except for more fees being added.

      2: Physical security. Other than the HID card and maybe the guard at the desk, data center physical security has not changed much. Some places might offer a fingerprint or iris scanner, but nothing new there that wasn't around in 2003. Only major di
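
      To make the deduplication point (item 2 in the first list above) concrete, here is a minimal, hypothetical Python sketch of fixed-size block deduplication. Real SAN and backup dedup engines use variable-length chunking, fingerprint indexes, and compression, but the space saving comes from the same idea: identical blocks are stored once and merely referenced elsewhere.

        import hashlib

        BLOCK_SIZE = 4096  # bytes; real systems typically chunk at 4-128 KiB

        def dedup_store(paths):
            """Store files as lists of block hashes; identical blocks are kept once."""
            blocks = {}     # sha256 digest -> the single stored copy of that block
            manifests = {}  # path -> ordered digests, enough to rebuild the file
            for path in paths:
                digests = []
                with open(path, "rb") as f:
                    while chunk := f.read(BLOCK_SIZE):
                        digest = hashlib.sha256(chunk).hexdigest()
                        blocks.setdefault(digest, chunk)  # duplicate blocks stored once
                        digests.append(digest)
                manifests[path] = digests
            return blocks, manifests

        # Two nearly identical VM images share most blocks, so the stored size is
        # roughly one full image plus the diffs rather than two full images.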

      • by CAIMLAS ( 41445 )

        Another big difference which relates to the list you mentioned: almost nobody runs their own in-house mail anymore. It's too expensive (in time and experience, mostly) to maintain efficiently and effectively, in no small part due to spam. Even larger organizations have decided it's not worth the headache.

        If there is in-house hosting of mail, it's due to complex requirements and the headache that migration would be to another system. Many of these have also put in place either Google or Microsoft frontend fi

        • almost nobody runs their own in-house mail anymore.

          My experience is different from yours. I work for an IT service consultancy and we're trying to push a lot of customers to cloud-based email, but they're all sticking to their guns. No one around here likes the cloud for key business functions, and the NSA press is keeping them firmly entrenched in their views. For most companies (fewer than 1000 users) Exchange is trivial to set up and maintain, and can be supported part-time or by outsourced support. Over 1000 users, you have a big enough IT team to loo

          • by mlts ( 1038732 ) *

            My experience mirrors yours.

            Even the PHBs want to keep company e-mail in-house, for fear that a provider could later use against them the communications that SOX rules require to be stored for 7 years.

            Some places I've seen keep their top brass on an in-house Exchange system, while lower levels might end up on Azure or a cloud provider.

            Exchange is pretty easy to get up and running, especially if AD is in place. It has decent anti-spam filters that you can turn on out of the box for the edge server,

          • One minor problem, Exchange requires Microsoft...

      • by cusco ( 717999 )

        I work in physical security, so will mention some changes that your site may not have implemented but which many larger sites have.

        1) Granularity of access - Formerly, if you had an access card it got you into the data center, and from there you had free rein. Today the data center is (or should be) compartmentalized, with access to each area dependent on need.

        2) Rack Access - There are now several brands of hardware that control technicians' access to individual racks, including front and/or rear rack door.

        3

  • by account_deleted ( 4530225 ) on Friday November 22, 2013 @12:08PM (#45492373)
    Comment removed based on user account deletion
    • by Anonymous Coward on Friday November 22, 2013 @12:24PM (#45492537)

      That's good, but reality is more like...

      Determine the deadline; if at all possible, don't consult anyone with experience building infrastructure.

      Force committal to the deadline, preferably with hints of performance review impact.

      Ensure purchasing compliance via your internal systems, which eat up at least 30% to 40% of the remaining schedule.

      Leave the equipment locked in a storage room for a week, just to make sure. Or, have an overworked department be responsible for "moving" it, that's about a week anyway.

      Put enormous amounts of pressure on the workers once the equipment arrives. Get your money's worth; make them sweat.

      When it's obvious they can't achieve a working solution in the 30% of the allotted time that's left (due to other blockers), slip the schedule a month three days before the due date -- because it isn't really needed until six months from now anyway.

      That's how it is done today. No wonder people want to rush to the cloud.

  • by mjwalshe ( 1680392 ) on Friday November 22, 2013 @12:12PM (#45492411)
    You'd be doing what we do now, except maybe for some types of networks that use a leaf-and-spine rather than a tree design.
  • by Chrisje ( 471362 ) on Friday November 22, 2013 @12:17PM (#45492459)

    We've consolidated all office application servers into 5 data centers, one per continent. Then we've rolled out end-point backup, including legal hold capabilities, for some 80,000 laptops in the field and some 150,000 more PCs in offices across the world. Each country in which we're active has a number of mobile device options for telephony, most of them Android and Win8 based nowadays since WebOS got killed.

    Then we're in the process of building a European infrastructure where we have data centers for managed customer environments in every major market in Europe. I am currently not aware of what's going on in APJ or South America. This is important in Europe however, because managed European customers don't want to see their data end up in the States, and the same goes for those that use our cloud offerings.

    Physical local IT staff presence in all countries has been minimized to a skeleton crew, not only because of data center consolidation but also because of the formation of a global IT helpdesk in low-cost countries and the rise of self-service portals.

    The plethora of databases we had internally has been archived using Application Information Optimizer for structured data archiving. We are our own biggest reference customer in this regard. On top of that, we've beefed up our VPN access portals across the world to accommodate road warriors logging in from diverse locations.

    Lastly, we use our own Records Management software suite to generate 8,000,000 unique records per day. These are archived for a particular retention period (7 years, I believe) for auditing purposes.

    • by cusco ( 717999 )

      In the field of physical security, I've seen customers with 10 independent access control systems scattered around their various facilities condense into a single centralized and monitored system. Access control system panels used to be connected serially to a "server" which was a cast-off desktop PC shoved under a janitor's desk, but now are actual servers in server rooms, monitored and backed up by IT staff, communicating with panels that might be on the other side of the planet.

      Security video was analog

  • Virtualization (Score:5, Insightful)

    by Jawnn ( 445279 ) on Friday November 22, 2013 @12:26PM (#45492557)
    For good or bad (and yes, there's some of both), virtualization is the single biggest change. It is central to our infrastructure. It drives many, if not most, of our other infrastructure design decisions. I could write paragraphs on the importance of integration and interoperability when it comes to (for example) storage or networking, but let it suffice to say that it is a markedly different landscape than that of 2003.
    • The evolution of package management and group policy has made my job much easier. I don't miss the days of going up and down the rows of desks, popping disks into boxes.
    • Amen to this. I'd say it's the single most important change for network admins in the past 15 years. Our server farm went from a 7-foot stack of pizza boxes with disparate hardware and OSs, which we were paying oodles to have parked in someone else's server farm, to one public VM host in the cloud and one private VM host running on my boss's desktop.

  • by Anonymous Coward

    Virtualization and Backups: These go hand in hand. Virtualize then back up a server; if the hardware implodes, run it on a toaster oven. This lets people be more promiscuous with consumer-grade hardware for three-nines applications, and thus deploy more stuff, provided the software licensing expense is not full-on insane.

    PC Miniaturization: Where you used to buy a purpose-built box, you can now buy a PC to do the same thing, e.g. PBX, video conferencing, security cameras, access card system, et

    • by AK Marc ( 707885 )

      Remember migrating off of win98?

      Nope. IT departments were migrating from NT 3.51 to 2000. Home users were migrating from 98 to whatever you are implying (98SE being next, ME after that, and many waiting for XP). The move from 2000 to XP was easy. XP is what 2000 was supposed to be, so the fundamental differences between 2000 and XP were small, the real difference was that XP worked.

      • by Anonymous Coward

        The only big difference was XP allowed DirectX 9. Windows 2000 always worked. Windows XP IS Windows 2000. You are too young to have fully experienced Windows 2000, Mr. 707885. Oh wait, you experienced it, but because it didn't run your games you poo-pooed it. For getting shit done, Windows 2000 is closer to Windows 7 than Windows XP will ever be.

        • Re:Well... (Score:4, Insightful)

          by AK Marc ( 707885 ) on Friday November 22, 2013 @04:00PM (#45494917)
          2000 had all sorts of problems with hardware. Drivers lagged, so USB support was crap. Blue screens for plugging in a USB device weren't just saved for press conferences. 2000 was good so long as all you did was Office. The marketing department all went back to Macs, where they had a variety of monitor sizes and commercial editing packages that Just Worked. Ah, making fun of my Slashdot number, when you don't even have one. 2000 was "supposed to be" the first converged OS (95/NT), but failed because it wasn't home-user friendly (not just games). XP managed it, and was really an SP of 2000, but with a new OS name, pricing, and marketing.
    • by cusco ( 717999 )

      Hardware is a frack of a lot more stable now too. When was the last time you had a video card or a NIC flake out? In a 900-desktop environment that used to be a daily occurrence.

    • Scaling is now stable. Setting up 100 PCs with Windows 2000 is nothing like doing it with Windows 7.

      Setting up 100 PCs with Windows 2000 was extremely easy. Windows 7 has become much harder because you can't edit the default user registry hive without Windows 7 freaking out. Microsoft still needs a good counterpart to /etc/skel/.
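
      For what it's worth, the commonly cited workaround for the default-hive problem above is to mount C:\Users\Default\NTUSER.DAT under a temporary key with reg.exe, edit it, and unmount it. The Python sketch below (run elevated) is only an illustration under those assumptions, not a supported recipe; the example key name is hypothetical, and whether Windows 7 stays happy afterwards is exactly the pain point being described, so test on a throwaway image.

        import subprocess

        HIVE = r"C:\Users\Default\NTUSER.DAT"   # default profile hive (assumed path)
        TEMP_KEY = r"HKU\TempDefault"           # temporary mount point for the hive

        def run(*args):
            subprocess.run(args, check=True)

        run("reg", "load", TEMP_KEY, HIVE)      # mount the hive
        try:
            # Example edit: a value every newly created account will inherit.
            run("reg", "add", TEMP_KEY + r"\Software\ExampleCorp",  # hypothetical key
                "/v", "FirstRunDone", "/t", "REG_DWORD", "/d", "1", "/f")
        finally:
            run("reg", "unload", TEMP_KEY)      # always unmount, or the hive stays locked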

  • We have divisions world-wide, but our Corporate/HQ division is located in America and consists of roughly 500 employees. At home, we have three facilities at different locations.

    - The entire computer system is virtualized through VMware using VDIs with help from VMware View, and hosted at a major [unnamed] datacenter in Texas on a private network setup for our Company. We also have an identical setup at an Asian datacenter under the same provider, and both datacenters are linked together through VPN from th

    • >The network infrastructure is setup as a Class C 172.x.x.x

      You mean Class B, or specifically the 172.16/12 private network. It may be further subnetted via CIDR, but only having 256 IPs (Class C) doesn't work well in most enterprise settings.
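
      A quick way to sanity-check that, using Python's standard ipaddress module: 172.16.0.0/12 is the RFC 1918 private range in question, and CIDR lets you carve it into whatever subnets you need (the specific addresses below are just illustrative).

        import ipaddress

        private_block = ipaddress.ip_network("172.16.0.0/12")  # RFC 1918 range

        print(ipaddress.ip_address("172.20.1.15") in private_block)  # True
        print(ipaddress.ip_address("172.32.0.1") in private_block)   # False, outside the /12

        # Carving the block into /24s (256 addresses each) via CIDR:
        subnets = list(private_block.subnets(new_prefix=24))
        print(len(subnets))   # 4096 possible /24 subnets
        print(subnets[0])     # 172.16.0.0/24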

  • abstraction (Score:4, Insightful)

    by CAIMLAS ( 41445 ) on Friday November 22, 2013 @02:20PM (#45493701)

    The biggest difference in the past 10 years is that everything has been abstracted and there's less time spent dealing with trivial, repetitive things for deployments and upkeep. We support vastly more users per administrator now than we did back then.

    No more clickclickclick for various installations on Windows, for instance. No more janky bullshit to have to deal with for proprietary RAID controllers and lengthy offline resilvers. These things have been abstracted in the name of efficiency and the build requirements of cloud/cluster/virtualization/hosting environments.

    We also have a lot more shit to take care of than we did a decade ago. Many of the same systems running 10 years ago are still running - except they've been upgraded and virtualized.

    Instead of many standalone systems, most (good) environments at least have a modicum of proper capacity and scaling engineering that's taken place. Equipment is more reliable, and as such, there's more acceptable cyclomatic complexity allowed: we have complex SAN systems and clustered virtualization systems on which many of these legacy applications sit, as well as many others.

    This also makes our actual problems much more difficult to solve, such as those relating to performance. There are fewer errors but more vague symptoms. We can't just be familiar with performance in a certain context; we have to know how the whole ecosystem will interact when changing the timing on a single Ethernet device.

    Unfortunately, most people are neither broad nor deep enough to handle this kind of sysadmin work, so much of the 'hard work' gets done by support vendors. This is in no small part due to in-house IT staffing budgets being marginal compared to what they were a decade ago, with fewer people at lower overall skill levels. Chances are that, in many locations, the majority of the people doing the work today are the same ones who did it a decade ago, simply due to the burden of spinning up to the level required to get the work done. In other places, environments limp by simply by virtue of many cheap systems being thrown at a complex problem, overpowering it with processing and storage that was almost unheard of even 5 years ago.

    The most obnoxious thing which has NOT changed in the past decade is obscenely long boot times. Do I really need to wait 20 minutes still for a system to POST sufficiently to get to my bootloader? Really, IBM, REALLY?!

    • Instead of many standalone systems, most (good) environments at least have a modicum of proper capacity and scaling engineering that's taken place.

      Except that has nothing to do with what year it is.

    • by jon3k ( 691256 )

      The most obnoxious thing which has NOT changed in the past decade is obscenely long boot times. Do I really need to wait 20 minutes still for a system to POST sufficiently to get to my bootloader? Really, IBM, REALLY?!

      With virtualization it's very rare for me to have to reboot a physical host, and guests reboot in a couple of seconds. So overall that situation seems to have improved dramatically. In my environment, at least.

  • "it's easy to contemplate these tasks being accomplished . . ." without security, without reliability, without stability, without privacy, without confidentiality, without accountability, without redundancy.

    If I were to do that, I'd be in breach of at least half of my NDAs, and a few of my SLAs.

  • by Anonymous Coward

    The biggest change has been in management, who are now trained to outsource anything and everything. Their answer to every question is to outsource it. If an organization has developed internal expertise in some in-depth area, the management will outsource whatever it is, even if they throw away the expertise in the process. And they'll probably fire the employees with the now-useless expertise and give themselves bigger bonuses. So the move to the "cloud" is not being driven by technical people, it's drive

  • 10 years ago really wasn't that big a deal. By 2003, VPN (IPSec and OpenVPN) was fairly robust, and widely supported. PPTP was on the way out for being insecure. Internet was most everywhere, and at decent-if-not-great throughput. Go back five or ten years before *that*, and things were much more difficult: connectivity was almost always over a modem; remote offices *might* be on a BRI ISDN connection (128 kb/s), probably using some sort of on-demand technology to avoid being billed out the wazoo due to US telcos doing this bizarre, per-channel surcharge for ISDN. PPP was finally supplanting (the oh, so evil) SLIP, which made things better, assuming your OS even supported TCP/IP, which was not yet clearly the victor -- leading to multiple stacks to include MS and Novell protocols.

    All in all, 2003 was about when things were finally getting pretty good. Leading up to 2000 had been a tough row to hoe. And let's just not even go before that -- a mishmash of TCP/IP, SNA, SAA, 3270, RS-232, VT100, completely incompatible e-mail protocols, network protocol bridges, massive routing tables for SAPpy, stupid protocols... a 100% nightmare. Very, very glad to have left those days behind.

  • As in, AD was mostly mature, Win2003 was out, Linux was real, and PCs were commodities. An IT infrastructure now vs _20_ years ago on the other hand would be more interesting. Not much has happened since 2003.

  • Since the NSA spying has been confirmed, I feel that I am obligated to explain to everyone (I work at a corporate level with many other integrated departments) that things have changed and nothing is secure anymore. So on the level of business buyouts, where secrecy seems to be sooo important, sending all of your email through gmail isn't a good idea anymore, as all of your data is compromised.

    One could almost make a living off of selling slackware boxes running sendmail with mimedefang and spamassassin as the
  • One of the biggest changes I have seen, along with some of these others that have been posted, is the reduced number of wires we have to run to places. No thicknet, coax, dedicated, or even Ethernet lines. WireLESS is the infrastructure, and the mobility it allows is wonderful. The reduction in costs is brilliant. Thanks, smart people everywhere who keep advancing our profession -- this one's for you.
