
10 Tips For Boosting Network Performance

snydeq writes "InfoWorld's Paul Venezia and Matt Prigge provide hands-on insights for increasing the efficiency of your organization's network. From losing the leased lines, to building a monster IT test lab on the cheap, to knowing how best to accelerate backups, each tip targets a typical, often overlooked IT bottleneck."
  • Get drunk. (Score:2, Funny)

    by Anonymous Coward

    Unplug wires in network closet.

  • Backups (Score:5, Funny)

    by natehoy ( 1608657 ) on Tuesday June 01, 2010 @04:16PM (#32424096) Journal

    I learned from the BOFH that the fastest backups are written to /dev/null.

    • by Amouth ( 879122 )

      I've seen people use /dev/null as temp space... as long as you don't lose your handle to the file, it is still readable.

      • by linzeal ( 197905 ) on Tuesday June 01, 2010 @09:01PM (#32426798) Journal
        When /dev/null starts giving access to all the files it has gobbled up over the years, I imagine it would be like the gates of hell opening. Dennis Ritchie as pestilence will ride a black horse made of swarming bits, astride with other famous Unix dudes (imagine your own!). Sysadmins who have been practicing the arcane arts of administrating access to Hell's one and only 9600 baud BBS running Minix will rise, hungry for bandwidth, porn access and hot pockets.
    • I use it for my mailbox.

    • Re: (Score:3, Funny)

      by dkleinsc ( 563838 )

      Also, the fastest way to speed up a network is to reduce the number of lusers. Completely demoralize them, electrocute them, slip a laxative into their drink; so many options, so little time.

  • Switching from Ts to Cable Internet service at work would get you fired within a week, since within that amount of time you will see downtime.

    • by afidel ( 530433 ) on Tuesday June 01, 2010 @04:47PM (#32424490)
      It depends, we have some sites that have done very well with cable providers on business class accounts (I assume those that have separate channels for business class), and less so with others. Our biggest problem has been the lack of any teeth to an SLA when we did have problems, which is why I would never move our HQ which has nearly half our people and which hosts remote access for the rest. For a remote office where they can always fall back to 3G tethering if they have an outage for a day or two and use our Citrix farm it's a great way to get bandwidth on the cheap.
      • by h4rr4r ( 612664 )

        That is our issue as well; TWC is a horrible cable provider, and no "business class" ISP offers a real SLA.

        • by PRMan ( 959735 )
          And yet I had TWC Business for 2 years at my office and it went down only twice for about 3 hours total in that time.
      • by Amouth ( 879122 )

        What we did for a bit was run leased lines plus a faster connection with a lower-quality SLA. Everything incoming (hosted services) was on the leased lines, and all the office NAT traffic was on the faster connection, with a pool fail-over to the leased lines.

        It worked well, except that it took constant monitoring, because when the fast connection went down it never took the interface down. If a frame circuit dies or even hiccups, the interface drops and the router knows; if the cable line gets cut, the router has no idea.

    • I run IT for a cab company: 250 cabs, 25 workstations, 3 servers and so on. I use TWC Business Class for Internet, with a single T1 and a DSL line. The DSL is only for the cameras. I get cheap high-speed Internet for everyone through the cable, and on the rare occasion it has a hiccup we have the T1 line for backup. The Internet slows, but we keep going.
  • 11. (Score:4, Funny)

    by Monkeedude1212 ( 1560403 ) on Tuesday June 01, 2010 @04:20PM (#32424170) Journal

    Stop your IT Department from visiting Slashdot

    • Re:11. (Score:5, Insightful)

      by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Tuesday June 01, 2010 @04:28PM (#32424260) Homepage Journal
      1. Ban Quake during work hours
      2. Ban Microsoft shares
      3. Ban NFS
      4. Put users on Linux and servers on NetBSD
      5. Have all web traffic go through Squid caches
      6. Use gigabit or ten gig ethernet for LANs
      7. Ensure the switches can actually carry all the traffic, not just the traffic from one line
      8. Segment the network according to where the traffic is, not where the politics are
      • 2. Ban Microsoft shares
        3. Ban NFS
        4. Put users on Linux and servers on NetBSD

        If you are banning MS style shares, and also banning NFS, how exactly *do* you want all your users on Linux desktops to access their data on the BSD servers? Might as well just ban all TCP/IP traffic from the network, and note that you now have much more available bandwidth.

        • Re: (Score:3, Interesting)

          by jd ( 1658 )

          Banning TCP/IP can help, in some circumstances. There are circumstances where shifting the resend mechanism out of the low-level protocol is actually the best option. This basically emulates TCP capabilities over UDP. (This has other advantages. You can multicast UDP, you can't multicast TCP, which helps sending the same data to multiple machines.) NACKing unreceived packets vs. ACKing the received ones also cuts bandwidth usage -- but you've got to be careful. Either the NACKs have to be sent via a reliabl
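          A minimal sketch of that idea, assuming a made-up multicast group/port, fixed-size chunks, and a receiver that somehow already knows the chunk count; a real implementation would also need retransmit limits, timeouts tuned to the link, and flow control:

            # Hypothetical sketch of NACK-based delivery over UDP multicast.
            # Group, port, and chunk size are made-up values; the receiver is
            # assumed to already know how many chunks to expect.
            import socket
            import struct

            GROUP, PORT = "239.1.2.3", 5007
            CHUNK = 1024

            def send_data(data):
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
                chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
                for seq, chunk in enumerate(chunks):
                    # Multicast each chunk once; nobody ACKs anything.
                    sock.sendto(struct.pack("!I", seq) + chunk, (GROUP, PORT))
                sock.settimeout(2.0)
                try:
                    while True:  # resend only what gets NACKed
                        nack, addr = sock.recvfrom(4)
                        seq = struct.unpack("!I", nack)[0]
                        sock.sendto(struct.pack("!I", seq) + chunks[seq], addr)
                except socket.timeout:
                    pass

            def receive(total_chunks):
                sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                sock.bind(("", PORT))
                mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
                sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
                sock.settimeout(1.0)
                got, sender = {}, None
                while len(got) < total_chunks:
                    try:
                        pkt, sender = sock.recvfrom(4 + CHUNK)
                        got[struct.unpack("!I", pkt[:4])[0]] = pkt[4:]
                    except socket.timeout:
                        if sender is None:
                            continue
                        # NACK only the gaps instead of ACKing every packet.
                        for seq in range(total_chunks):
                            if seq not in got:
                                sock.sendto(struct.pack("!I", seq), sender)
                return b"".join(got[i] for i in range(total_chunks))

          The point is that receivers stay silent unless something is missing, so the steady-state overhead is one multicast stream plus the occasional NACK.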

      • Re:11. (Score:5, Funny)

        by lgw ( 121541 ) on Tuesday June 01, 2010 @05:22PM (#32424910) Journal

        2. Ban Microsoft shares
        3. Ban NFS

        If you ban CIFS and NFS, what's left? Sneakernet has great bandwidth, but the latency sucks and it's a bitch to search.

        • Re:11. (Score:4, Funny)

          by hedwards ( 940851 ) on Tuesday June 01, 2010 @05:48PM (#32425204)
          We use carrier pigeons, they can only take a couple "packets" of 16gb or so, but it only takes a few minutes for them to cross the city. We also use carrier rats internally as they can do the same thing with even higher capacity. We tried to work with carrier snails for a while, not sure why that didn't work out, but the packet never did arrive in San Diego like we expected. Snail mail my ass.
        • Re:11. (Score:4, Informative)

          by jd ( 1658 ) <imipakNO@SPAMyahoo.com> on Tuesday June 01, 2010 @06:01PM (#32425356) Homepage Journal

          What's left:

          • Andrew File System
          • Ceph
          • Lustre
          • GlusterFS
          • POHMELFS
          • Parallel Virtual File System
          • CODA

          There's probably a few others I've forgotten.

          • by lgw ( 121541 )

            Well, those are about as practical as replacing all the Windows boxes at my company, so why not!

          • by afidel ( 530433 )
            GFS is probably the only one of those I would consider production ready based on the user sessions I've attended at various industry tradeshows.
            • Re: (Score:3, Informative)

              by jd ( 1658 )

              AFS has been around a LONG time and I'd hate to be within a mile of you if you go around telling IBM that the distributed file system they ship on their mainframes isn't production ready. However, if you want another option, try Polyserve FS. That is most certainly production-ready.

              • Re: (Score:3, Funny)

                by afidel ( 530433 )
                I was specifically thinking of OpenAFS when reading AFS as I don't work with dinosaur herders =)
          • Re:11. (Score:4, Funny)

            by hawaiian717 ( 559933 ) on Tuesday June 01, 2010 @07:20PM (#32426070) Homepage

            AppleShare! For even more fun, run it over AppleTalk instead of IP.

        • by Amouth ( 879122 )

          SFTP with random block I/O support...

          if you have a client that is worth a shit, it works surprisingly well.

      • by Colin Smith ( 2679 ) on Tuesday June 01, 2010 @05:25PM (#32424942)

        Run remote desktops. Bandwidth consumption to the desktop drops dramatically.
        Run your heavy network I/O over the switch stacking fabric, where you've got shit loads of bandwidth. Channel bond.
        Separate access ports/switches and storage network ports/switches. Use jumbo frames on the storage network, but don't route them.
        Prefer shared memory first, then Unix domain sockets, then TCP/IP on the LAN, and only then the WAN: microsecond (or better) latency vs. milliseconds or seconds (see the sketch after this list).
        Dedicate servers to applications, take advantage of copy on write & modern memory management.
        Let your VM management hold a significant proportion of dirty pages. WTF is the point of loads of RAM if you insist on running at disk speed? But do use a logged filesystem.
        Use a load management system. Grid Engine, Condor etc.
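
        A rough illustration of the "shared memory, then Unix domain sockets, then TCP" ordering, assuming a Unix-ish box; the socket path, port, and iteration count are placeholders, and absolute numbers will vary wildly by host:

          # Rough sketch: time request/response round trips over a Unix domain
          # socket vs. TCP loopback. Path, port, and counts are placeholders.
          import os
          import socket
          import time

          UNIX_PATH = "/tmp/demo.sock"
          TCP_ADDR = ("127.0.0.1", 5555)
          ROUND_TRIPS = 10000

          def time_round_trips(server, make_client):
              # Fork a trivial echo server, then time round trips from a client.
              if os.fork() == 0:
                  conn, _ = server.accept()
                  while True:
                      data = conn.recv(64)
                      if not data:
                          os._exit(0)
                      conn.sendall(data)
              client = make_client()
              start = time.perf_counter()
              for _ in range(ROUND_TRIPS):
                  client.sendall(b"x")
                  client.recv(64)
              elapsed = time.perf_counter() - start
              client.close()
              os.wait()
              return elapsed / ROUND_TRIPS

          if os.path.exists(UNIX_PATH):
              os.unlink(UNIX_PATH)
          unix_srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
          unix_srv.bind(UNIX_PATH)
          unix_srv.listen(1)

          def unix_client():
              s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
              s.connect(UNIX_PATH)
              return s

          tcp_srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          tcp_srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          tcp_srv.bind(TCP_ADDR)
          tcp_srv.listen(1)

          print("unix domain:", time_round_trips(unix_srv, unix_client))
          print("tcp loopback:", time_round_trips(tcp_srv, lambda: socket.create_connection(TCP_ADDR)))

        On most hosts the Unix-domain round trip typically comes out cheaper than TCP loopback, and both are thousands of times cheaper than a WAN round trip.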

  • Backup to tape? (Score:2, Interesting)

    by nizo ( 81281 ) *

    Seriously, does anyone backup to tape anymore?

    • Re: (Score:2, Informative)

      by Tinctorius ( 1529849 ) *
      CERN does.
    • I do :(

      I hate my tape system with every ounce of my being. But for the time being, I'm stuck with it.

    • I do. What do you use that's cheap and 1.6 terabytes in size? I need 4 new ones every month.

      • by nizo ( 81281 ) *

        Disks. Not only are they way faster than tapes, but they aren't dependent on a tape drive. Picture your backup tapes. Now picture how useless they would be if your tape drive broke (or was destroyed in a disaster). Disks on the other hand can be plunked down into pretty much any machine and accessed.

        I haven't priced tapes lately; how much does it cost just for tapes for one backup? Counting the cost of a tape drive (a spare probably wouldn't be a bad idea; see above) and the cost per tape, and suddenly disk

        • by h4rr4r ( 612664 )

          30 of them does not seem cost effective, also I am not sure I trust disks for long term offsite storage.

          • 30 of them does not seem cost effective, also I am not sure I trust disks for long term offsite storage.

            Doesn't have to be 30 of them; dual-layer DVDs will hold 8 gigs, and I have yet to require more than 2 of them for all my data, period. Granted, that's personal data; your mileage may vary. As for your second point, we're talking classic magnetic tape, right? I dunno...

            • Re: (Score:3, Informative)

              by h4rr4r ( 612664 )

              8 gigs is nothing. We do nightlies into the TBs. If this is not what you do for a living, you probably lack the experience to be making valuable input.

              I need 30 blocks of whatever for our rotation system.

        • Re:Backup to tape? (Score:4, Interesting)

          by afidel ( 530433 ) on Tuesday June 01, 2010 @04:48PM (#32424508)
          I have almost 5k tapes offsite on legal hold; how much would that cost in HDDs and storage fees vs. tape?
        • Re:Backup to tape? (Score:4, Insightful)

          by QuantumRiff ( 120817 ) on Tuesday June 01, 2010 @04:50PM (#32424536)

          unless they are physically damaged, or get too much voltage applied to them, and fry the boards, etc.

          Tape is designed to be a long term, shelf stable investment. How many old MDF hard drives can you access now? You can go to IBM right now, and order tape drives that work with mainframes from the same era. You will pay out the nose, but they are available.

          • Re:Backup to tape? (Score:5, Insightful)

            by juuri ( 7678 ) on Tuesday June 01, 2010 @05:34PM (#32425022) Homepage

            You sound like someone who has never been responsible for long-term backup storage. Stuff isn't just thrown on a tape and stored offsite for years. Responsible DR requires you to be constantly shifting all your long-term storage onto new methods. You wouldn't have MDF hard drives with valuable data on them, or even legacy data, as all that data should have been MOVED and VERIFIED onto current media.

        • Re:Backup to tape? (Score:5, Informative)

          by Monkeedude1212 ( 1560403 ) on Tuesday June 01, 2010 @04:52PM (#32424558) Journal

          I look at the tapes, and yes, I know how useless they'll be in about 3 years' time; we'll have migrated to a new system that isn't compatible with this one. I look at the backup tapes from 1999 and how we don't even have a tape drive for them anymore, but should we need to access them, we'll probably hunt one down.

          What kind of disks are you talking about? Well, I need over 1TB of space per backup; at the end of each month, 4 different 1+ TB backups need to be stored indefinitely. So I can't use floppies, CD/DVD/BRD...

          Because hard disk drives go through different mediums too, you know; I can't plug my SCSI into a SATA port. I am not entirely sure that any hard drive I use today will be accessible 10 years from now. And let's look at the prices for a 2TB hard drive (since that's what I'd need). Let's say I get lucky and get them for $100 each. Tapes I can get for $30.

          By using tapes we get the size we need, though the speed is slow, for the right price. Saving almost $3,000 a year by using tapes (4 a month x 12 months x the roughly $70 per-unit difference).

          • by afidel ( 530433 )
            Tape speed isn't slow; LTO4 will do 240 MB/s with 2:1 compressible content, and your source probably can't keep up with that for most types of backups.
          • Hard disks don't go through revisions as quickly as tapes do. And on top of that, you're not going to have to worry about whether or not you can read the HDD, mainly because you'll know: either the interface is supported in the other machine or it won't be. On top of that, any disk made in the last 15 or so years can be read with technology that's readily available today. But really you shouldn't have your disks sitting that long, because it ends up being cheaper to dump the older ones onto newer larger ones anyw
          • Re: (Score:3, Informative)

            by Sandbags ( 964742 )

            SATA ports have been on mainboards for nearly 10 years. IDE is a near 20 year old technology and IDE drives are still available. The format methods for disks are current, and data is EASILY migrated from one partition format to another. SATA 6 is backward compatible with SATA I drives and PCI IDE adapters cost about $15. (or USB external adapters)

            Backups should not go 10 years without being migrated, and disk hardware 10 years from now is practically guaranteed to be available to read your disks, and le

        • Re: (Score:2, Insightful)

          by XXeR ( 447912 )

          I haven't priced tapes lately

          That's too bad. If you did, you'd know why many of us still use tape...especially in times like this where every penny matters.

        • How do you back up TBs of data across many drives? Then how do you ensure your disks don't get damaged on the ride to the bank/vault? How do you store hundreds of disks?

          If you're a small business that can get away with backing up to a couple external drives then you probably don't need tapes. If you can afford to have ALL of your data replicated to multiple sites and those sites can keep backups/archives running on live disks then you probably don't need tapes.

          In my case, today I sent out 9 LTO4 tapes
          • Re:Backup to tape? (Score:4, Interesting)

            by turbidostato ( 878842 ) on Tuesday June 01, 2010 @08:23PM (#32426502)

            You are right about tapes being more resilient, cheaper, and coming in a better form factor (not a minor concern; in fact, that's what makes tapes the proper choice most of the time). But you are wrong about everything else.

            "How do you backup TB's of data across many drives?"

            Just exactly as you do with tapes.

            "how do you ensure your disks dont get damaged on the ride to the bank/vault?"

            By using careful transportation? Heck, if we can move a Ming dynasty jar all across the world, we can certainly move a bunch of SATA disks to the vault.

            "today I sent out 9 LTO4 tapes (each holds upto 1.6TB) to the vault. I couldn't manage 9 disks."

            You must be joking. 9 3.5" disks fit comfortably in a cardboard box protected with bubble plastic. Can't you manage *that*? Really?

            "With tapes I just put them in the tape library and it manages everything itself"

            Do you mean a cheap disk cabinet wouldn't do the same? My two 15-disk SATA cabinets must be a matter of magic, then.

            "moves them around"

            Of course your tape library moves the tapes around. That's because readers are so expensive that it only has one or two of them instead of fifteen. Two 8-port Areca cards won't need to move any disk around: they provide enough ports to access all of the disks at the same time.

            "knows which tape has what data, what can be overwritten, etc. Everyday it gives me a list of tapes to bring back from the vault and it gives me a list of tapes to take to the vault."

            Exactly the same with disks, of course, since that's a matter of software, not of the physical media. Oh, and you'll get decent speed for random reads (like when recovering a single file), which you can't dream of with tapes.

            "The tapes cost about $40 each. A drive costs probably $1000."

            An LTO4 will probably cost you closer to $50 than $40, but anyway. Of course, a 2TB disk will cost you about $150, not $1,000. The cost per GB is still on the side of tapes (roughly $0.03/GB for a $50, 1.6TB tape versus about $0.075/GB for a $150, 2TB disk), but it's not so far from disks. And disks can be accessed randomly, and stand up to read/write cycles orders of magnitude beyond tapes, so they are quickly closing the gap.

            "My tape library cost like $10,000, it has two drives and holds around 40 tapes."

            A SATA disk cabinet will cost you about $1,500, holds 15 disks, gives you simultaneous random access to all of them, *and* is easily upgraded to bigger disks when they become affordable.

        • Drop a disk, there goes your backup.

          I think the reason most people are still using tape backup systems is that they are required to, they work, and the people who pay the bills trust them. I can say that disk backup is the way to go all I want; the tote box full of dead hard drives says otherwise. Granted, these are desktop and laptop drives, not server drives, but the boss does not see that. They just see a box of dead hard drives.

          • by lgw ( 121541 )

            Disk backup only makes sense if you're shipping bits off-site to backup disks, or as an onsite cache of your real backups. Shipping the disks themselves is a bit silly.

            • by afidel ( 530433 )
              So you don't believe in an oh s**t offline backup? If not then I'd hate to be there when you find out why people with experience insist on it.
    • by Pop69 ( 700500 )
      Yup, cheap and high capacity.

      Besides, they're easy to take offsite in a pocket and if a backup isn't off site then it's just a copy.
      • by nizo ( 81281 ) *

        I looked around a bit, and it looks like tapes cost over $100 for 1.5TB of capacity (uncompressed). Throw in $1000 for a tape drive and the whole tape thing isn't looking so hot....

        We take disks off site, though I will grant that dropping them probably isn't a great idea. I've used padded Pelican cases for transport before without any problems thus far.

    • by rm999 ( 775449 )

      Tape is made for deep archiving, meaning you probably won't need to read the data anytime soon, but when you do it will be there. It is cheaper and more reliable than disk for this. Therefore, a lot of people still use them.

      • by nizo ( 81281 ) *

        Now archiving I can see, though then the question becomes, is there a working tape drive that can read these tapes?

        • by h4rr4r ( 612664 )

          You can buy tape drives from decades ago, it gets expensive but you will be able to get them.

  • Don't trust articles that have:

    Created 2010-06-01 03:00AM

    before the "well thought out" advice.

  • Like 'know your apps' means anything in the corporate world, especially when apps are custom-built. What are you going to do, replace a custom-built app with something else? If it were that easy, why was it custom-built in the first place? Sure, some custom apps can be replaced with out-of-the-box stuff, but seriously speaking, most cannot, and then your administrator is in the hands of the geniuses in the management, business, marketing, and software development departments :)

    • Re: (Score:2, Insightful)

      by xianthax ( 963773 )

      I think you misunderstood.

      Know your apps means knowing their bottlenecks and how to alleviate them.

      Some apps have high sustained disk reads, some writes.

      Some have high amounts of random reads, some random writes, some both.

      Some apps are I/O bound, some memory bound, some CPU bound.

      The source of the app has nothing to do with your ability to monitor the operation of the app and determine its infrastructure needs.

      • I understood very well; my point is that in an environment with many different apps, it is going to be extremely difficult to first 'know' them and second to optimize for them. I am remembering a few places I worked at: there is no time for an admin to do even normal everyday activities, like various heat tickets, forget about having time (and a lab!) to do actual studying of apps and possible optimizations.

        Hey, as I said, the advice is wonderful and those who can afford it (like Google I guess) are doing it a

        • A better article would be one that identifies HOW to "know your apps" rather than just telling you that you should.

          What tools are available. How to use them. What to look for in the most common circumstances.

  • by ickleberry ( 864871 ) <web@pineapple.vg> on Tuesday June 01, 2010 @04:31PM (#32424296) Homepage
    Just give Eric Schmidt a call, tell him you have nothing to hide from his company or the government and they will replace all your machines with shiny new Google Chrome OS based "Net tops", put all your data on their servers, give you a brand new direct fibre optic connection to their nearest office and all they want in return is the ability to meticulously sift through your data in order to find the best way to bombard you with text-based ads.

    Everything is more shiny with Google.
    • all they want in return is the ability to meticulously sift through your data in order to find the best way to bombard you with text-based ads.
       

      And to insert their ads on your printed reports.

    • Everything is more shiny with Google.

      Screw mirror finish, I want my car to have a Google finish and blind everyone with the reflection!

    • by Jeng ( 926980 )

      As long as it's not Flash-based web banners.

  • 2Base-TL (Score:5, Insightful)

    by thule ( 9041 ) on Tuesday June 01, 2010 @04:31PM (#32424298) Homepage
    What reason is there to run T1/T3 anymore? I know, by definition, the regulation over T1/T3 guarantees reliability. I have dumped T1s and switched to 2Base-TL (aka Metro Ethernet) and it is extremely reliable. For me, the "more reliable" argument doesn't hold much. The latency is very, very good -- often below 10ms. Even if the network goes down, I can afford some sort of backup link. I'm paying under $1,000/month for 10 Mbit (symmetrical). The footprint for 2Base-TL is pretty good because it is based on DSL technology. It doesn't have the reach that T1s have, but it isn't bad. The big difference is that it spreads the signal over multiple pairs of wire (in my case, 8 pairs) instead of a single pair.

    If your company has T1s, shed yourself of the "regulated" links and check out 2Base-TL. You will be glad you did.
    • Re:2Base-TL (Score:5, Interesting)

      by BagOBones ( 574735 ) on Tuesday June 01, 2010 @04:46PM (#32424476)

      For our US offices, all we can get with a decent SLA is factors of T1; in Canada we get fiber/Ethernet service 10x faster for the same cost and SLA.

    • by h4rr4r ( 612664 )

      What kind of SLA do you get on that?

      Uptime is more important than speed to some folks.

    • by afidel ( 530433 )
      We pay $3k for a 20 Mbit commit, 45 Mbit burstable, on a DS3. I'll take that and an SLA with teeth over your metro Ethernet solution, considering the annual difference in cost would be wiped out in ~30 minutes of lost productivity.
    • We run T1's everywhere, sometimes two in a bundle. It sucks but it's available almost anywhere, unlike metro ethernet.

      According to AT&T [att.com]

      This service is currently available in the following states:

      • Alabama
      • Florida
      • Georgia
      • Kentucky
      • Louisiana
      • Mississippi
      • North Carolina
      • South Carolina
      • Tennessee
    • That's handy, if all your offices are in one town. If you have offices all over the country, it's difficult to deal with multiple providers and SLAs, and to create VPN links between them (and the fun of monitoring all that!).

      Also, with T1's, if you have a bunch going out to different offices, you can have the provider MUX them together, so your core location has one DS3 coming in, carrying all your T1's on it, instead of dozens of CSU/DSU devices plugged into routers.

  • Citrix/VDI/etc (Score:3, Insightful)

    by nurb432 ( 527695 ) on Tuesday June 01, 2010 @04:35PM (#32424326) Homepage Journal

    Get rid of fat clients, that will do wonders to reduce your network bandwidth needs out to the customer. Then beef up the datacenter network.

    • Re: (Score:2, Interesting)

      by xianthax ( 963773 )

      This is probably the first time I've seen the claim that thin clients _reduce_ network traffic.

      Care to elaborate?

      • by afidel ( 530433 )
        VS having everyone drag files and email all over the WAN? Yeah it will significantly reduce your network traffic.
        • Re: (Score:3, Insightful)

          by BitZtream ( 692029 )

          Not.

          Sending an image of an email in Outlook that takes half a meg EVERY time it gets viewed is hardly better than sending 15k of HTML to the client, which is cached and displayed locally.

          There's a reason we don't just send prerendered images over the web instead of HTML.

          Resending a large image every time you hit backspace in Word is hardly an intelligent use of bandwidth.

          • Re: (Score:3, Informative)

            by afidel ( 530433 )
            Hehe, you have no clue what you are talking about, ICA is completely usable over dialup, there is no half meg image.
      • by nurb432 ( 527695 )

        The last place I was at where we did this on a large scale, we were running Oracle apps at the client. Moving to thin clients reduced our network use a ton (and saved us from having to update our workstations... why client/server apps need higher-end workstations is still beyond me, and it sort of defeats the purpose).

        Sure if all your users do is browse pretty pictures on the internet it might not help much, but sticking to regular productivity apps will.

        VDI should help in either case, working on a POC now.

  • Back to 56k (Score:4, Funny)

    by Mishotaki ( 957104 ) on Tuesday June 01, 2010 @04:35PM (#32424328)
    Slow down your internet connection to a single 56k line... then people will stop trying to use it to look for porn and all the useless crap searches they do on google... You'll also save some money with the monthly bills!
  • monitoring tools (Score:4, Interesting)

    by WarJolt ( 990309 ) on Tuesday June 01, 2010 @04:36PM (#32424342)

    Once I told a coworker about eMule. He downloaded and installed it. The next morning the CFO comes to me... "Have you ever heard about eMule?"... The infrastructure was screwed up, but instead of fixing it they waited for p2p to bring the network to its knees. The best way to test a network is to see how many simultaneous p2p connections it can handle before crapping out. Needless to say, there were some consequences for that employee.

    • Re: (Score:3, Insightful)

      by BagOBones ( 574735 )

      No the best way is to see if p2p is already blocked.. Tossing bandwidth at the problem is not always the best solution.

      • Re: (Score:3, Informative)

        by clarkn0va ( 807617 )

        Handling p2p is not so much about bandwidth as it is about routing capacity and QoS. There's a reason that a proper Linux-equipped home router can withstand torrents with literally thousands of open connections, while your typical DLink or Trendnet will buckle somewhere around 150, and I don't care what your link speed is.

        Similarly, a good healthy torrent can saturate just about any WAN link you want to throw at it, but only a proper QoS solution will keep a 1 Mbit connection responsive under a comparable load.

  • Mostly Worthless (Score:4, Insightful)

    by Rantastic ( 583764 ) on Tuesday June 01, 2010 @04:46PM (#32424478) Journal

    It frightens me to think that there are people getting paid to take care of enterprise systems that would not already know everything in this article. Mostly, it reads like a thinly veiled ad for VMWare products.

  • The article suggests things that people worth their IT salt should have already implemented, or at least investigated. Really baseline stuff there.

    However, one big oversight I see a lot w.r.t. backups and local networks that toss large amounts of data around is not configuring jumbo frames. This is often forgotten about when throughput is getting tight.
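
    As a minimal sketch (Linux, with a placeholder interface name and peer address), bumping the MTU on a dedicated storage NIC and then verifying the path end to end might look like this; the switch ports and every host on that storage segment must allow jumbo frames too, or you get silent fragmentation problems:

      # Hypothetical sketch: raise the MTU on a storage-facing interface to 9000.
      # "eth1" and "192.0.2.10" are placeholders; run as root, and only where the
      # switches and all peers on that segment also support jumbo frames.
      import subprocess

      STORAGE_IFACE = "eth1"
      JUMBO_MTU = "9000"

      subprocess.run(["ip", "link", "set", "dev", STORAGE_IFACE, "mtu", JUMBO_MTU],
                     check=True)

      # Verify end to end with an unfragmentable ping:
      # 8972 = 9000 - 20 (IP header) - 8 (ICMP header).
      subprocess.run(["ping", "-c", "3", "-M", "do", "-s", "8972", "192.0.2.10"],
                     check=True)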

  • Run ntop off a SPAN port or tap. You'll see the majority of your network traffic is from users idling away on things not quite work-related. Separate egress traffic on ports 80 and 443 with Linux htb, tcng, or equivalent profiling. It saves you bandwidth that Exchange will immediately suck up.
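
    A rough sketch of that kind of HTB setup, wrapped in Python only for illustration; the interface name and rates are placeholders, and a real deployment would size the classes to the actual uplink:

      # Rough sketch (root + iproute2 assumed): cap outbound HTTP/HTTPS below the
      # uplink so other traffic stays responsive. Interface and rates are placeholders.
      import subprocess

      IFACE = "eth0"
      LINK_RATE = "10mbit"
      WEB_RATE = "6mbit"

      def tc(*args):
          subprocess.run(["tc", *args], check=True)

      # Root HTB qdisc; unclassified traffic falls into class 1:10.
      tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "10")
      tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:1",
         "htb", "rate", LINK_RATE)
      tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", "1:10",
         "htb", "rate", "4mbit", "ceil", LINK_RATE)
      tc("class", "add", "dev", IFACE, "parent", "1:1", "classid", "1:20",
         "htb", "rate", "2mbit", "ceil", WEB_RATE)

      # Steer outbound web traffic into the capped class.
      for port in ("80", "443"):
          tc("filter", "add", "dev", IFACE, "protocol", "ip", "parent", "1:",
             "prio", "1", "u32", "match", "ip", "dport", port, "0xffff",
             "flowid", "1:20")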
  • Write your HTML in notepad, just like the linked article :)

    Seriously, I was almost shocked to see such a barebones site. It's been that long.

  • by juuri ( 7678 ) on Tuesday June 01, 2010 @05:38PM (#32425078) Homepage

    ... and if you think it is about latency you are mildly retarded, as are the writers of this general knowledge article.

    Leased lines in general have better SLAs, but that isn't even much of a point anymore, as the cheaper products "claim" to have similar ones. The difference here is: how good is that business-class DSL/fiber support at 2am? What are the odds they are actually going to be willing to send someone out to the telco closet right away if there is an issue? You buy leased lines because you need *real* support behind the SLAs... not this "well, we were down for 5 hours, so how about we credit you a day off!" bullshit.

    It's really scary what passes for "good advice" these days.

  • by JakFrost ( 139885 ) on Tuesday June 01, 2010 @07:04PM (#32425940)

    Interrupt Moderation = Disable

    Here's a real tip: disable Interrupt Moderation on your network adapter cards to achieve greater bandwidth, as much as 100%+, and lower latency (the two measures of network performance), at the expense of processor utilization due to more hardware interrupts that have to be handled.

    Instructions: In Windows, open Control Panel, Network and Sharing Center, click Change Adapter Settings, open Properties on your Local Area Connection (sometimes #2, #3, or so if you have more network cards), click the Configure button, then the Advanced tab, select Interrupt Moderation and change the value to Disabled; while there, look for any settings with the word Offload and enable them all, then click the OK button to apply the changes. This will restart your network card driver and make the settings effective.

    Most network cards from popular manufacturers such as Intel, Broadcom, Realtek, etc. hold network packets in a buffer until enough time goes by before raising a hardware interrupt and telling the processor, operating system, and network driver that there are packets waiting to be serviced. By disabling Interrupt Moderation you instruct the network driver and card to raise the interrupt every single time a packet comes in, making your processor service the network card much faster, which decreases latency on the packets held in the buffer and also increases bandwidth by allowing more packets to flow through faster. This increases your processor utilization by a significant amount (10-30%), but if you have a recent dual-, quad-, hex-, or octo-core processor and recent network drivers that are multi-threaded with multi-core and Receive Side Scaling support, then the increased processor utilization is negligible; and if you are running a network server, network performance should be a priority anyway.

    I have personally seen and tested corporate and home LAN environments using Fast Ethernet 100 Mbit/s (~11 MByte/s) go from slow 6-7 MByte/s to 10-11 MByte/s throughput, and Gigabit 1,000 Mbit/s (~100 MByte/s) go from ~30 MByte/s to 95-98 MByte/s speeds due to these changes. No other network driver setting had as much performance impact as Interrupt Moderation.

    IEEE 802.1AX (aka 802.3ad, Cisco EtherChannel)

    For advanced network performance improvement, look at link aggregation (channel trunking, link bonding, etc.) using the IEEE 802.1AX [wikipedia.org] (aka 802.3ad, Cisco EtherChannel [wikipedia.org]) protocol support in your Intel and Broadcom network adapters, using their advanced configuration utilities on your servers to bundle 2-8 Ethernet network adapters into one trunk to increase your performance. Just tell your network administrators to enable those features on your ports, and find out whether they are able to do it if your links go to the same switch, or whether they have virtual switching enabled in case your links span switches. Just think about 4 x gigabit performance if you bundle all 4 NICs on most servers.

    NetCPS

    You can test your own network performance with this simple but great utility called NetCPS. Just be sure to disable Interrupt Moderation on both of the computers on your LAN that you will be using for the performance testing; otherwise you won't be able to achieve these numbers if one of the computers can't handle the data as fast as the other. Try it with your laptop and desktop, for example.

    NetCPS [netchain.com] is a handy utility to measure the effective performance of a TCP/IP network.

    Just execute "netcps.exe -s" on the listening system and then do "netcps.exe computername " on the other computer to use the utility to test the throughput bandwidth. For Gigabit you can use the "-m1000" switch to increase the transferred amount to 1,000 MBytes instead of the default 100. Below is an example.
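
    If NetCPS isn't handy, a minimal Python sketch can measure raw TCP throughput in much the same way; the port and transfer size below are arbitrary placeholders, not NetCPS defaults:

      # Hypothetical stand-in: run "python tput.py server" on one machine and
      # "python tput.py client <server-ip>" on another. Port and size are placeholders.
      import socket
      import sys
      import time

      PORT = 5201
      TOTAL_MB = 100
      CHUNK = 64 * 1024

      def server():
          srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
          srv.bind(("", PORT))
          srv.listen(1)
          conn, addr = srv.accept()
          received, start = 0, time.perf_counter()
          while True:
              data = conn.recv(CHUNK)
              if not data:
                  break
              received += len(data)
          elapsed = time.perf_counter() - start
          print("%.1f MB from %s in %.2fs = %.1f Mbit/s"
                % (received / 1e6, addr[0], elapsed, received * 8 / elapsed / 1e6))

      def client(host):
          sock = socket.create_connection((host, PORT))
          payload = b"\0" * CHUNK
          for _ in range(TOTAL_MB * 1024 * 1024 // CHUNK):
              sock.sendall(payload)
          sock.close()

      if __name__ == "__main__":
          server() if sys.argv[1] == "server" else client(sys.argv[2])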

    • by Anonymous Coward on Wednesday June 02, 2010 @01:31AM (#32428478)

      Nice informative post, interrupt moderation sure sounds interesting. Link aggregation, however, is not as useful as it sounds for the following reasons:

      1. Hardware link aggregation (link aggregation supported in silicon) works by hashing, not by distributing packets evenly across all links that are aggregated. If you can spare some time to ponder this for a moment, you will be able to see why hashing is used. In real-life situations, 4 x 1Gbps links aggregated together never equal 4Gbps of throughput (see the sketch at the end of this comment).

      2. If link aggregation is handled by the software (which is most likely the case if aggregating multiple NICs on a server) then all it really provides is redundancy. It is very difficult for an average server to process 1Gbps of incoming traffic, let alone generate 1Gbps worth of traffic. Not to mention the read/write speed of the storage device(s) used in the server.

      (Unless it's using PCIE SSDs in RAID configuration, which would be very interesting and I am dying to find out the throughput of such a configuration!)

      For once I actually know what I am talking about, so maybe I should have created an account before posting this one.
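
      A tiny sketch of point 1: aggregation hardware typically hashes per-flow fields (MACs, IPs, L4 ports) and takes the result modulo the number of member links, so a single flow is pinned to one link and only many concurrent flows spread across the bundle. The hash below is purely illustrative, not any particular vendor's:

        # Illustrative only: real switches/NICs use vendor-specific hashes over
        # the same kinds of fields (src/dst MAC, IP, L4 ports), and Python's
        # hash() is randomized per run, but the per-flow pinning is the same.
        from collections import Counter

        N_LINKS = 4

        def pick_link(src_ip, dst_ip, src_port, dst_port):
            # A given flow always hashes to the same member link.
            return hash((src_ip, dst_ip, src_port, dst_port)) % N_LINKS

        # One big flow (a single backup stream, say) uses exactly one 1Gbps link...
        print(pick_link("10.0.0.5", "10.0.0.9", 51515, 445))

        # ...while many distinct flows spread (unevenly) across the bundle.
        flows = [("10.0.0.%d" % i, "10.0.0.9", 40000 + i, 445) for i in range(1, 101)]
        print(Counter(pick_link(*f) for f in flows))

      So a single stream between two hosts tops out at one member link's speed, no matter how many links are in the bundle.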

  • by DaveAtFraud ( 460127 ) on Tuesday June 01, 2010 @09:23PM (#32426974) Homepage Journal
    RFC 1925 [rfc-archive.org]

    Cheers,
    Dave
