New Data Center Standard

mstansberry writes to tell us that the Telecommunications Industry Association (the people who brought you the CAT standards for unshielded twisted-pair cabling) recently published a 148-page document meant to standardize the design considerations for every single aspect of a data center. The standard covers everything from site selection to rack-mounting methods.
  • twisting (Score:5, Interesting)

    by ( 463190 ) * on Wednesday August 31, 2005 @01:32AM (#13443469) Homepage
    Speaking of CAT standards, has anyone else had a good look at the differences between CAT3, CAT5, etc?

    CAT5 just seems to be twisted a little tighter, but CAT6 actually modifies the twist gradually, in a cycle that repeats every few feet, with each pair 90 degrees "out of phase" from the next. Plus there's (sometimes) a plastic "spine" in there to maintain spacing and/or bend radius. It's not obvious to me how varying the twists-per-foot along the cable should help - anyone know?
    • Re:twisting (Score:5, Informative)

      by Mr. Flibble ( 12943 ) on Wednesday August 31, 2005 @01:38AM (#13443505) Homepage
      Because of the extra twisting there is less crosstalk. The less crosstalk, the more information you can pump down the wire; thus the difference between Cat 3, 5 and 6.

      (FYI, crosstalk is interference from a parallel channel in the wire.)
      • The question was how exactly does varying the twist rate help? (See the uncle post for a starting point.)
        • by Anonymous Coward
          To "detune" the coils. If the twists were equidistant (especially in the straighter parts) it would resemble a tuned antenna.
      • Re:twisting (Score:4, Informative)

        by william_w_bush ( 817571 ) on Wednesday August 31, 2005 @02:38AM (#13443713)
        Also yes and no: the added twisting makes the crosstalk elimination better at different frequencies. At 1 GHz a single turn is enough to cause the antenna effect, while at 10 MHz (the original spec) you need something like half a meter.
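The frequency scaling mentioned in the parent post can be sanity-checked with a rough back-of-the-envelope sketch (illustrative only; the 0.65 velocity factor is an assumed typical value for UTP, not a figure from this thread): an untwisted section starts acting like a tuned antenna as its length approaches a quarter wavelength of the signal inside the cable.

```python
# Back-of-the-envelope: at what length does an untwisted section of
# cable approach a quarter wavelength (and so start acting like a
# tuned antenna)? Velocity factor of 0.65 is an assumed typical value
# for unshielded twisted pair, not a figure from the thread.

C = 299_792_458          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.65   # assumed propagation speed ratio inside UTP

def quarter_wavelength_m(freq_hz: float) -> float:
    """Quarter wavelength of a signal travelling inside the cable, in metres."""
    return (C * VELOCITY_FACTOR / freq_hz) / 4

for label, f in [("10 MHz (early Ethernet)", 10e6),
                 ("100 MHz (Cat 5)", 100e6),
                 ("1 GHz", 1e9)]:
    print(f"{label:24s} quarter wave ~ {quarter_wavelength_m(f) * 100:7.1f} cm")
```

At 10 MHz a quarter wave is metres long, so coarse twisting suffices; by 1 GHz it shrinks to a few centimetres, which is roughly the twist pitch itself, and that is when the exact twist rate (and variations in it) starts to matter.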
      • Re:twisting (Score:2, Informative)

        by lithium100 ( 624098 )
        I'm no expert, but fundamentally it comes down to Maxwell's equations. A current is induced in a wire by a changing magnetic field, i.e. an AC signal of some frequency. Similarly, a changing electrical current in a wire will create a changing magnetic field and hence "crosstalk".

        There is one caveat though: only the components of the magnetic field orthogonal to the wire induce a current, so by twisting the wires you minimize the orthogonal components.

        At least that what
        • Additionally, twisting 2 wires together means interference from outside sources (like 50 or 60 Hz mains hum) is made as uniform as possible across them. If you put the same signal onto these 2 wires half a cycle out of phase, then at the other end some simple electronics can take the difference between them and reject the noise. I would have thought that this is an equally important aspect of the twisting.

          Does common mode rejection still play a part in modern ethernet or othe
          • Now you're talking about balanced signals, which as you mentioned take "some simple electronics", which takes "some more money". It would be interesting to see what difference there would be in error rates between balanced and unbalanced communications over CAT5

            • All Ethernet comms are balanced (differential).
              There are two ways to remove the common-mode signal from a differential pair:
              1) an op-amp
              2) a balun transformer.

              The reason manufacturers use number 1 is that it is cheaper, tunable, and weighs less.
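The common-mode rejection discussed in this subthread can be illustrated with a toy numeric sketch (a sketch only: real PHYs do the subtraction in analog hardware, and the waveforms below are invented for illustration):

```python
import math

# Toy model of a differential pair: the data signal is driven in
# anti-phase on the two conductors, while interference ("hum") couples
# onto both conductors nearly equally thanks to the twisting.
N = 1000
signal = [math.sin(2 * math.pi * i / 50) for i in range(N)]       # data
hum = [0.8 * math.sin(2 * math.pi * i / 400) for i in range(N)]   # common-mode noise

wire_a = [s + h for s, h in zip(signal, hum)]    # signal + coupled noise
wire_b = [-s + h for s, h in zip(signal, hum)]   # inverted signal + same noise

# The receiver takes the difference: the signal doubles while the
# common-mode hum cancels.
recovered = [(a - b) / 2 for a, b in zip(wire_a, wire_b)]

err = max(abs(r - s) for r, s in zip(recovered, signal))
print(f"max recovery error: {err:.2e}")
```

The arithmetic is also why tight, uniform twisting matters: it keeps the interference picked up by the two conductors as close to identical as possible, so the subtraction actually cancels it.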
    • Re:twisting (Score:5, Informative)

      by rusty0101 ( 565565 ) on Wednesday August 31, 2005 @01:41AM (#13443521) Homepage Journal
      Not sure entirely myself; however, as a thought, having a constant twist (different from one pair to the next, but the same for the length of the cable) could set up a situation where two cables are in phase at several points along the cable run, and some signal transfer may happen.

      Varying the twist rate along the run of a pair, as well as doing what you can to keep it out of phase with other pairs by braiding, or other means would make it possible to set up a longer cable run without viable phase transfer points that could cause signal bleed between pairs.

      However that's just conjecture on my part. I am sure someone will come along who can give us the math to show that my conjecture is entirely wrong.

    • Varying the twist reduces the ability to couple between strands at a regular point, essentially spreading the crosstalk from a different pair across different coupling points. Instead of a pair being completely coupled or not, the coupling is spread such that one whole interference cycle is generated, cancelling out all crosstalk, or at least more of it than a constant twist would.

      Tightly twisted pairs take longer to traverse than loosely twisted pairs, and so instead of 2 parallel antennae inducing into each other you have two an
      • Re:twisting (Score:3, Interesting)

        Oh, FYI: this doesn't help at lower speeds, but common LVDS SerDes nowadays hit upwards of 10 Gbps, so this kind of thing makes a big difference at that point, because the wavelength is small enough that variation in the cabling starts to matter.

        Basically for 1 Gbps it's just starting; past 5 Gbps (the 10G copper spec calls for this, though a working group is trying to run 10G on Cat 5 (boggle)) you need this kind of thing, and it happens to cut down on RF interference and reception, another handy thing, c
    • Re:twisting (Score:4, Informative)

      by SoloFlyer2 ( 872483 ) on Wednesday August 31, 2005 @02:26AM (#13443668)
      Varying the twist rate only helps when you have several cables together (think 4 or 5 cables in a conduit). Basically it just reduces the chance that 2 cables next to each other are going to have exactly the same twist rate.

      In other words, it reduces cross-talk between cables :)

    • Re:twisting (Score:5, Informative)

      by DaEMoN128 ( 694605 ) on Wednesday August 31, 2005 @02:37AM (#13443706)
      I am BICC certified. There is a difference between CAT3 and CAT5. The twists are much more pronounced on CAT5. It may not look like it, but there is a big difference. There are also different amounts of twists per pair. CAT6 is something I haven't had much hands-on with (we use mostly fiber for that stuff). CAT3 has a much lower frequency response per pair than CAT5. A good cable tester can actually verify that for me if you can get your hands on one (see network analyser). Just from looking online, CAT6 has a minimum 250 MHz bandwidth while CAT5 has a minimum 100 MHz bandwidth per pair. Check here. It says CAT5e is the same as CAT6 but CAT6 is manufactured to a higher standard. I guess that the fab tolerances are tighter for CAT6.
    • More twists mean less near-end crosstalk. Cat 6 also specifies minimum bend radii. Something a girl I once made out with paid attention to.

      A good article on Cat 6 can be found at The Data Center Journal. It points out things like why you don't want to use anything that would clamp down too tightly on the Cat 6 runs, like nylon tie-wraps. Use velcro instead. That said, I've run 1 Gbps through home-made Cat 5 with no errors, albeit for short distances (less than 50 feet).

      Still, $250 just to read a standar
    • Re:twisting (Score:3, Informative)

      by ajs318 ( 655362 )
      When you pull a bell-rope, it stretches slightly. Then the stretched bit shrinks back to how it was and a higher-up bit of the rope is stretched. The stretched bit works its way up the rope to the wheel, which only moves when the last bit of rope snaps back. All this happens far too fast for you to see, but it does happen. {You might be able to see it in a Slinky spring loosely stretched out, especially if you have a camcorder that can do slow motion playback}. Due to the physical properties of the rop
  • by Trusty Penfold ( 615679 ) * <> on Wednesday August 31, 2005 @01:32AM (#13443471) Journal
    How boring ... who wants to work somewhere identical to the last place. And identical to your friends' places of work.

    How about letting a bit of originality in once in a while?

    Oh yeah ... and I'm not bloody paying $250 just to make more work for myself.

    • How boring ... who wants to work somewhere identical to the last place. And identical to your friends' places of work.

      When designing a Data Centre, I really don't think the number one priority is to make it an artistic statement or a fun place for the IT staff to hang out in.
    • by Jekler ( 626699 )
      I can't tell if you're being serious. Assuming you're being serious...

      Originality and creativity have their places in the world. Just because you have guidelines and standards doesn't mean you can't be creative. Programming languages have standards; that doesn't mean programmers can't create original programs. If there were no coding conventions and standards, you'd almost never be able to examine someone else's code. "Wait a second, why are all the integer variables stored as strings? And I think
  • Doomed to failure? (Score:5, Insightful)

    by Toby The Economist ( 811138 ) on Wednesday August 31, 2005 @01:36AM (#13443490)
    Be interesting to see if it's useful.

    When you specify something like a cable, it's straightforward to get it right, because the job the cable does and the way it's used is very well understood and doesn't vary between users.

    With something complex like a data center, there's so much variance in how they're operated, exactly what they do, where they are, etc...having a standard may well *not* fit everyone's needs, either because their needs were not perceived or understood at the time or because their needs simply cannot be met by the standard.

    • by JanneM ( 7445 ) on Wednesday August 31, 2005 @02:09AM (#13443621) Homepage
      With something complex like a data center, there's so much variance in how they're operated, exactly what they do, where they are, etc...having a standard may well *not* fit everyone's needs, either because their needs were not perceived or understood at the time or because their needs simply cannot be met by the standard.


      But when you have a formal standard, you have something to measure against. Every aspect of the data center design is not only standardized, but the how's, why's and therefore's are spelled out. If you suspect the standard doesn't meet your needs in some respect (a clear lack of surround sound for late-night fps tournaments, say), it makes it clear exactly how your criteria change the requirements, and it makes it much easier to see how it could impact the rest of the design.

      So even if you use not one single recommendation (we need the disco ball, damnit!), you have something reasonable and well documented to compare against, which makes your job easier.
      • AMEN (Score:3, Insightful)

        by DaEMoN128 ( 694605 )
        I agree completely. The "industry standards" for many things are used a lot, even if they don't particularly apply to your site. I would love to see people dress the cables in the back of a cabinet with uniform bend radii and with a proper service loop. I would love to see that in every data center I work in. I wouldn't have to replace cables when a channel bank gets moved. I would love to see this fiber I have to deal with be secured properly and not cinched down hard enough to break your 1st. 6th, 18
      • by PhYrE2k2 ( 806396 ) on Wednesday August 31, 2005 @08:32AM (#13445011)
        But when you have a formal standard, you have something to measure against. Every aspect of the data center design is not only standardized, but the how's, why's and therefore's are spelled out.

        But not all datacenters are equal for a reason. I've seen maybe 20-30 datacenters in the past few years for various clients and they all have different features, different offerings, and different goals.

        I'll list a few of the big differences I've seen in my experiences:

        Some want to be in the downtown core, close to many businesses, but charge a premium for the space. Others claim that being on the outskirts of the city provides security in the event of any problems (mainly hyped due to 'terrorist attacks').

        Some feel the need for N+2 generators, others more. Some feel that a fallback to city power if their PDUs ever fail is good, and others feel there should be a whole other protected power distribution system (at an extremely high cost for something rarely used).

        Some like cooling each rack from the top; others blow air up every other aisle and suck it down on the opposite one. Some cool the whole room, claiming lots and lots of cooling units around the outside do the trick.

        Some like the datacenter two stories underground. Others claim that's a first target for flooding and other problems stereotypically associated with a basement. Others say that a datacenter on the 10th floor of a tower is inaccessible and subject to other security features of the building.

        Some like dedicated buildings, others like quietly slotting themselves in office towers.

        A few I went to were monitored from 3,000+ km away, and others had 24/7 on-site staff. Some had technical electronic keys, and others a simple mailbox key. Some had biometrics, others just a key.

        One I went to even had outer walls capable of withstanding most missiles. Others had windows with only paper over them, for security reasons.

        Some let you roam freely accompanied by security personnel and simply log equipment; others weigh you on the way out to make sure that you didn't take anything you didn't show up with without signing it out.

        The point is each of these serves a very different purpose. If you are going to have lots of untrusted people working on equipment, it's important to make sure nobody takes anything. Each one has its advantage and disadvantage, and I don't think any one of them is 'right'- it's just trying to find a solution to problems that experience has provided.

        Is there a right answer with anything? Who is to say that any answer is right or wrong? They're just different solutions to the problem. If power stays up, systems are secure, systems get cooled, and the network is available, who is to say the solution is wrong?

      • So even if you use not one single recommendation (we need the disco ball, damnit!), you have something reasonable and well documented to compare against, which makes your job easier.

        Until it comes time to justify to the PHBs and bean-counters why you didn't follow every recommendation in the standard, to the letter. Sure, YOU know that your custom solution is more appropriate than the baseline, but how are you going to defend it to people that don't understand it as well?
      • I have it on good authority that some of the most important items are missing:

        1) How to determine caffeine / worker ratios, where to put the coffee pot and soda machines, whether and how much to charge, and a list of vendors who still deliver Jolt.
        2) Air lock standards so you can crank up a stereo in the machine room loud enough to drown out the machine noise, without irritating fellow workers in cubes, managers walking around, and customers waiting in the lobby with different musical preferences.
        3) Minimal
    • Large companies or gov't agencies already have standardized data center processes. Go into a cell phone tower equipment room, and you'll find that one room is practically identical to the next in most ways.

      When AT&T built its long-distance microwave network, every center was the same as well. If you're running a nationwide network of branch data centers, you need standardization.
  • by darnok ( 650458 ) on Wednesday August 31, 2005 @01:36AM (#13443493)
    I'm curious to see what this document contains: whether it's an ITIL-like view of the world (e.g. a data centre runs on change management, capacity management, problem management, ...), a hardware based view (e.g. a data centre needs a raised floor to duct cables, air conditioning, secure access, racks, ...) or something else.

    Just not curious enough to pay the price to find out
    • by fyoder ( 857358 ) on Wednesday August 31, 2005 @01:44AM (#13443528) Homepage Journal
      Just not curious enough to pay the price to find out

      Seriously. If all manuals were that expensive there would have been no 'RTFM'. It would have been 'STFM'.

    • Sure they charge $250 to look at it. If they just released it, then people would show what foolishness it is and all the flaws in it. But if you pay $250 to look at it, do you want to admit that you paid $250 for a bogus standard? Do you want to explain to your boss why you spent $250 of the company's money to do that? If you sell your time as a consultant, do you want to tell your customers you are someone who was duped into paying $250 to see 148 pages of bogus standards, or do you want to paint yourself a
      • Having something in black and white, printed by a whoopdy-do-sounding organisation, can easily save you that much when some moron electrical inspector comes around pissing and moaning about code violations because your subcontractors are from out of town and don't belong to the same union that the inspector belonged to.
    • ...don't have raised floors. They use cable trays that are hung from the ceiling. It's a beautiful thing if you've ever had to lie on your belly reaching under the floor trying to fish out a cable while the refrigerated air is blasting your face and drying out your eyes and nose. If this spec wants raised floors, I'll save more than $250 for myself and my company by designing the datacenter without this goofy spec, and use easy-maintenance stuff like overhead cable trays.
      • Actually, you can use raised floors for another purpose - cooling. Your AC pumps tons of cold air under the floor, and then you have vents under each rack. You use racks without vents up and down the sides, and you end up with a cold wind tunnel blowing right through each of your racks.

        The cleanest data center I've seen did this, ran the power under the floor, and ran all the data through overhead cable trays.
  • /. effect (Score:2, Insightful)

    by OneArmedMan ( 606657 )
    I wonder if they considered defences to the /. effect when writing this.
  • by lamasquerade ( 172547 ) * on Wednesday August 31, 2005 @01:57AM (#13443578)
    I welcome this standard with open arms. I look forward to the not-too-distant day when I will be able to buy 100m² of Standard Data Centre on eBay for $25. No more un-backed-up un-RAIDed hard drives for my mp3s!
  • Does anybody have a link to the document yet, since $1.6 per page of bs is a bit too much? What they did is essentially take all or part of the other standards which are around right now, make a cross-section, and dump it in a new document. Sort of standard history or literature thesis writing: no real research, nothing new to tell, just a new cover and 2 weeks at the typewriter.
    • Does anybody have a link to the document yet, since $1.6 per page of bs is a bit too much.

      Slashdot moral concept #7: If item is perceived to suck, stealing - oops sorry, forgot Slashdot moral concept #6 - infringing it is allowed.

      Example: "If $band would put out better songs, maybe I'd buy their album. Until then, I will continue to use BitTorrent to get their material."
      • I will not keep the document, I will not even remember it, I will just glance at it.

        Or: everything you can eat before checkout in a supermarket is free; in a restaurant, however, the rules are different again.
      • This analogy will fail when this new data center standard becomes a matter of enforcement, like those building codes that are created partially by taxpayer money but you have to pay a corporation a big pile of money just to look at them.

        Corporate welfare at its finest.

        Of course, these data center standards haven't reached that point, but who's to say they won't? It's being positioned that way. We'll see whether or not it ends up that way.
        • when this new data center standard becomes a matter of enforcement

          Which won't be anytime soon, because "TIA is accredited by the American National Standards Institute (ANSI) to develop voluntary industry standards for a wide variety of telecommunications products" (from the TIA website). They don't have the power to write laws.
          • Sample scenario:

            Hi, your datacenter isn't TIA-compliant. We won't sell you xyz hardware. We won't sell you xyz fire-suppression system.

            Hi, I notice your datacenter doesn't have a fire-suppression system. I have to close it, by law, until it's installed.

            Hi, I can't install a fire-suppression system until you bring the datacenter up to TIA-standards.

            Needless to say, TIA doesn't have to make their spec law for it to be able to screw your datacenter over.
            • "Hi, your datacenter isn't TIA-compliant. We won't sell you xyz hardware. We won't sell you xyz fire-suppression system."

              Yeah, like that's going to happen! A company refusing to make money by selling you stuff? No chance. Some company will always be happy to sell it to you. Heck, if you can show me that all companies will act in that way, then you've just pointed me in the direction of my niche market where I can make shedloads of money!
              • Sure they won't REFUSE to sell you equipment, but they could have it void the warranty or guarantee, or violate lease terms. I think this is a good thing. Suppose you are HP, leasing a half-million-dollar pair of DB servers, and the bozos deploying them have inadequate HVAC... the server room spikes to 95 degrees every time the sun thinks of shining and the servers shut themselves down based on thermal protection on the CPUs. Would you guarantee the uptime of those servers? How thrilled would you be replacing fried
  • A data center shall consist of hardware, appropriate HVAC, and gigabytes of pornography.
  • by pokka ( 557695 ) on Wednesday August 31, 2005 @02:09AM (#13443619)
    This is great news for people who host servers in colocation facilities.

    If you've ever tried to find a place to host your server in the past, you've probably found that not only does the price wildly fluctuate between hosts for no apparent reason, but also it's very difficult to determine exactly what you're getting, even if you take the time and effort to actually visit the site.

    I think that the disorganized fashion of colo services allows people to charge ridiculous prices
    and get away with things that they wouldn't be able to do in a more stable competitive environment (like charging ridiculous amounts for bandwidth overage and support).

    With some sort of standard in place, vendors will be forced to compete on more even ground, prices will be more reasonable, and users won't be afraid to leave their current colo provider because the next one could potentially be even worse. Not that it will be perfect, of course - just somewhat better.
    • "Carrier-neutral" facilities in large buildings are your friend. For example, the Westin Building in Seattle has many network providers that can be pulled into our cage independent of the company that provides the colo. This forces them to be price-competitive, as they don't hold our equipment hostage. This is why we are moving a significant number of servers from the Harbour Centre in Vancouver, BC (which is not neutral).

      If you want the best price you have to know what that price is and ask for it. They'll sa
    • If it really would lead to much less expensive colocation, why would the operators switch to a new standard so they can charge less?
      • They wouldn't have a choice. That's the beauty of *fair* competition. Their customers would either demand lower prices, or leave and go with a colocation provider who has them. The reason people don't do this right now is because "colocation" really has no standards - it could be some guy's basement or a deluxe, cardkey controlled secure building. It takes so much effort to find out if the provider you found is a good one, that it's usually cheaper to stick to who you have and not waste your time. But
  • by xmundt ( 415364 ) on Wednesday August 31, 2005 @02:15AM (#13443646)
    Greetings and Salutations.
              Interestingly enough, a quick Google search for "data center design" comes up with more hits than one can shake a stick at, ranging from free to fairly inexpensive (under $100.00). I have to admit that I wonder if THIS magnum opus has anything in it that these OTHER resources do not cover.

              It never ceases to amaze me how many books out there that are supposed to be useful learning tools are nothing more than a slightly changed rehash of the man pages for a given program.

              Dave Mundt
    • For a quick overview, there's the book from Sun, "Enterprise Data Center Design and Method", which covers concepts on dealing with airflow, rack placement, power requirements, etc. The first chapter is available from Sun Blueprints:

      Sun also has some articles on disaster planning and such in the Data Center [] section of Sun Blueprints.

      Oh ... and $250 is not a lot of money, when you're dealing with a buildout of millions of dollars, just for the data center (ie, not the actual s

  • Raised Floors? (Score:2, Interesting)

    by Anti-Trend ( 857000 )
    I can't afford the $250 at present, but I wonder if they finally did away with raised floors. It wasn't too bad of an idea around 40 years ago, but we've got cool modular racks now that make that concept moot, at least IMHO. Plus raised floors look weird, are fairly expensive to implement (especially for smaller firms with little cash), and get really nasty under there over time. Besides, telco has done without that design for quite a long time, and it seems to have worked out fine for them.


    • Re:Raised Floors? (Score:4, Informative)

      by DaEMoN128 ( 694605 ) on Wednesday August 31, 2005 @02:47AM (#13443739)
      After making my living as an installer... raised floors are a GODSEND!! They are a pain, but they look 1000 times better than cable tray or ladder rack with 2000 cables in them. They make routing cables easier. It is no fun setting up some rig in order to hang cable tray 20 ft. off the ground. You don't want ladder rack the whole way... nothing wrong with it, but why, when you can use a cable trough with a cover that allows the use of a pull tape? Raised floors help with cooling (forced air from under the cabinet). Also, they don't have to get nasty under there... just make sure your standards for presentation are clearly stated in the contracts and enforce them.
    • I wonder if they finally did away with raised floors.

      At my colo, they run cold air from the HVAC under the raised floors and suck it up through the cabinets with fans (and pressure from the HVAC). They cool the cabinets, not the entire facility. It's odd being in a colo that is warm after freezing my ass off in previous facilities, but our temperature-monitoring equipment tells us that the closed cabinets stay quite cool.
    • You would have loved the computer room I did some work in during the 1980s - the builders got the dimensions wrong and there was a 4 foot void under the tiles instead of 4 inches!! The owners had to have special pedestals made to hold up the tiles and they actually put some servers under the floor!
      • by wirefarm ( 18470 ) <{ten.cdmm} {ta} {mij}> on Wednesday August 31, 2005 @03:42AM (#13443903) Homepage
        Did they later convert that space to a cubicle farm?
        I think I worked there.
      • The data centre at CERN has a 1 or 2 metre-high void, IIRC. But then, they have rather a lot of cables. It was a deliberate decision.
      • You sure about that? I've never seen a "real" data center with just four inches below the floor. I think the least I've ever seen was maybe 3 feet, and I've been in several you could comfortably stand under.

        The bigger the space, the more serious the data center. I did six months of consulting working in a data center that had a six foot raised floor, which wasn't enough to walk easily under because of the ducting and cable trays, but made moving around and finding stuff a lot easier. They had great filtrati
    • by marquis-cablewitch ( 887511 ) on Wednesday August 31, 2005 @03:47AM (#13443918)
      But without raised floors where am I supposed to hide the bodies?
    • Re:Raised Floors? (Score:3, Informative)

      by Chanc_Gorkon ( 94133 )
      Raised floors are definitely something you need. What we did was run the backbone cable to patch panels in the server rows. The cabling to the servers runs in overhead racks with troughs for fiber. This was a huge improvement over the old room. We pretty much bought a switch and prewired every panel. Now when we need to add a server, we have our Cisco guy add it to the switch config (we also give him the MAC address at this time), they tell us which jack to use, and we get our wire expert to run the cable
    • You must not work in a large data center.

      I can't imagine what our server farm would look like if we didn't have a raised floor for routing cables, fibre, air, etc. Do you have any idea how much heat a rack of 1U servers can put off?

    • I work for a telco. I've never seen a machine room without a raised floor.
    • The last few data centers I've worked in have had raised floors for flood protection. Now that positive-pressure raised floors are out of fashion, there are a lot of raised floors out there. Not for Katrina-sized flood protection, but for garden-variety HVAC and water main leaks; you need an alarm system since the area is hidden.

      I actually was wondering if it might be good to use the raised floors as exhaust plenums. You could suck hot air near the ceiling in via raised stacks. The space was never good for
  • by DaEMoN128 ( 694605 ) on Wednesday August 31, 2005 @03:07AM (#13443809)
    From TFA, this is a "checklist" for CIOs. The last thing I need is my PHB having a list to check off and thinking they are requirements instead of suggestions. You never give a PHB this much info. They don't know what you are doing a good part of the time anyway... and a little info is more dangerous than none.

    It can be a good idea if the techs get a hold of it, though, and stop giving me 2 inches of slack on these fiber runs and give me a proper service loop with good cable dressing instead of the rats' nests I've had to fix recently.
  • by zenst ( 558964 )
    Being unable to lash out, and indeed darned if I would ever pay $250 for a datacentre-for-dummies kinda guide, given that every case has to be treated on its own merits.

    I wonder if they cover aspects like power phase balancing, given a lot of places have 3-phase and we all know how boxes move about, so the aspect of auditing the balancing across the 3 phases comes in. Why? Well, power costs, given you pay for the highest usage over the 3, and then there are the UPS aspects and resilience aspects.

    Oh and DR sites, if
  • The document TIA-842 was issued on April 1, 2005!
    I wouldn't take this doc too seriously!
  • There's currently no "gold standard" for data centers. This is actually a bad thing, because self-important corporate audit types want some sort of "this is OK" label they can slap on a facility (or worse, one to look for when selecting a facility). Right now they're all using SAS 70 as the standard certification to look for, which is ridiculous because SAS 70 is really more about bean counters and accounting practices than it is about data center facilities. Something actually designed for our indust
  • by Duncan3 ( 10537 ) on Wednesday August 31, 2005 @09:22AM (#13445401) Homepage
    Above sea level. Check.
    Somewhere the US/Chinese government won't be monitoring everything... still looking.
  • $250US for a farkin PDF?!


  • Having read through the draft of this standard, given to me by a vendor in preparation for a major infrastructure overhaul, I have to say that this document was a godsend in getting what needed to be done made possible.

    This standard isn't for the SMB or small colo facilities. This is more for the big corporate datacenters (my workplace is approx. 100,000 sq ft, with a 2000+ port SAN). These kinds of places don't blink an eye at $250 for a book. Of course, in places like this, a vendor would most likely give a
  • Seriously though, isn't $250 a bit much for an unproven document?

    Perhaps it's worth that, but I personally won't take a gamble that it is. I'll just keep doing things that work.

    Would have been nice to read it.
