Buying New Commercial IT Hardware Isn't Always Worthwhile (Video)

Ben Blair is CTO of MarkITx, a company that brokers used commercial IT gear. This gives him an excellent overview of the marketplace -- not just what companies are willing to buy used, but also what they want to sell as they buy new (or newer) equipment. Ben's main talking point in this interview is that hardware has become so commoditized that, in a world where most enterprise software can be virtualized to run across multiple servers, it no longer matters whether you have the latest hardware technology; two older servers can often do the job of one new one, and for less money, too. So, he says, you should buy new hardware only when necessary, not just because of the "Ooh... shiny!" factor. (Alternate Video Link)

Robin Miller: This is Ben and he works for a company called MarkITx. Am I pronouncing it right, Ben?

Ben Blair: Absolutely, MarkITx, yeah.

Robin Miller: They're dealers of used enterprise level IT hardware, which means they have kind of an interesting view on what’s hot and what’s not, and what’s going to be hot and what people are buying and what they’re dumping.

So, Ben, what’s hot?

Ben Blair: We see mostly networking gear -- well, one small correction: we're a place to buy and sell. We don't buy and sell ourselves, so we help the buyer and seller come together. But like you said, we see all this order flow, and what's interesting -- so what's hot is mostly networking gear. We see a whole lot of Dell and HP servers as well, and a lot of other kinds of equipment, but that's sort of the long tail of things, right? There are thousands of things we see once or twice, but a whole lot of things. It's Cisco gear, Juniper, Arista, Aerohive, HP, Dell, IBM servers, kind of across the board.

Robin Miller: Well, either the last video or the one before that was about HP's new... no, they're not blade servers, they're calling them cartridges or something.

Ben Blair: Yeah.

Robin Miller: Basically they stick a whole bunch of single board servers in a chassis. I remember that as blade servers, but that’s because I’m old. So it’s obviously not that

Ben Blair: Heh. I mean, yeah, people are looking for new ways to get better power density and better weight density in their cabinets, and it makes a lot of sense. You raised an interesting question in the email (that led up to this interview), actually, which was what's driving a lot of these changes. The OEMs and the vendors, I think, would have you believe that it's new features. You know, okay, great, HP has got a new cartridge-based system, you can have higher density, and that's...

Robin Miller: Yeah.

Ben Blair: It improves the calculation incrementally, but think back to the mid-1990s and even into the early 2000s: the hardware really did drive things. You'd get the next generation of Intel chips and boy, if you were running a database with heavy load, you needed to upgrade. That was your path to better performance. Since then, in the web world, everything has moved to scale-out. You got so much more load you couldn't possibly fit it on one server no matter how awesome it was, right? So we had a hard software problem: "Let's make sure this can run on 4 or 5 or 10 or 100 or 10,000 servers," and once you solved that problem, that was really the thing that changed. It doesn't matter so much anymore that you have one server that's more powerful, or even an enclosure that's a little more power efficient. I mean, they're advertising, I don't know, 80-some-odd-percent power efficiency, which is fantastic. But it's not going to fundamentally change what you can do anymore. You already have to scale out, and then it's just a cost tradeoff. And this is why -- I'm not just trying to talk up what we're doing -- fundamentally it's the fact that software is eating the world that means hardware is becoming a commodity. So it doesn't matter which servers you're buying. HP wants you to believe that if I buy this server, it will reduce my cost; but buying cheaper servers would reduce my cost as well. I don't necessarily need fancier ones. As long as my software can run equally across all of them, because it's already scaled out, it's not a need thing. I don't need a server that's twice as good, right? I need twice as many servers, and whether they're more dense or less dense, those are all just tradeoffs.
As much as I'm a technologist and I want to be in charge of all that stuff, it's almost a finance decision. It's like, look, I just need 50 cores and 80 terabytes of storage and this much I/O bandwidth; exactly how that's delivered to me, I don't really care anymore.

Robin Miller: The funny thing is, one thing I hear made much of is power savings. I could see that being important to Google, to IBM, to Amazon. But the IT guys I know work at small and medium-sized businesses. You have 200 employees in your business, or 500, or 1,000. Let's say you have 150 desktops, 200 computing devices in a factory or warehouse -- some of them could be tablets, who knows -- and a bunch of servers, let's say 50 servers even. How much money are you going to save on electricity? $100 a year?

Ben Blair: Yeah, it's a good question. On a scale of 50 servers it may be more than that, but let me answer a slightly different question, one where I have more firsthand experience.

Robin Miller: Yeah.

Ben Blair: Now suppose you're in a co-lo facility, right? There the costs are more concrete: I'm paying $2,000 a month for a cabinet and however many amps of power go to that cabinet. So my power cost is fixed, and what I can do with that power is what matters. If I can get servers that are twice as efficient, I can put twice as many servers in that cabinet before I need to buy a whole new cabinet for another $2,000 a month or whatever. So there it can matter. But it's rarely a 2x factor; power-efficiency gains are often more like 20% or 30%, as long as you're not comparing against something 8 years old.

Robin Miller: But even if it's 2x, you're talking at most mid-thousands of dollars annually. It's not a world changer. It's the difference between buying a cheap used car or not buying one.

Ben Blair: Yeah, exactly, and it’s just trading off sort of operational dollars for dollars upfront; like your car example, I can buy a more expensive car today that saves me gas, but how long before that pays off?
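The tradeoff Ben describes -- more money upfront for lower operating costs -- is a simple payback-period calculation. Here is a minimal sketch; the function name and all of the prices, wattages, and electricity rates in it are made-up examples for illustration, not figures from the interview:

```python
# Payback period for paying more upfront for power-efficient hardware.
# All numbers below are hypothetical examples, not from the interview.

def payback_years(extra_upfront_cost, old_watts, new_watts, dollars_per_kwh=0.10):
    """Years until electricity savings repay the extra purchase price."""
    watts_saved = old_watts - new_watts
    kwh_saved_per_year = watts_saved * 24 * 365 / 1000  # W -> kWh per year
    savings_per_year = kwh_saved_per_year * dollars_per_kwh
    return extra_upfront_cost / savings_per_year

# Example: a new server costs $2,000 more but draws 100 W less, 24/7.
years = payback_years(2000, old_watts=350, new_watts=250)
print(f"{years:.1f} years to break even")  # 22.8 years to break even
```

With numbers in this range, the power savings alone take decades to repay even a modest price premium, which is the point both speakers circle: for a small shop, efficiency rarely justifies the purchase by itself.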

Robin Miller: Used equipment -- obviously you guys are brokering it, it's coming through your platform. Is this getting hot? Are you seeing volumes go up?

Ben Blair: Yeah, volumes are up really significantly. I mean, I'm sure you've experienced this, and probably everyone on Slashdot has a closet full of old stuff, right?

Robin Miller: Yeah.

Ben Blair: And this is true whether you're a small shop or a giant bank -- except instead of a closet it's a warehouse. It's hard to deal with this stuff, and as a technologist, getting into this was eye-opening for me. I always had the view that I want the thing that's going to help me do my job, the cool new thing. Then I get the next generation, and what do I do with the old stuff? I don't know. I don't want to throw it away -- that's really wrong -- and I've got all these other things to do, so I just put it in the back somewhere, right?

Robin Miller: Yeah.

Ben Blair: And that's a fine decision once or twice, but you do it over and over and pretty quickly you've got a warehouse. So it's crazy. The good thing for us is that it's a pretty easy conversation to have with someone: hey, all that stuff you think isn't worth anything -- maybe half or a third of it isn't worth much, and we can help you recycle it. But a surprising amount -- half, two-thirds, and if we're talking to financial institutions that refresh every year-and-a-half, maybe three-quarters -- of the stuff they have is perfectly good, and lots of people want to use it. We're not in that time anymore -- to go back to the late 1990s, early 2000s -- when corporate demand was growing fast and big hardware improvements kept coming. Software has changed that. So now, if you're a smart company, you're thinking like some of our customers. Some of our buyers are mid-size and small companies whose whole business is renting out hardware, right? They may buy it new most of the time, but the minute they have one customer running on it, and then the next one, it's used in a sense, right? And they'll run it for 4 or 5 years.

Robin Miller: And we know off-lease, a lot of people lease their computers.

Ben Blair: Yeah.

Robin Miller: And that’s where I first saw used computers, in a commercial and legal sense, was coming off-lease.

Ben Blair: Yeah.

Robin Miller: Like you said, you rent them, and the second you take it home, it’s used.

Ben Blair: Exactly.

Robin Miller: So why not get a used one?

Ben Blair: Even for buying one server or one switch, the economics are better if you buy used. But that's not going to change your world, right? Where it really matters is when you're buying 10 or 30 or 50, and then it can make a really huge difference. You're not talking about the Windows 95 revolution, where everything suddenly really was obsolete because you simply couldn't run the software you wanted on it. Now, if you're running a pizza box with two quad-cores instead of one, or four quad-cores, or Intel versus AMD -- these all make incremental differences, but they don't decide whether you can run a piece of software or not. And either way, if you're running any kind of sizable infrastructure, your software is already built to be distributed over dozens of machines. Twice as many machines with half the performance isn't an even tradeoff in every scenario, but it's close to it.

  • Duh (Score:5, Funny)

    by HornWumpus ( 783565 ) on Tuesday July 22, 2014 @04:21PM (#47510227)

    Used hardware vendor says rack space is your data center on Pentium 3s. News at 11.

    • Re:Duh (Score:5, Insightful)

      by tsa ( 15680 ) on Tuesday July 22, 2014 @04:32PM (#47510323) Homepage

      Electrical energy is also free, apparently.

      • The key is to know what the hell you're doing, and let the spectators watch the show. New or used, buy for what your customer needs - NOT for the free tickets Vendor X gives you as an incentive to buy their gear. Being a professional in this field, here are some pointers.
        - Contact your power company for virtualization incentives to upgrade all that old hardware.
        - Use free software that is commercially supported, like XenServer, QuantaStor, and Zimbra.
        - Stick with a vendor that produces quality gear like Dell

        • Re: (Score:3, Informative)

          by FuegoFuerte ( 247200 )

          If you're saying HP doesn't produce quality gear, you have apparently not used their servers. There's a reason they're one of very few top-tier server vendors, and it's because they do produce some great gear. I came from an all-HP shop, and I'm currently in an all-Dell shop. Both manufacturers have their strengths and weaknesses, but all things considered they're approximately equivalent.

          • You are correct. Both have their strengths and weaknesses, but in automated environments HP equipment seems to need constant babysitting. Seems like HP support is always showing up to do SOMETHING to a server related to drive firmware or cooling fans, while the Dell equipment just keeps running ad infinitum. HP support has only cost $300K this quarter for the aforementioned issues, and caused downsizing in a department that could use a few more bright staff members. Cooling fans and drive firmware

            • by Anonymous Coward

              HP and Dell and IBM do not make hard drives. If HP is updating drive firmware and Dell isn't, for likely the same model drives wrapped in custom sleds and (only sometimes) with certain operational parameters tweaked, you might be surprised to later discover Dell drives "failing" when they are actually just fine except for a Seagate/Hitachi/etc firmware bug, or worse yet, subtle, infrequent, silent data corruption that can go on for months before being discovered when something mysteriously fails to compile

          • by dbIII ( 701233 )

            If you're saying HP doesn't produce quality gear, you have apparently not used their servers

            They also produce such crap that they have been fined for false and misleading conduct in relation to the sale of computer products (Australia's ACCC). It's difficult for the buyer to determine what is top notch HP gear and what is not based on what HP salesfolk are spouting.

            in an all-Dell shop

            Dell used to be mostly ASUS until ASUS went it alone. There's plenty of white-box vendors that are very good in certain segments

          • HP's weakness is their prices.

      • by sjames ( 1099 )

        I picked up a Sunfire v20 for $20. I would have to run it for 8 years non-stop for the electricity cost to add up to the cost of a new more efficient machine of equivalent capability.

        • Re:Duh (Score:4, Interesting)

          by Penguinisto ( 415985 ) on Tuesday July 22, 2014 @05:37PM (#47510787) Journal

          A cheapie SunFire v200/210 will run like a tank, but you'll be crippled by the server's top speed, and they do put out the heat if you push up the load average (and HVAC costs should always be factored in, yo.)

          You'll also need to buy a lot of those pizza boxes to make up for the processing power that you can find in a box half its age, let alone the newer iron.

          Sometimes you have to run the old stuff (I work in an environment where we have testbed boxes, and SunFires are a part of that, along with ancient RS/6000 gear, PA-RISC HPUX gear, etc.) I can tell you right now that the old stuff cranks out a lot more heat (and in many cases eats a lot more rackspace) than the equivalent horsepower found in just a handful of new HP DL-360's.

          • by sjames ( 1099 )

            It is used only for administration: serial console for a few devices, crunching some log files, etc. It used to be a backup mail server as well. All well within its capabilities. It isn't likely to run at high load very often. The "runs like a tank" feature is its primary reason to be. Since it is the machine used to diagnose problems, it's helpful that it is unlikely to be the machine with a problem.

            Sometimes, old used equipment is exactly the right answer, sometimes it's a terrible idea. The production serve

          • You'll also need to buy a lot of those pizza boxes to make up for the processing power that you can find in a box half its age, let alone the newer iron.

            It entirely depends on what you are doing with it. If the task is not CPU bound on an old box you don't need a lot of them.
            I've got one old sparc box for occasional use for some legacy software from 1996 and 2002 - it flies on a machine from around 2008. Another has a pile of old tape drives of various types hooked up to it, once again, for occasional use

            • The only gain in either situation from replacing them is theoretically increasing longevity. Neither case lends itself to a virtual machine unless the thing running that VM has a sparc processor, in which case there's no point for a VM.

              Well, not entirely "no point"... (and I didn't even have to bring up zones ;) )

              • by dbIII ( 701233 )
                There would be a point if I had something else a new sparc could be doing. Until then two old bits of kit with almost zero resale value will keep things going, with no real problems unless both die at once, and even then the original SparcStation10 the original software runs on still turns on but is slowwwww.
                A sparc VM on x86 that actually runs sparc solaris would be nice, and apparently such things were seen in the wild in the past but are unavailable now.
        • Sunfire v20 has a 465w PSU, so figure about 350w under typical usage. Once you figure power and cooling in a typical datacenter environment, cost hits somewhere around $2,900/year (at $25/watt over a 3-year lifespan). So, over 8 years, you're looking at $23,200 for that old Sunfire. I find it hard to believe your new more efficient machine of equivalent capability will cost you nearly $23,000.

          Or, you're running it in your mother's basement where things like power and cooling aren't an issue.
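The arithmetic in the comment above can be checked directly. A quick sketch using the comment's own figures (350 W typical draw, $25 per watt over a 3-year lifespan); the small differences from the quoted $2,900/year and $23,200 come from rounding:

```python
# Reproduce the TCO estimate from the comment above:
# 350 W typical draw at $25 per watt over a 3-year datacenter lifespan.
watts = 350
cost_per_watt_3yr = 25.0

annual_cost = watts * cost_per_watt_3yr / 3   # roughly $2,900/year
eight_year_cost = annual_cost * 8             # roughly $23,300 over 8 years

print(round(annual_cost), round(eight_year_cost))  # 2917 23333
```

The conclusion survives the rounding either way: at typical datacenter power-and-cooling rates, eight years of a 350 W box costs far more than most replacement hardware.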

          • by sjames ( 1099 )

            Nice way to toss in an insult just to prove what a bright light you aren't.

            Were you a brighter bulb, you would realize that PS rating and actual consumption often have little to do with each other. In fact it doesn't draw half of what you think it does and electricity doesn't cost as much as you think where I have the box.

            Meanwhile, who said my usage was typical? Certainly not me.

          • so figure about 350w under typical usage

            It doesn't work like that.
            To use a car analogy that would be like assuming each car is moving ten tons all the time just because the motor can do it.

            • It depends on the device. Most manufacturers don't drastically over-spec their PSUs for a purpose-built server, because to do so is highly inefficient. In practice, most enterprise-class devices will use somewhere between 65-80% of their max PSU rating under load. In this case, that's somewhere between 302 and 372 Watts, so I settled on a nice even number sort of in the middle. Since the spec sheet I found only listed max power draw and not typical, I used a reasonable estimate based on typical enterprise

              • by dbIII ( 701233 )
                The specs are for peak consumption for whatever can normally be expected to fit in the box. That design criteria can mean speccing it for a couple of dozen power hungry disks even if they are not in 95% of the servers of that type.

                the point is the same; even if the server only used 200W

                With respect, basing a precise number on a wild guess (eg. $13k vs $10k or $1k) is pointless numerology even if it is a common bad habit.
                In general terms you have a point, but in specifics you are probably out by an order of magnitude

              • by dbIII ( 701233 )

                In practice, most enterprise-class devices will use somewhere between 65-80% of their max PSU rating under load

                Where on earth did you get that from? Also why assume stuff is running at full capacity 24/7/365 anyway? I've got some stuff that's fully CPU bound for weeks at a time (geophysical software), but even then it adds up to only about 1/8 of a year. Other places with less time sensitive stuff can queue things up and get 100% usage out of the resources they have, but it's not common outside of specific

            • Also, your car analogy fails, because a computer is nothing like a car.

              • by dbIII ( 701233 )
                The analogy is fine because it describes capability versus actual usage. Reading anything more than that into it is ridiculous - it's an analogy.

                In this case the PSU is capable of supplying a lot more power than a fully fitted out server can consume at maximum load, and then some. If the server doesn't have a dozen disks it's likely to still have the same model of PSU as the one that does, and even that one with a dozen disks is not going to be running them all at maximum power consumption all of the time
      • Electrical energy is also free, apparently.

        So is HVAC - go figure.

    • For your primary servers, power is a very important cost consideration of course.
      On the other hand, I buy Raritan 16-port IP KVMs that are BETTER than their new models at 90% lower cost. I use them a few times per year. They're better than the new ones because they have a perfectly good web interface I can use from my phone to take care of a server that is down, rather than having to drive to the office to use the proprietary control software for the new ones.

      Similarly, I use some very popular 16-bay storage

      • The point being that the capital depreciation can be offset, making the TCO lower. It is an important factor in the math. You want to compare the TCO of your current kit vs. the TCO of the candidate kit, and a $4K swing might change the winner.

  • by Anonymous Coward

    A guy who is the CTO of a company that deals in used hardware tries to urge people to buy used servers instead of new ones.

    • And the sad part is some CFO will see the video clip, override the CIO's IT Plan for updating their hardware infrastructure and then complain about a lack of 110% uptime

      • And the sad part is some CFO will see the video clip, override the CIO's IT Plan for updating their hardware infrastructure and then complain about a lack of 110% uptime

        Well, I'm burning in a refurbed Dell server right now. But I only demand 100% uptime, so I'm happy.

    • Well, it might make sense for your dev/test infrastructure, if you need to build a technical copy/testbed.
  • What about power? (Score:4, Informative)

    by MetalliQaZ ( 539913 ) on Tuesday July 22, 2014 @04:25PM (#47510263)

    I can't see the video but in the summary he mentions using two old servers to do the job of one new server. I appreciate the recycling, but it sounds like he is talking processing or I/O equivalence, and usually it is power that is the dominating factor in data center effectiveness. Are two servers really cheaper than one when you factor in electricity, cooling, and rack space?

    • by MightyMartian ( 840721 ) on Tuesday July 22, 2014 @04:50PM (#47510451) Journal

      For some tasks I can understand recycling. I use older hardware to build routers, anti-spam gateways, VPN appliances and the like. Normally these are fairly low-cycle tasks, at least for smaller offices. But I've learned my lesson about using older hardware in mission critical applications. I've set up custom routers that worked just great, until the motherboards popped a cap, and then they're down, and unless you've got spares sitting around, you're in for some misery.

      • by afidel ( 530433 )

        This is why we've got a virtualization first strategy, VMWare HA makes sure even if you lose a box downtime is minimal (and for even more fun use Fault Tolerance and so long as your switches are properly configured you lose nothing since the two VMs run in lock step)

      • that's why you buy Telco grade cisco gear
      • Yep. Cost of new gear < cost of maintenance + downtime.

        As equipment ages, failure probability increases. A power supply in the process of failing isn't always easy to identify, as starvation to components can cause odd problems that don't look like a power supply failure. You have to troubleshoot, and it's harder with old equipment because sometimes the standards have moved on and getting quick replacement components is not possible.

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Posting AC as my opinion is mine alone:

      Server rooms don't magically expand either. That is why the HP Moonshot, which isn't perfect, is getting a lot of attention. It is far cheaper to buy a dense rack unit than to buy another building, add some kilo-amps of 208-240 VAC UPS and PDUs, as well as the CRACs they require to move the heat out of the building.

      On a microcosm, yes, one can run an old single-core P3 at home... but why? A newer machine is far more power and heat efficient, and likely runs an OS that

  • by Anonymous Coward on Tuesday July 22, 2014 @04:26PM (#47510279)

    ...on an "upgrade" cycle. All equipment with red LEDs needs to be replaced with equipment with blue LEDs, at least on the front face of the equipment.

    The CEO toured the data center recently and wanted to see blue LEDs on everything.

    • Blue LEDs are so last decade.
      Everything is white LEDs now.

    • by scsirob ( 246572 )

      I used to work for a hardware vendor who sold equipment to IBM. IBM demanded that all red power LEDs be replaced with green ones. IBM users were used to seeing red LEDs only when there was a fault with the equipment.

      Bottom line: Sometimes an LED upgrade cycle makes sense.

  • the guy whose whole business model is dependent on companies buying used hardware.
  • So, he says, you should make sure you buy new hardware only when necessary, not just because of the "Ooh... shiny!" factor"

    What's new about this advice? Was it not as useful and applicable 50, 100, and 1000 years ago?

  • Slashvertisement? (Score:5, Insightful)

    by nine-times ( 778537 ) <> on Tuesday July 22, 2014 @04:34PM (#47510347) Homepage

    Guy who sells used computer hardware claims that buying new computer hardware is a bad idea, and that you should buy used gear instead. News at 11.

    Not that what this guy is saying is wrong, but there are other unaddressed issues. He covers things like "power savings", but not the much more important issue of buying an unknown piece of hardware from an unknown vendor, without a warranty. Aside from that, sometimes there are issues of physical constraints -- like I have limited space, limited ventilation, and one UPS to supply power. Do I want to buy 5 servers, or one powerful one?

    Also, it's not true that hardware isn't advancing. In the past few years, USB has gotten much faster, virtualization support has improved, drives and drive interfaces have gotten faster, etc.

    And sometimes, buying "new" is more about getting a known quantity with support, rather than wagering on a crap-shoot.

    • by Anonymous Coward on Tuesday July 22, 2014 @04:47PM (#47510429)

      One new server crammed with RAM, with a support contract, and with readily available power supplies is preferable by FAR to me and my organization versus 6 old units. Especially considering per-processor licensing fees for Windows and VMWare.

      • by afidel ( 530433 ) on Tuesday July 22, 2014 @05:18PM (#47510657)

        Amen to this, we run ~400 VM's on 14 hosts, using less than 1/3rd the power we did when we ran 160-180 physical boxes and everything is easier to manage, new deployments take minutes instead of weeks. We've saved a few million by not needing to grow our datacenter, probably over a million on Microsoft licensing, and made both my staff and my customers happier. There's no way I'd run things on old physical boxes just to save a few dollars on capital expenses.

      • We consolidated about 20ish old servers (and added new systems) in to two Dell R720xds that are VM hypervisors. Not only does this save on power n' cooling but it is way faster, more reliable, and flexible. It is much easier and faster to rebuild and stand up a VM, you can snapshot them before making changes, if we need to reboot the hypervisor or update firmware we can migrate VMs over to the other host so there's no downtime. Plus less time is wasted on admining them since there are less systems, and they

      • by Anonymous Coward

        Fuck Windows and VMWare, have you seen Oracle's utterly fuckwit virtualisation licensing model?

        Stick an Oracle database on just one VM and you have to licence the entire fucking cluster. Total lunacy, and it's costing them sales.

        Posting anonymously to dissociate this post from my employer and their Oracle relationships.

      • That is also a very good point. Licensing fees are often per-processor or per-machine. If I buy 20 old servers and want to buy Windows Server licensing to go with it, I have to buy a separate version of Windows Standard for each. If I buy a single new, extremely powerful server, I might be able to set up 20 virtual servers, and only have to buy 1 copy of Windows Datacenter. And that's just talking about the OS.
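The licensing point above reduces to simple arithmetic. A minimal sketch; the dollar amounts are illustrative placeholders, not actual Microsoft prices:

```python
# Compare OS licensing for 20 workloads: 20 physical servers that each
# need a Windows Server Standard license, vs. one big host running 20 VMs
# under a single Datacenter license. Prices below are assumed placeholders.
STANDARD_PER_SERVER = 900   # hypothetical price per Standard license
DATACENTER_PER_HOST = 6000  # hypothetical price, unlimited VMs per host

physical_total = 20 * STANDARD_PER_SERVER    # 20 separate licenses
virtualized_total = 1 * DATACENTER_PER_HOST  # one license covers all VMs

print(physical_total, virtualized_total)  # 18000 6000
```

Whatever the real prices, consolidation wins on licensing whenever one Datacenter license costs less than N Standard licenses for the N servers being consolidated.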

    • What I mean is, most businesses keep everything of importance on their servers. Think of the total salaries paid to all of the employees who spend time in front of computers each day. Everything you pay them to do is, essentially, tied to those servers. If the server runs a hosted application slowly, then all of your people using that application are forced to work more slowly -- making them less efficient. If a server crashes and people lose access to information until it's brought back up -- even

    • In the past few years, USB has gotten much faster

      I agree with most of your post, but this is simply false. USB 3.0 is a completely new interface, bolted on USB 1/2 to make it seem like a seamless transition.

      I used to think USB is all about selling a new interface with an old name. For example, in a few years we'd have a CPU socket called USB 14.0, but hey, at least it's USB. Now I have a USB 3.0 hard drive, and the mini plug/socket in particular shows how it's just USB 1/2 + 3.0 bolted together. So my new future prediction is USB 17.0 where you have th

      • by dave562 ( 969951 )

        What are you talking about? USB 3.0 is significantly faster than USB 2.0. I work in a business where we have to transfer data on physical media due to the volumes involved. We ship hundreds of drives a month. Our clients refuse to accept anything other than USB 3.0 anymore because the previous generation is too slow.

        • Sometimes.
          A USB3 port, if you plug a USB3 hub into it and 2 USB2 devices into that hub, will go just as fast as, and no faster than, a USB2 hub.
          Because that's what it is.
          There are no transaction translators at all.
          None are even specced as optional, for high-end vendors to aspire to.

        • I'm sorry if you missed my point. I agree that USB 3.0 is faster than USB 2.0 — in the same way that PCI Express is faster than PCI. Does that mean PCI has gotten much faster, or is PCIe a new interface that has replaced PCI?
      • I agree with most of your post, but this is simply false. USB 3.0 is a completely new interface, bolted on USB 1/2 to make it seem like a seamless transition.

        I've been wondering about that -- Since a USB 3 port has separate pins for ultra-speed and high-speed, shouldn't I be able to plug two devices into the same port?

    • by tlhIngan ( 30335 )

      Not what this guy is saying is wrong, but there are other unaddressed issues. They cover issues like "power savings", but not the much more important issue of buying an unknown piece of hardware from an unknown vendor, without a warranty. Aside from that, sometimes there are issues of physical constraints-- like I have limited space, limited ventilation, and one UPS to supply power. Do I want to buy 5 servers, or one powerful one? ...

      And sometimes, buying "new" is more about getting a known quantity with su

  • by Anonymous Coward

    Oh, great, it's hard enough to replace obsolete equipment as it is. Once management sees this, they'll wonder why we can't keep that old Dell server going a few more years - after all, other companies are buying the same server from this guy. IT will never get another upgrade approved ever again if this gets out. Forget the cost savings of lower-power equipment, and the massive throughput increases in newer drives.

    There is one business case for buying old - when a production machine is gone and you can't get

    • Oh, great, it's hard enough to replace obsolete equipment as it is. Once management sees this, they'll wonder why we can't keep that old Dell server going a few more years - after all, other companies are buying the same server for this guy. IT will never get another upgrade approved ever again if this gets out. Forget the cost savings of lower-power equipment, and the massive throughput increases in newer drives.

      Lucky you getting to keep that old Dell server... We have to keep that old 1983 PDP-11 going.

      • We have to keep that old 1983 PDP-11 going.

        eBay the PDP11 and buy a dozen Pentium4s with the money. Assuming your PDP11 services 12 users, the performance will be similar, and the electric bill less than half. You may need an extra P4 to act as a tape server, a couple more to act as disk servers, and some others as terminal servers...

        OK, maybe the PDP11 IS more power efficient!

  • by Anonymous Coward

    Little Dell running SQL 2000 on Windows 2000. A whole gigabyte of RAM. Made it through 12 years without an issue. I've been told it's off living on a farm now.

  • Do you actually feel embarrassed about having posted this, or did you do it willingly?
    • by Roblimo ( 357 )

      Thousands of viewers, 10 or 20 complaints. That seems like a pretty good ratio to me.

      And, of course, if you have good ideas for video interviewees, why don't you send them to me instead of complaining? Please make sure to include contact info. My email is robinATroblimo-com.


  • I had to sign up just for this. Is this a press release of sorts?
  • by dave562 ( 969951 ) on Tuesday July 22, 2014 @05:16PM (#47510641) Journal

    I swear we saw an identical article a few months ago.

    Go away.

    We do not want your advertisements. Nobody wants your old gear. I pay you guys to haul it away, not sell it back to me on Slashdot.

  • by Tomsk70 ( 984457 ) on Tuesday July 22, 2014 @05:26PM (#47510715)

    Next Week: Linux Rubbish at Server Tasks, Says Microsoft Reseller

  • by Tsiangkun ( 746511 ) on Tuesday July 22, 2014 @05:41PM (#47510813) Homepage
    Roblimo has thick skin. He said so in March when he posted an advertisement for this same service disguised as a story. This still looks like front page placement of an ad for a friend's company. How much does it cost for front page placement?
  • by Anonymous Coward

    Not everyone can buy used hardware, but for those who can, doing so is a huge money saver ($75K worth of hardware six years ago is now selling for thousands). Case in point: we bought a fully stocked 16-blade system for about $4K with Quad-Core Xeons and 4GB of RAM. People might say that is crap, but not when what you're replacing is already crap of the crap, and upgrades are cheap as well. When factoring in clustering, etc., running on used equipment is hardly risky. Support-wise, this stuff usually has sof

  • by thatkid_2002 ( 1529917 ) on Tuesday July 22, 2014 @10:50PM (#47512735)

    When you have SMB type customers then refurbished hardware is great value. They're usually not willing to fork out for a new server. When there is refurbished hardware for a fraction of the price -- still new enough to be reasonably efficient and to add an HP Care Pack or whatever -- why not? Having hardware that is up to scratch is both good for you and good for your customer. Out of dozens of customers of this nature we've never been bitten (and yes, the customer knows the server is refurb + Care Pack).

    It's really great when you get a strong business relationship going with your local refurb business. Getting the pick of the litter really gets your geek juices flowing!

    We did have a reasonably strong virtualization setup too, and that helps as the article suggests.

    The laptop I am typing this on right now is a refurb model that I got for an excellent price a year and a half ago. It's probably the best laptop I've ever had including brand new ones.

  • by George_Ou ( 849225 ) on Tuesday July 22, 2014 @11:16PM (#47512869)
    At one point the interviewer asks "how much money you gonna save on electricity for 50 computers, $50/year"? It's clear he's never even attempted to do the math. An extra 100 watts running around the clock in California is going to cost $314.91 per year at the typical above-baseline rate of 35.949 cents per kWh. That's just the savings on one computer system, much less 50 computers.
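The arithmetic behind that figure checks out. A back-of-the-envelope version, assuming (as the commenter implies) that the extra load runs 24/7 at the above-baseline rate:

```python
# Verify the electricity claim above: 100 extra watts, 24/7, at the
# quoted above-baseline California rate of 35.949 cents per kWh.
extra_watts = 100
hours_per_year = 24 * 365            # 8760 hours
rate_per_kwh = 0.35949               # USD per kWh

kwh_per_year = extra_watts / 1000 * hours_per_year   # 876 kWh
cost_one_server = kwh_per_year * rate_per_kwh
cost_fifty = cost_one_server * 50

print(f"One server:    ${cost_one_server:,.2f}/year")   # ~$314.91
print(f"Fifty servers: ${cost_fifty:,.2f}/year")
```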
  • This guy is ignoring two very important factors here involved in purchasing of IT hardware in any enterprise.

    - Hardware is a capital cost whose depreciation can be written off every year on your corporate income tax. After 4 years or so, your hardware actually now has near zero actual capital value to the company. Thus, as long as a company believes they will be around to see the depreciation of the asset fully written down, it is of little advantage to sacrifice performance in order to save some inconseque
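The depreciation point above can be illustrated with simple straight-line depreciation over four years. This is only a sketch; actual schedules and write-off rules vary by jurisdiction and accounting method.

```python
# Straight-line depreciation sketch: after the asset's useful life,
# its book value to the company is zero.
def book_value(cost: float, years_elapsed: int, life_years: int = 4) -> float:
    """Remaining book value after straight-line depreciation."""
    annual = cost / life_years
    return max(cost - annual * years_elapsed, 0.0)

# A $10,000 server written down over 4 years:
for year in range(5):
    print(year, book_value(10_000, year))
# By year 4 the book value is $0, per the argument above.
```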

You can measure a programmer's perspective by noting his attitude on the continuing viability of FORTRAN. -- Alan Perlis