Second Life for Server Components (ieee.org)
Scientists have developed a method to reuse components from decommissioned data center servers, potentially reducing the carbon footprint of cloud computing infrastructure.
The research team from Microsoft, Carnegie Mellon University and the University of Washington demonstrated that older RAM modules and solid-state drives can be safely repurposed in new server builds without compromising performance, according to papers presented at recent computer architecture conferences.
When combined with energy-efficient processors, the prototype servers achieved an 8% reduction in total carbon emissions during Azure cloud service testing. Researchers estimate the approach could cut global carbon emissions by up to 0.2% if widely adopted. The cloud computing industry currently accounts for 3% of global energy consumption and could represent 20% of emissions by 2030, according to computing experts. Most data centers, including Microsoft's Azure, typically replace servers every 3-5 years.
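As a rough sanity check on those figures, here is a back-of-envelope sketch using only the numbers quoted above (it conflates energy share with emissions share, so treat it as a ballpark, not a result from the papers):

```
# Back-of-envelope check: if cloud computing is ~3% of global energy/emissions
# and the approach trims ~8% off a cloud provider's total emissions,
# the global impact is roughly the product of the two.
cloud_share_of_global = 0.03   # "currently accounts for 3% of global energy consumption"
reduction_within_cloud = 0.08  # "8% reduction in total carbon emissions" in Azure testing

global_reduction = cloud_share_of_global * reduction_within_cloud
print(f"Estimated global reduction: {global_reduction:.2%}")  # ~0.24%, in line with "up to 0.2%"
```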
significantly for most workloads (Score:4, Insightful)
```
older RAM modules and solid-state drives can be safely repurposed in new server builds without compromising performance
```
We're not gonna pretend that faster memory and faster storage don't benefit some workloads.
But yes, often it doesn't matter and wasting old hardware is foolish.
Even if new hardware uses less energy while running, on top of the manufacturing energy and pollution costs, every dollar that goes into the purchase carried energy and pollution costs in its creation, and 'economists' usually set that to zero, bizarrely.
Some guy on here was bragging how he does software encryption, secure-wipes the drive, then puts a bullet through it.
Yay, more landfill that can't go into RoHS recycling streams. Or something.
A myopic focus on carbon is like 5% of the environmental story.
Re: (Score:2)
Also density, though that has slowed some. There was a time when I had a pile of 2GB RAM sticks on my desk. I was asked why we could not use them or sell them; it was because a system needed more memory than they could add up to, so their value dropped accordingly. I offered them up as keychains, but the boss did not smile.
Re: (Score:1)
My boss just gave away old servers to employees who wanted them, when they were barely even worth selling.
Re: (Score:3)
I couldn't even give away the ram.
Re: (Score:2)
was it registered? 'cause that's unusable in most non-server systems
Re: significantly for most workloads (Score:2)
Not all ECC is registered.
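If you're sorting through a pile of pulled DIMMs on a Linux box, the SMBIOS data will tell you which are registered. A rough sketch (assumes dmidecode is installed and you're running as root; the exact strings vary by vendor and BIOS):

```
import subprocess

# Dump SMBIOS memory-device records; requires root and the dmidecode tool.
out = subprocess.run(["dmidecode", "--type", "memory"],
                     capture_output=True, text=True, check=True).stdout

# The "Type Detail" field usually says "Registered (Buffered)" for RDIMMs
# and "Unbuffered (Unregistered)" for ordinary desktop DIMMs.
for line in out.splitlines():
    line = line.strip()
    if line.startswith("Type Detail:"):
        detail = line.split(":", 1)[1].strip()
        kind = ("registered (server-only on most desktop boards)"
                if "Registered" in detail
                else "unbuffered (fine for most desktops)")
        print(f"{detail} -> {kind}")
```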
Second Life??? (Score:5, Funny)
Re: (Score:2)
Yep. Here's [secondlife.com] a community/developer blog, with a developer post from six days ago:
One of the Three R's: reuse (Score:2, Interesting)
Scientists have developed a method
"Buying used hardware" just doesn't sound as sophisticated, but that's what it is.
Scientists have also discovered that the secondary market often has lower prices than retail.
sure, as long as they don't get faster before then (Score:3)
RAM usually doesn't get repurposed because there's usually a new standard by the time you're ready to build new servers.
SSDs get faster with every generation; they're putting them on PCIe 5.0 now, for example. HDDs don't always get faster, sometimes they only get denser, but SSDs are still speeding up consistently.
Sometimes you haven't got a new standard for RAM in the upgrade period, so yeah you could reuse that. But in 3-5 years, SSDs absolutely will have sped up. If it's just the OS on them then no big deal, hopefully. If you're storing data on them, and many are now doing that, then that's going to affect you.
What Microsoft discovered here is that for a refresh in a specific time scale, reusing those things didn't affect performance. It's not a general finding, it's specific to the time period studied.
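To put rough numbers on the SSD point, here is a quick sketch of theoretical per-direction ceilings for a typical x4 NVMe link across PCIe generations, using the commonly quoted ballpark per-lane rates after encoding overhead (real drives land well below these):

```
# Approximate usable bandwidth per lane, in GB/s, after encoding overhead.
# Ballpark figures, not vendor specs.
per_lane_gbps = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}
lanes = 4  # typical NVMe SSD link width

for gen, per_lane in per_lane_gbps.items():
    print(f"{gen} x{lanes}: ~{per_lane * lanes:.1f} GB/s theoretical ceiling")
# Roughly 3.9 -> 7.9 -> 15.8 GB/s: each generation doubles the ceiling,
# which is why a 3-5 year old SSD looks slow next to a current one.
```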
Why Is This News? (Score:1)
Virtualization and The Cloud exist for one reason (Score:2, Interesting)
Virtualization and The Cloud exist because most legacy applications don't need the full horsepower of a 386 to accomplish their goals. Of course a cloud workload can run on older hardware. They should run the hardware until it dies. Every new SaaS running on the cloud is just emulating a dBase III application, poorly, and charging you a monthly fee for the BS.
Re: (Score:3)
You sound like old man yelling at cloud. There are reasons why dBase III didn't survive to serve today's application needs. Hell, Ashton Tate didn't even introduce SQL until dBase IV, let alone have many of the features desirable by today's applications for their DB back-end. As someone who has written SaaS relying on only OSS software (who was very proficient in Advanced Revelation - a DB way ahead of its time in the 80s), I can assure you dBase III has nothing on even the most minute open source DB solution.
why not go for low hanging fruit- software? (Score:4, Insightful)
If we're talking about energy consumption and density/hardware upgrades... how about a focus on making software more efficient?
If software needs fewer clock cycles, or memory, or storage to run... you reduce the need for that hardware sprawl, increase workload and extend the lifecycle of current builds.
I know there are a few coders who take efficiency to the extreme... but the vast majority don't even know what I am talking about. "Compute is cheap"... was and is the mantra of all those coding bootcamps and most academic courses.
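A toy example of the kind of thing meant here (nothing to do with the paper, just an illustration): the same lookup done with an O(n*m) list scan versus an O(n+m) set, which is the cheap sort of fix that buys back hardware headroom:

```
import time

needles = list(range(10_000))
haystack = list(range(50_000))

# Naive version: list membership test inside a loop, O(n*m) comparisons.
start = time.perf_counter()
hits_slow = sum(1 for n in needles if n in haystack)
slow = time.perf_counter() - start

# Same result with a set: one data-structure change, orders of magnitude fewer cycles.
start = time.perf_counter()
haystack_set = set(haystack)
hits_fast = sum(1 for n in needles if n in haystack_set)
fast = time.perf_counter() - start

assert hits_slow == hits_fast
print(f"list scan: {slow:.3f}s  set lookup: {fast:.4f}s")
```

Multiply that kind of saving across a fleet and it translates directly into fewer servers doing the same work.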
Re: (Score:2)
More scary is this is how hardware is now done. I've worked on a sub $10 processor that had 3 AES engines, each slightly d
lowest hanging fruit - bios settings (Score:3)
I don't know of a single server configuration guide that doesn't say something about turning off power management in the BIOS. That one setting seems to reduce latency on the first hit, but it causes a huge increase in the power the system needs over time.
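Before blindly disabling power management, it's worth seeing what the OS is actually doing. A small read-only sketch that prints the cpufreq governor per core from sysfs on Linux (the paths exist wherever cpufreq is active; it only inspects, it doesn't change anything):

```
from pathlib import Path

# Each online CPU exposes its current frequency-scaling governor here when the
# cpufreq subsystem is active; "performance" trades power for latency,
# "powersave"/"schedutil" lean the other way.
cpu_root = Path("/sys/devices/system/cpu")
for gov_file in sorted(cpu_root.glob("cpu[0-9]*/cpufreq/scaling_governor")):
    cpu = gov_file.parent.parent.name
    print(cpu, gov_file.read_text().strip())
```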
My first take... (Score:1)
I laughed at the idea of someone using old hard drives and server boards to make flying dicks.
Yeah... (Score:4, Insightful)
All of my computers are in their second (or sometimes, higher) lives. Recycled computers still work, processor or memory "pulls" from eBay haven't failed me yet, and at that price, who cares? I'm typing this on a system running a Gigabyte motherboard I got off eBay, and which runs the current version of Linux Mint just fine.
I am perfectly willing to use the castoffs of those who must always have the "latest and greatest", or of companies that require their machines to be under manufacturer warranty. People "usually" take pretty good care of their work PCs, at least the people who have used the Dell Latitudes I've bought second hand.
Add an SSD to an older machine, install Linux, and something that lugs along under Win10 can be quite zippy when running Mint.
Old Server Would Help with my Aerodynamic Studies (Score:2)
Re: (Score:2)
... to do so, I would need 256GB for 5mm, or 512GB of memory for 2.5mm.
Sounds ripe for recoding to use swap on a PCIe-connected NVMe drive with high read/write speeds. These things are 4+ TB now, and the speeds are similar to old DDR2 RAM sticks. Is it at all feasible to revisit the code this way?
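One way to prototype that without touching system swap is to back the big array with a file on the NVMe drive via numpy's memmap and let the OS page it in on demand. A minimal sketch with hypothetical sizes and an assumed mount point; whether it's fast enough depends entirely on the solver's access pattern:

```
import numpy as np

# Back a large array with a file on the NVMe drive instead of holding it in RAM.
# The OS pages chunks in and out on demand, so sequential sweeps stay cheap while
# random access costs an NVMe read instead of a RAM hit.
shape = (8_192, 8_192, 64)              # hypothetical grid, ~32 GB of float64
grid = np.memmap("/mnt/nvme/grid.dat",  # assumed mount point on the fast drive
                 dtype=np.float64, mode="w+", shape=shape)

grid[:, :, 0] = 1.0      # writes go through the page cache to the file
grid.flush()             # force dirty pages out to the drive
```

A sequential sweep over such an array runs near the drive's streaming bandwidth; scattered random reads are where it falls well short of real RAM.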
Server lifecycle decisions are made for $ reasons (Score:2)