Server Runs Continuously For 24 Years (computerworld.com) 137
In 1993 a Stratus server was booted up by an IT application architect -- and it's still running.
An anonymous reader writes:
"It never shut down on its own because of a fault it couldn't handle," says Phil Hogan, who's maintained the server for 24 years. That's what happens when you include redundant components. "Over the years, disk drives, power supplies and some other components have been replaced but Hogan estimates that close to 80% of the system is original," according to Computerworld.
There's no service contract -- he maintains the server with third-party vendors rather than going back to the manufacturer, who says they "probably" still have the parts in stock. And while he believes the server's proprietary operating system hasn't been updated in 15 years, Hogan says "It's been extremely stable."
The server will finally be retired in April, and while the manufacturer says there are more Stratus servers that have been running for at least 20 years -- this one seems to be the oldest.
Not "continuously" in the geek sense of the word (Score:5, Insightful)
"It never shut down on its own because of a fault it couldn't handle," said Hogan. "I can't even think of an instance where we had an unplanned shutdown," he said.
This isn't a server that has had an OS uptime of 24 years. This is a computer that they are still using after 24 years that "hasn't crashed". So what. The Amiga still being used from the 80s was a bigger deal. This article is really just an ad for Stratus.
Re: (Score:1)
This article is really just an ad for Stratus.
Or Phil Hogan's résumé
Re: Not "continuously" in the geek sense of the wo (Score:4, Funny)
Definitely not Hogan's resume.
According to his boss Wilhelm Klink, no one has ever successfully left the company.
Re: (Score:1)
Definitely not Hogan's resume.
According to his boss Wilhelm Klink, no one has ever successfully left the company.
I get your joke, but you just confused a large number of young people. Who all need to get off my lawn.
Re: (Score:2)
Definitely not Hogan's resume.
According to his boss Wilhelm Klink, no one has ever successfully left the company.
alive.
Re: Not "continuously" in the geek sense of the w (Score:1)
Don't try to "add to the joke" if you don't know what you're talking about.
On the show, the POWs were able to come & go as they pleased due to Klink's incompetence.
Re: Not "continuously" in the geek sense of the wo (Score:5, Interesting)
I used to work on Stratus servers, and I think the company was purchased by IBM in the late 90s.
For each running component in the system, there are three physical instances. They use a voting system to drop any component that disagrees, whether in RAM contents or the outcome of an instruction. In the three years I dealt with them, I never saw a system failure, and the only outages were caused by planned system upgrades (OS stuff). All of the hardware was hot-swappable. (There's a rough sketch of the voting idea just below.)
These were multimillion dollar machines that basically had the CPU performance of a couple of 68000 CPUs.
I personally witnessed the takedown of a Novell 2.x file server which had a 16-year uptime. This was for a school system, and they had forgotten where the file server was: stuffed in the back of a janitorial closet, and dust covered. That wasn't any sort of fancy hardware, but an old Micro Channel PC.
If you don't need to patch them, computers run for a long time.
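As a rough illustration of the voting idea (a Python sketch of my own, not Stratus's actual implementation), 2-out-of-3 majority voting over redundant results looks something like this:

from collections import Counter

def vote(results):
    """Majority-vote over redundant module outputs (2-out-of-3 style).

    Returns the agreed value plus the indices of dissenting modules,
    which a fault-tolerant system would take offline for replacement.
    """
    winner, count = Counter(results).most_common(1)[0]
    if count < 2:
        raise RuntimeError("no majority: unrecoverable fault")
    dissenters = [i for i, r in enumerate(results) if r != winner]
    return winner, dissenters

# Example: the third instance returns a corrupted value and is outvoted.
value, bad = vote([42, 42, 41])
print(value, bad)  # 42 [2]

The point is that a single disagreeing instance never changes the answer; it just gets flagged and dropped, which is how a component fault avoids becoming an outage.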
Re: (Score:2)
IBM was a reseller for Stratus but Stratus is still an independent company.
Re: (Score:1)
I had an NT 3.51 server that had an uptime of 8+ years. I was on a project to convert all of the machines from Token Ring to Ethernet, and I saved this one for last just because I couldn't bear to be the one to take it down. With the NT 4.0 servers we had, we were lucky to get an uptime of 30 days.
Re: (Score:2)
When I managed NT 4 SP6 servers, as soon as Task Manager showed around 500 idle hours it was time for a reboot, because magical shit would start happening.
Re: (Score:2)
I personally witnessed the takedown of a Novell 2.x file server which had a 16-year uptime. This was for a school system, and they had forgotten where the file server was: stuffed in the back of a janitorial closet, and dust covered.
Was there a sign on the door saying "Beware of the Leopard?"
Re: Not "continuously" in the geek sense of the wo (Score:4, Informative)
I used to work at Stratus. Most components were duplicated (disks, IO boards, power supplies). Everything was hot-swappable. CPUs were duplicated *pairs*. Each pair ran in lockstep, instruction by instruction, and results were compared within a pair. If the CPUs within one pair disagreed, but the CPUs within the other pair agreed, the first pair was taken offline and the second pair continued processing. (A toy sketch of this compare-and-failover logic appears below, after this comment.)
Each pair of CPUs (and their associated memory) was on a separate board. A faulty board would light a red light, and the admin could pull that faulty board with the system running. A replacement board could be installed, again with the system running transactions. When the new board was installed, a process would begin of synchronizing its memory with the contents of the memory of the good board. This was done with the system running (processing bank transactions, for example), so the bus bandwidth between the boards had to be fast enough to handle the rate at which the memory contents on the good board were changing. At a certain point the memory state of the two boards would be in sync and the new board would start processing, again in lockstep with the board that had been running all along. The repair process was so easy that one engineering director there had what he called the "mom test": he'd have his mom come in and see if she could fix a system that had been forced to throw a fault. Red light? Pull that board out and put a new one in. The new board's lights went red-yellow-green (as memory was brought into sync), and you're running fault-tolerant again. Easy peasy. (When boards failed, they'd "phone home" and Stratus tech support would often know there was a fault before the customer did. They'd ship a replacement part overnight to the site with the bad part. Anyone who worked at Stratus in those days knew the story of the FedEx driver doing a repair for a customer.)
OS upgrades were done by un-synchronizing the OS drives and upgrading one at a time: one would be taken offline for an OS upgrade. The system would have to go through a restart to run on the upgraded OS drive (that would be done in a planned maintenance window), but once it was running, the second OS drive would be mirrored to the running OS drive, at which point the disks were redundant again. The unplanned downtime was zero, and the planned downtime was minimized.
It became a challenge for Stratus when CPUs became nondeterministic (instructions wouldn't necessarily execute in exactly the same order, making lockstep processing a real problem). At least one CPU architecture transition was driven by that issue. And clearing the heat from four CPUs in a small space was a thermal challenge. But they were reliable beasts, expensive enough that they were only used for workloads involving real money (e.g., financial transactions). Back when the founder (Bill Foster) was still CEO, it was a great place to work. When he left and the MBAs moved in, the company got sold to Ascend Communications, then Ascend got bought by Lucent, the non-telecom part of the business was spun off (as Stratus Technologies), and yes, they are still in business.
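Here is the toy sketch promised above (my own Python illustration; the real machines do this comparison in hardware, not software): each board is a lockstep pair that self-checks, and the system keeps running on whichever board stays self-consistent.

# Toy model of duplexed lockstep pairs (illustration only, not Stratus code).
# Each board holds two CPUs running the same instruction stream; a board
# whose CPUs disagree flags itself faulty, and the partner board carries on.

def run_board(cpu_a, cpu_b, work):
    """Run the same work on both CPUs of a board and self-check."""
    result_a, result_b = cpu_a(work), cpu_b(work)
    return result_a, result_a == result_b  # (result, board is healthy)

def run_system(board1, board2, work):
    results = [run_board(a, b, work) for (a, b) in (board1, board2)]
    good = [r for r, healthy in results if healthy]
    if not good:
        raise RuntimeError("both boards faulted: system down")
    # With at least one self-consistent board, processing continues; a
    # faulty board would light its red light and get hot-swapped.
    return good[0]

good_cpu = lambda w: w * 2
flaky_cpu = lambda w: w * 2 + 1  # simulates a corrupted result on one CPU

print(run_system((good_cpu, good_cpu), (good_cpu, flaky_cpu), 21))  # 42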
Re: (Score:2)
I personally witnessed the takedown of a Novell 2.x file server which had a 16-year uptime. This was for a school system, and they had forgotten where the file server was: stuffed in the back of a janitorial closet, and dust covered.
I'll go one better on that... I know of one that was up for a couple of decades and finally failed, and when they went looking for it, they had to break through some drywall into an odd corner of a closet where it had accidentally been sealed off by construction contractors.
Re: (Score:2)
I'll go one better on that... I know of one that was up for a couple of decades and finally failed, and when they went looking for it, they had to break through some drywall into an odd corner of a closet where it had accidentally been sealed off by construction contractors.
You mean the one everyone read about 16 years ago [theregister.co.uk]
Slashdot Article [slashdot.org]
Re: (Score:2)
Given that the version of the OS that supports that machine hasn't been updated for more than a decade, that machine probably has been running continuously for a lot more than 10 years.
If it was bought in 1993, the CPUs were probably PA-RISC, not 68K. I can't tell for sure, because the picture was not of a Stratus machine.
The current generation of CPUs is functionally dual-socket Xeons in 4U rack enclosures. The heat envelope allows for up to 24 cores. Operating systems supported are VOS (the original propr
Re: (Score:2)
This isn't a server that has had an OS uptime of 24 years. This is a computer that they are still using after 24 years that "hasn't crashed".
Not even that. My understanding is, when Stratus fails over processors it is just a quiet reboot. Didn't turn off the power, yay.
In the good old days the mainframe boys would hot-swap mainframes by running the new processor in lock-step with the old one, even across vendors (It is rumoured that Amdahl made some sales this way.) The Voyager [wikipedia.org] computers have been running for 40 years.
Re: (Score:2)
OK, digging into that... Stratus is doing some weird and cool stuff. Not running processors in cycle-for-cycle lockstep like the mainframe guys (at least, not with their x86 offerings, where Intel would never permit the level of systems integration that would be required) but at the memory-access level instead, as in: if two processors are running the same code on the same data, then they must access memory with the same pattern. Hard to see how that could be made to work without some kind of hypervisor, whi
What? (Score:2, Insightful)
'"I can't even think of an instance where we had an unplanned shutdown," he said.'
Um... I should hope so, since if it had, it wouldn't have had 24 years of uptime. And no photo of the output of "uptime"? I'm starting to think that they DID shut it down/reboot it many times, but somehow ignored this in the "uptime". Nonsensical article.
Re:What? (Score:4, Insightful)
if it had, it wouldn't have had 24 years of uptime.
24 years of "uptime" doesn't mean no unplanned shutdowns. It means no shutdowns of any kind. This machine has not done that, and certainly has not been "running continuously".
Re:What? (Score:4, Informative)
http://www.computerworld.com/article/2550661/data-center/this-server-outlasts-two-presidents.html [computerworld.com]
Re: (Score:2)
Agreed. Based on one of the articles referenced in the original, the server had run for at least 4 years continuously, which I still find impressive.
4 years is 1461 days, which isn't that impressive.
A couple of prod servers I use:
21:09:09 up 1507 days, 10:21, 0 users, load average: 0.00, 0.01, 0.05
21:10:00 up 1651 days, 9:03, 0 users, load average: 0.02, 0.33, 0.32
And there are some irreplaceable Unix boxen from the mid 90s too, but they don't have as high uptimes, due to needs for repairs and parts cannibalization.
Re: (Score:2)
Our software used to rely on Stratus servers until we were forced to rely on eBay for spares... before that, there was no downtime recorded for the 8 years before I came, and none for the 7 years after.
I managed Novell servers with uptimes over 3000 days, which for Novell servers wasn't at all unusual. For most of these we didn't rely on the IDE drivers for data, so we avoided the clock problems by kicking the driver in the head.
Re: What? (Score:2)
That's funny. No, really, you're funny.
BS title (Score:5, Insightful)
Re:BS title - actually, probably true (Score:5, Insightful)
No one would consider that kind of architecture now; it's much too expensive when other solutions are available now. The key word in the previous sentence is "now". Probably not an ad for Stratus; they don't really exist anymore.
The equivalent now is a server farm. There are systems (server farms) that have been running for over a decade.
Re: (Score:2)
Don't mainframes kinda do the same thing today? I know they're not exactly mainstream, but not everything is well suited to being done by a farm, where you really need serialization and global consistency. If you're Facebook or Google, the page doesn't have to perfectly reflect changes someone made 0.01 seconds ago. If you're doing bank transactions or booking tickets, then you really need to know if there's still money in the account or the seat is still free. NoSQL is great if you don't need all the guarantees
Re: (Score:1)
"not exactly mainstream?" Unless you're talking about real industries, like insurance, I suppose. Sure, Facebook and the like can get by with never-consistent kill-them-all-sort-them-later distributed farms of commodity PCs, but there are still some businesses which need a modicum of reliability in their data processing.
There are probably on the order of 10000 System z installations. Yes, that's a small number relative to x64, but it's still very much "mainstream", particularly when you look at how they're
Re: (Score:1)
For those who want to see an actual awesome uptime, there was a Reddit thread yesterday about a Cisco 2500 that has been up 20 years [reddit.com], without any interruption or downtime, since January 29, 1997.
Re: (Score:2)
Is it also pwned by every major government on the planet?
Re: (Score:2)
Depending on what features you are using, the attack surface can be very small on these. Even if you don't have an out-of-band management system (or any management system at all, if you change the config so rarely that running to the closet with a console cable and a laptop isn't a chore), they can be pretty much hack-proof.
Re:BS title (Score:4, Informative)
Why do you say that's less impressive than running 24 years continuously? Any non-trivial application requires servicing eventually.
And how will you even be able to tell if that is the case?
In a virtualization environment, I have servers with 7-year uptimes. Of course, they have occasionally been vMotioned between hosts -- in some cases, servers have been checkpointed, suspended for a few hours, then resumed in another datacenter without any operating system reboot, so if you go by OS uptime they've been up for 10 years.
Sometimes a server application can stall or break, so it hasn't provided continuous service, but there's no visible indication on the server, no administrative indication in the logs, etc.
Re: (Score:2)
Yep... probably systems that weren't doing any substantial amount of processing or I/O.
Re: (Score:2)
Yes, it was Y2K compliant. So was Linux, Mac OS and just about every other OS on the planet EXCEPT Windows.
24 years without 'unplanned' shutdowns (Score:4, Interesting)
Not quite the same as a 24-year uptime. In the same vein, I have a Sun server that has been running since the mid-'90s, part of a medical device and used to compile very particular software code for an old small-bore MRI system. We shut it down when the power goes out (very rare), but its SCSI drives are still good.
Re: (Score:2)
Sun servers did as well; they were among the first machines besides mainframes that had hot-swappable CPUs and RAM, and Solaris kernels could be upgraded without a reboot.
lol (Score:2)
Loved the 8086 and 8088 (Score:5, Interesting)
And no, those were the model numbers, not the CPU, which was the M68 series.
About the only thing non-redundant was the clock card. Voice of experience. The power supplies had built-in UPSes. Funny thing on the 808X systems: the power switch had "Off", "On", and past "On" another state whose name I forget. But if you replaced hardware while running, you'd push it up (it was spring-loaded) to get it to IPL the new hardware.
I loved it because you could fold up 24 physical processors into 12, 6 or 4 logical with quorum voting. Get a bad CPU? It wouldn't miss a clock cycle; it'd just lock it out and keep going. You could also run it completely unfolded. (A toy sketch of this folding appears below.)
These days, folks would say "so what?" -- but "back in the day", your PC had a single core. It was a big deal. And even today, a machine-check on a PC's CPU crashes the whole system.
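To make the folding idea concrete, here's a hypothetical Python sketch (the names and structure are mine, not the actual machine's logic): 24 physical CPUs are grouped 2-, 4- or 6-wide into logical CPUs, and each group settles disagreements by quorum.

from collections import Counter

def fold(physical_results, fold_factor):
    """Group physical CPU results into logical CPUs of fold_factor members,
    each producing the quorum (strict majority) value of its group."""
    logical = []
    for i in range(0, len(physical_results), fold_factor):
        group = physical_results[i:i + fold_factor]
        winner, count = Counter(group).most_common(1)[0]
        if count <= fold_factor // 2:
            raise RuntimeError(f"no quorum in group {i // fold_factor}")
        logical.append(winner)
    return logical

# 24 physical CPUs folded 6-wide -> 4 logical CPUs; one bad CPU is outvoted
# within its group, so the logical CPU never misses a beat.
results = [7] * 24
results[5] = 9
print(len(fold(results, 6)))  # 4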
Is it still the same server? (Score:4, Insightful)
"Over the years, disk drives, power supplies and some other components have been replaced but Hogan estimates that close to 80% of the system is original," according to Computerworld.
Then is it still considered the same server? https://en.wikipedia.org/wiki/... [wikipedia.org]
Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc. However, I still consider it to be the same computer. Perhaps there is something psychological about it, but the name (or in this case the case) has a special significance even if all the guts have been swapped out.
Re:Is it still the same server? (Score:4, Insightful)
Reminds me of the farmer who had the same ax for 25 years. He'd replaced the handle 4 times and the head twice... but it was the same ax.
Re:Is it still the same server? (Score:5, Funny)
If you change "Theseus" to "farmer" and "ship" to "axe," is it still the same philosophical problem?
Re: (Score:1)
Did that axe you're grinding belong to your grandfather?
Re: (Score:1)
Who can say? I used the axe to chop the ship apart and built a temple with a golden pavilion from the timber.
Re: (Score:2)
I do not know if he changed it, but I have one of my great-grandfather's ball-peen hammers, and I have changed the handle once, at which point I also refinished and painted it.
Re:Is it still the same server? (Score:5, Informative)
These machines are not like desktops. The hardware and software are extremely tightly coupled. Multiyear uptimes are not uncommon on Stratus VOS machines.
Full disclosure: I'm a former Stratus employee.
Re: (Score:3)
If Microsoft says it's still the same computer for Windows OEM licensing purposes, so a new license purchase is not required, then I'll say it's still the same server.
Re: (Score:2)
Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc. However, I still consider it to be the same computer
I'm sure a lot of us have systems that have been upgraded so many times that the only parts remaining the same are the case (Lian-Li cases last forever) and the name.
However, my main server, serving DNS, DHCP, SMTP, POP3S, HTTP, NTP, Samba, NFS and NIS, is still the same as it was in 2001. PIII-S, 512 MB RAM, and while hard drives have been replaced over the years, it's still running well, with an OS up-to-date as per today. It's power frugal enough that it doesn't even have a CPU fan, despite the system having run
Re: (Score:2)
This is more a philosophical question along the lines of ship of Theseus [wikipedia.org].
All the components of your cells are replaced about every 7 years. Are you still the same person every 7 years?
Re: (Score:2)
Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc.
My FreeBSD router has been running since about 2000 with a Celeron 300A and Supermicro P6SBA Revision 2.0 motherboard with 384M of ECC SDRAM.
I upgraded the storage from a 600MB hard drive to compact flash a couple years ago for faster booting and lower power. The power supply has been replaced once.
Other than power outages, occasional software updates, and routine maintenance for cleaning dust and maintaining the fans, the only failure has been when the ice machine upstairs sprung a leak which managed to d
Stratus has proprietary redundant *everything*. (Score:5, Informative)
Stratus has proprietary redundant *everything* on their machines, and runs in lockstep; they literally have two of everything in there... two motherboards, two CPUs, two sets of RAM, etc. If anything weird happens on one side, they fail over to the other motherboard running in lockstep on the other blade in the chassis. Combine that with running an extremely conservative set of drivers that are known stable, and you can get six nines out of the thing. Stratus is typically used for credit card processing and banking applications where it's never acceptable to have a machine down for the time it takes to reboot. Really, really, really expensive though. You wouldn't want to use one of these for anything normal.
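For scale, "six nines" means 99.9999% availability, which allows only about 31.5 seconds of downtime per year. A quick Python sanity check:

# Allowed downtime per year for a given number of "nines" of availability.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def downtime_per_year(nines):
    """Seconds of allowed downtime per year at 99.9...% availability."""
    return SECONDS_PER_YEAR * 10 ** -nines

for n in range(3, 7):
    print(f"{n} nines: {downtime_per_year(n):8.1f} s/year")
# 3 nines: 31557.6 s/year (almost 9 hours)
# 6 nines:     31.6 s/year (about half a minute)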
Re: (Score:2)
Environment Canada used to run a similar architecture from Tandem for processing weather data. They wanted the "real timey" aspects of how it dealt with data, but the extreme data processing redundancy was a bit of a problem ("don't lose my money" is massive overkill for a temperature value that's updated at least hourly) and they ended up doing some deep O/S development to cut 150 disk writes per data element
Re: (Score:2)
A lot of manufacturing shops use them to run production lines (where the computer crashing can cause the entire line to shut down).
They are also part of the 911 system.
The other reason one occasionally wants voting hardware is to detect failures. If the numbers you are crunching are really important, you want to be sure you get the right answer. I certainly hope that the people designing self-driving cars are using voting computers, redundant sensors, and redundant actuators. I don't want a glitch in som
"character-driven interface" (Score:2)
Even though the system has a character-driven interface, similar to an old green screen system, the users "like the reliability of it, and the screens are actually pretty simple," said Hogan.
Is there any other way to run a serious server?
For some perspective (Score:1)
Windows 95 had a bug that made it crash when the uptime hit 2^32 milliseconds, or 49.7 days. Since Windows usually crashed much sooner anyway, it took Microsoft years to notice the bug.
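The arithmetic behind that figure, for anyone who wants to check it: a 32-bit millisecond tick counter wraps after 2^32 ms.

# A 32-bit millisecond uptime counter overflows after 2**32 ms.
ms = 2 ** 32
days = ms / (1000 * 60 * 60 * 24)
print(f"{days:.1f} days")  # 49.7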
Re: (Score:1)
For some more perspective, parent post is informative because everyone familiar with Win95 has already left /. in disgust.
Novell... (Score:2)
Re: (Score:2)
A1000 here, although the monitor isn't working, so there's some jury rigging to get it to work with a more modern CRT.
But in terms of longevity of a computing device in regular use, my slipstick from the 60s is still in my pocket every day. Plastic coated pearwood is durable.
Re: (Score:2)
Likely AmigaOS has been rebooted many, many times in all those years, and chances are you've replaced the battery and possibly the motherboard capacitors too in all that time.
Re: (Score:2)
NetWare ... they could still ping it on the network.
They didn't use enough concrete.
Meh, there's a lot of these out there... (Score:1)
Yep. Alternative title: server doesn't run Windows (Score:2)
Yeah, the system in the article has been down for maintenance for various reasons. It hasn't CRASHED since it was put into service. A computer that doesn't crash? That's impressive -- if it runs Windows. I've been running servers since the mid-1990s and I'd say MOST of them have never crashed.
Re: (Score:2)
0. Many Stratus servers are single-purpose. They didn't need updates once the systems were stable.
1. Stratus OSes were never susceptible to viruses, botnets, etc. They aren't Windows, or anything like it.
Re: (Score:2)
There is no way to make a 100% secure networked operating system
Got a mathematical proof for that statement? Because that's what's required for such a claim.
Re: It should have been retired 15 years ago. (Score:2)
Note that quantum encryption is being challenged. I'm pretty sure it's evident that no such proof is possible. The question you should have asked is whether successful attacks on systems can be completed in a meaningful period of time... which is almost a stupid question.
So far, however, absolute security seems unattainable in practice. And those who are successful probably don't disclose it, so we don't know...
Re: (Score:2)
Out of morbid curiosity, what qualifies as "supported"?
Telephone switches had uptimes of decades (Score:1)
Old-school pre-1990s telephone switches -- you know, those nearly-building-sized things that kept thousands or tens of thousands of phones in a city working -- had uptimes measured in decades.
Short of either a scheduled replacement or a physical disaster, they kept running and running and running.
What words mean (Score:2)
Stable means unchanged. If it hasn't been updated in 15 years, of course it's stable.
Happy with less (Score:2)
First Rule of Server Uptime... (Score:2)
Re: (Score:1)
Have you ever used a real server?
Re: (Score:2)
It's stupid to upgrade when there's no reason to upgrade.
Doing that would be a sign of a shitty sysadmin, dear PFY.