Server Runs Continuously For 24 Years (computerworld.com)

In 1993 a Stratus server was booted up by an IT application architect -- and it's still running. An anonymous reader writes: "It never shut down on its own because of a fault it couldn't handle," says Phil Hogan, who's maintained the server for 24 years. That's what happens when you include redundant components. "Over the years, disk drives, power supplies and some other components have been replaced but Hogan estimates that close to 80% of the system is original," according to Computerworld.
There's no service contract -- he maintains the server with third-party vendors rather than going back to the manufacturer, who says they "probably" still have the parts in stock. And while he believes the server's proprietary operating system hasn't been updated in 15 years, Hogan says "It's been extremely stable."

The server will finally be retired in April, and while the manufacturer says there are other Stratus servers that have been running for at least 20 years, this one seems to be the oldest.
  • by suso ( 153703 ) * on Saturday January 28, 2017 @07:36PM (#53756545) Journal

    "It never shut down on its own because of a fault it couldn't handle," said Hogan. "I can't even think of an instance where we had an unplanned shutdown," he said.

    This isn't a server that has had an OS uptime of 24 years. This is a computer that they are still using after 24 years that "hasn't crashed". So what? The Amiga from the '80s that was still in use was a bigger deal. This article is really just an ad for Stratus.

    • This article is really just an ad for Stratus.

      Or Phil Hogan's résumé

      • by Anonymous Coward on Saturday January 28, 2017 @08:06PM (#53756629)

        Definitely not Hogan's resume.

        According to his boss Wilhelm Klink, no one has ever successfully left the company.

        • by Anonymous Coward

          Definitely not Hogan's resume.

          According to his boss Wilhelm Klink, no one has ever successfully left the company.

          I get your joke, but you just confused a large number of young people. Who all need to get off my lawn.

        • by Lisias ( 447563 )

          Definitely not Hogan's resume.

          According to his boss Wilhelm Klink, no one has ever successfully left the company.

          alive.

    • by moofrank ( 734766 ) on Saturday January 28, 2017 @08:15PM (#53756647)

      I used to work on Stratus servers, and I think the company was purchased by IBM in the late 90s.

      For each running component in the system, there are three physical instances. They use a voting system to drop any instance that disagrees on the contents of RAM or the outcome of an instruction. In the 3 years I dealt with them, I never saw a system failure, and the only outages were caused by planned system upgrades. OS stuff. All of the hardware was hot swapped.

      These were multimillion dollar machines that basically had the CPU performance of a couple of 68000 CPUs.

      I personally witnessed a take out of a Novell 2.x file server which had a 16 year uptime. This was for a school system, and they had forgotten where the file server was. Stuffed in the back of a janitorial closet, and dust covered. That wasn't any sort of fancy hardware, but an old microchannel PC.

      If you don't need to patch them, computers run for a long time.

      • IBM was a reseller for Stratus but Stratus is still an independent company.

      • by Anonymous Coward

        I had an NT 3.51 server that had an uptime of 8+ years. I was on a project to convert all of the machines from Token Ring to Ethernet, and I saved this one for last just because I couldn't bear to be the one to take it down. With the NT 4.0 servers we had, we were lucky to get an uptime of 30 days.

        • by sr180 ( 700526 )

          When I managed NT 4 SP6 servers, as soon as Task Manager showed around 500 idle hours it was time for a reboot, because magical shit would start happening.

      • by Ed Avis ( 5917 )
        "an old microchannel PC" - so relatively fancy in fact. The quality and reliability of IBM's Micro Channel machines (and their small number of licensees) was a notch or two above the typical AT clones of the time. In particular they were designed with some attention to airflow and cooling, rather than just a box with a fan in it, so would be more likely to survive a dust-covered existence.
      • I personally witnessed a take out of a Novell 2.x file server which had a 16 year uptime. This was for a school system, and they had forgotten where the file server was. Stuffed in the back of a janitorial closet, and dust covered.

        Was there a sign on the door saying "Beware of the Leopard?"

      • by Anonymous Coward on Sunday January 29, 2017 @06:21AM (#53758587)

        I used to work at Stratus. Most components were duplicated (disks, IO boards, power supplies). Everything was hot swap. CPUs were duplicated *pairs*. Each pair ran in lockstep, instruction by instruction, and results were compared within a pair. If one pair of CPUs disagreed (between the CPUs in that pair), but the other pair agreed (between the CPUs in that second pair), the first pair was taken offline and the second pair continued processing.

        Each pair of CPUs (and their associated memory) were on a separate board. The faulty board would light a red light, and the admin could pull that faulty board with the system running. A replacement board could be installed, again with the system running transactions. When the new board was installed, a process would start of synchronizing its memory with the contents of the memory of the good board. This was done with the system running (processing bank transactions, for example), so the bus bandwidth between the boards had to be fast enough to handle the rate at which the memory contents on the good board were changing. At a certain point the memory state of the two boards would be in sync and the new board would start processing, again in lockstep with the board that had been running all along.

        The repair process was so easy that one engineering director there had what he called the "mom test": he'd have his mom come in and see if she could fix a system that had been forced to throw a fault. Red light? Pull that board out and put a new one in. The new board's lights went red-yellow-green (as memory was brought into sync), and you're running fault-tolerant again. Easy peasy. (When the boards failed, they'd "phone home" and Stratus tech support would often know there was a fault before the customer did. They'd ship a replacement part overnight to the site with the bad part. Anyone who worked at Stratus in those days knew the story of the FedEx driver doing a repair for a customer.)
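
        For the curious, here is a minimal Python sketch of the pair-and-spare logic described above (all names invented, not Stratus code): each board holds a lockstep pair of CPUs, results are compared within a pair, and a board whose CPUs disagree is taken offline while the other board keeps processing.

            # Toy model of pair-and-spare fault tolerance (illustrative only).
            def run_pair(board, instruction):
                """Run one instruction on both CPUs of a board's lockstep pair."""
                a = board["cpu_a"](instruction)
                b = board["cpu_b"](instruction)
                return a, a == b  # result, and whether the pair agreed

            def fault_tolerant_step(boards, instruction):
                """Execute on all online boards; drop any board whose pair disagrees."""
                results = []
                for board in boards:
                    if not board["online"]:
                        continue
                    result, healthy = run_pair(board, instruction)
                    if healthy:
                        results.append(result)
                    else:
                        board["online"] = False  # red light: pull and replace this board
                if not results:
                    raise RuntimeError("no healthy boards left")
                return results[0]  # surviving boards agree by construction

            # Board 1's second CPU develops a fault and returns garbage.
            ok, bad = (lambda x: x * 2), (lambda x: x * 2 + 1)
            boards = [{"id": 0, "cpu_a": ok, "cpu_b": ok, "online": True},
                      {"id": 1, "cpu_a": ok, "cpu_b": bad, "online": True}]
            print(fault_tolerant_step(boards, 21))  # board 1 goes offline; prints 42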

        OS upgrades were done by un-synchronizing the mirrored OS drives and upgrading one at a time: one drive would be taken offline and upgraded, and the system would go through a restart to run on the upgraded OS drive (done in a planned maintenance window). Once it was running, the second OS drive would be re-mirrored to the running OS drive, at which point the disks were redundant again. The unplanned downtime was zero, and the planned downtime was minimized.
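
        A toy walk-through of that split-mirror upgrade in Python (illustrative state only, not VOS commands):

            # Mirrored OS drives: split, upgrade one, boot from it, re-mirror.
            mirror = [{"drive": "os0", "version": 1}, {"drive": "os1", "version": 1}]

            def split_mirror_upgrade(mirror, new_version):
                offline, running = mirror[0], mirror[1]  # un-synchronize: one drive goes offline
                offline["version"] = new_version         # upgrade the offline drive
                running = offline                        # planned restart onto the upgraded drive
                for d in mirror:
                    d["version"] = running["version"]    # re-mirror: disks redundant again
                return mirror

            print(split_mirror_upgrade(mirror, 2))
            # [{'drive': 'os0', 'version': 2}, {'drive': 'os1', 'version': 2}]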

        It became a challenge for Stratus when CPUs became nondeterministic (instructions wouldn't necessarily process in exactly the same order, making lockstep processing a real problem). At least one CPU architecture transition was driven by that issue. And clearing the heat from four CPUs in a small space was a thermal challenge. But they were reliable beasts, expensive enough that they were only used for workloads involving real money (e.g., financial transactions). Back when the founder (Bill Foster) was still CEO, it was a great place to work. When he left and the MBAs moved in, the company got sold to Ascend Communications, then Ascend got bought by Lucent, the non-telecom part of the business was spun off (as Stratus Technologies), and yes, they are still in business.

      • by skids ( 119237 )

        I personally witnessed a take out of a Novell 2.x file server which had a 16 year uptime. This was for a school system, and they had forgotten where the file server was. Stuffed in the back of a janitorial closet, and dust covered.

        I'll go one better on that... I know of one that was up for a couple of decades and finally failed, and when they went looking for it, they had to break through some drywall into an odd corner of a closet where it had accidentally been sealed off by construction contractors.

      • Given that the version of the OS that supports that machine hasn't been updated for more than a decade, that machine probably has been running continuously for a lot more than 10 years.

        If it was bought in 1993, the CPUs were probably PA-RISC, not 68K. I can't tell for sure, because the picture was not of a Stratus machine.

        The current generation of machines are functionally dual-socket Xeons in 4U rack enclosures. The heat envelope allows for up to 24 cores. Operating systems supported are VOS (the original proprietary OS)...

    • This isn't a server that has had an OS uptime of 24 years. This is a computer that they are still using after 24 years that "hasn't crashed".

      Not even that. My understanding is, when Stratus fails over processors it is just a quiet reboot. Didn't turn off the power, yay.

      In the good old days the mainframe boys would hot-swap mainframes by running the new processor in lock-step with the old one, even across vendors (It is rumoured that Amdahl made some sales this way.) The Voyager [wikipedia.org] computers have been running for 40 years.

      • OK, digging into that... Stratus is doing some weird and cool stuff. Not running processors in cycle-for-cycle lockstep like the mainframe guys (at least, not with their x86 offerings, where Intel would never permit the level of systems integration that would be required), but at the memory access level instead; as in, if two processors are running the same code on the same data, then they must access memory with the same pattern. Hard to see how that could be made to work without some kind of hypervisor, which...

    • I'm not as surprised as other people making comments about this article. In 1993, I ported Raima Data Manager (at the time, a network-model DBMS running 12,000 different commercial applications; you never heard of it, because it simply worked) to Stratus VOS. The manager of the Stratus office in Bellevue, WA gave me a tour. In the glass-walled machine room, he opened up the Stratus machine running the office - the center of the company's northwest US sales operation - and pulled out a board. I looked out of...
  • What? (Score:2, Insightful)

    by Anonymous Coward

    '"I can't even think of an instance where we had an unplanned shutdown," he said.'

    Um... I should hope so, since if it had, it wouldn't have had 24 years of uptime. And no photo of the output of "uptime"? I'm starting to think that they DID shut it down/reboot it many times, but somehow ignored this in the "uptime". Nonsensical article.

    • Re:What? (Score:4, Insightful)

      by ShanghaiBill ( 739463 ) on Saturday January 28, 2017 @07:51PM (#53756595)

      if it had, it wouldn't have had 24 years of uptime.

      24 years of "uptime" doesn't mean no unplanned shutdowns. It means no shutdowns of any kind. This machine has not achieved that, and certainly has not been "running continuously".

      • Re:What? (Score:4, Informative)

        by JoeyRox ( 2711699 ) on Saturday January 28, 2017 @07:54PM (#53756607)
        Agreed. Based on one of the articles referenced in the original, the server had run for at least 4 years continuously, which I still find impressive.

        http://www.computerworld.com/article/2550661/data-center/this-server-outlasts-two-presidents.html [computerworld.com]
        • by arth1 ( 260657 )

          Agreed. Based on one of the embedded articles reference in the original the server had run for at least 4 years continuously, which I still find impressive.

          4 years is 1461 days, which isn't that impressive.
          A couple of prod servers I use:

          21:09:09 up 1507 days, 10:21, 0 users, load average: 0.00, 0.01, 0.05
          21:10:00 up 1651 days, 9:03, 0 users, load average: 0.02, 0.33, 0.32

          And there are some irreplaceable Unix boxen from the mid 90s too, but they don't have as high uptimes, due to needs for repairs and parts cannibalization.

          • Our software used to rely on Stratus servers, until we were forced to rely on eBay for spares... we had no downtime recorded for the 8 years before I came, and none for the 7 years after.

            I managed Novell servers with uptimes over 3000 days, which for Novell servers wasn't at all unusual. For most of these we didn't rely on the IDE drivers for data, so we avoided the clock problems by kicking the driver in the head.

  • BS title (Score:5, Insightful)

    by gravewax ( 4772409 ) on Saturday January 28, 2017 @07:42PM (#53756569)
    It DID NOT run continuously for 24 years. It simply never stopped or restarted without admin intervention; two very, very different things. While still impressive, it is nowhere near as impressive as if it had run 24 years continuously.
    • by chromaexcursion ( 2047080 ) on Saturday January 28, 2017 @08:19PM (#53756657)
      Stratus machines are an old-school redundant parallel architecture. You can take a node offline without taking the system down. Beyond that, there are multiple levels of redundancy within components. Portions of the system have certainly been taken down, but the system as a whole kept running.
      No one would consider that kind of architecture now; it's much too expensive when other solutions are available. The key word in the previous sentence is "now". Probably not an ad for Stratus; they don't really exist anymore.
      The equivalent now is a server farm. There are systems (server farms) that have been running for over a decade.
      • by Kjella ( 173770 )

        Don't mainframes kinda do the same thing today? I know they're not exactly mainstream, but not everything is well suited to being done by a farm, where you really need serialization and global consistency. If you're Facebook or Google, the page doesn't have to perfectly reflect changes someone made 0.01 seconds ago. If you're doing bank transactions or booking tickets, then you really need to know if there's still money in the account or the seat is still free. NoSQL is great if you don't need all the guarantees...

        • "not exactly mainstream?" Unless you're talking about real industries, like insurance, I suppose. Sure, Facebook and the like can get by with never-consistent kill-them-all-sort-them-later distributed farms of commodity PCs, but there are still some businesses which need a modicum of reliability in their data processing.

          There are probably on the order of 10,000 System z installations. Yes, that's a small number relative to x64, but it's still very much "mainstream", particularly when you look at how they're...

    • by Anonymous Coward

      For those who want to see an actual awesome uptime, there was a Reddit thread yesterday about a Cisco 2500 that has been up 20 years [reddit.com], without any interruption or downtime, since January 29, 1997.

      • by bytesex ( 112972 )

        Is it also pwned by every major government on the planet?

        • by skids ( 119237 )

          Depending on what features you are using, the attack surface can be very small on these, so even if you don't have an out-of-band management system (or no management system at all, if you don't need to change the config enough for running to the closet with a console cable and a laptop to be a chore), they can be pretty much hack-proof.

    • Re:BS title (Score:4, Informative)

      by mysidia ( 191772 ) on Saturday January 28, 2017 @09:06PM (#53756791)

      Why do you say that's less impressive than running 24 years continuously? Any non-trivial application requires servicing eventually.

      And how will you even be able to tell if that is the case?

      In a virtualization environment, I have servers with 7-year uptimes. Of course, they have occasionally been vMotioned between hosts; in some cases, servers have been checkpointed, suspended for a few hours, then resumed in another datacenter without any operating system reboot, so if you go by OS uptime they've been up for 10 years.

      Sometimes a server application can stall or break, so it's not providing continuous service, but there's no visible indication on the server, no administrative indication in the log, etc.

      • What's REALLY impressive is that it was running during Y2K! It must have been Y2K compliant YEARS before anyone ever even thought to look for that.
    • As a biological machine, I have an uptime of 35 years. While I have been put into sleep mode from time to time, I have never been shutdown. In fact, one of the disadvantages of my particular architecture is that once shutdown, I cannot be restarted.
  • by guruevi ( 827432 ) on Saturday January 28, 2017 @07:45PM (#53756583)

    Not quite the same as a 24-year uptime. In the same vein, I have a Sun server that has been running since the mid-'90s, part of a medical device and used to compile very particular software code for an old small-bore MRI system. We shut it down when the power goes out (very rare), but its SCSI drives are still good.

    • by jandrese ( 485 )
      While I've never worked directly with Stratus boxes, my understanding is that the machines have redundant and hot-swappable everything, so it's possible to completely replace half of the box while the other half is serving normally, and then switch over and do the same on the other half. No unplanned outage might well mean that it never stopped doing whatever it is the server is tasked with, even when parts of it had to be replaced or upgraded. Even the OS, all the way down to the kernel, can be upgraded...
      • by guruevi ( 827432 )

        Sun servers did as well; they were among the first machines besides mainframes that had hot-swappable CPUs and RAM, and Solaris kernels could be upgraded without a reboot.

  • IT application architect, meet sanitation engineer.
  • by buss_error ( 142273 ) on Saturday January 28, 2017 @07:50PM (#53756591) Homepage Journal

    And no, those were the model numbers, not the CPU, which was the M68 series.

    About the only thing non-redundant was the clock card. Voice of experience. The power supplies had built-in UPSes. Funny thing on the 808X systems: the power switch had "Off", "On", and past "On" another state whose name I forget. But if you replaced hardware while running, you'd push it up (it was spring loaded) to get it to IPL the new hardware.

    I loved it because you could fold up 24 physical processors into 12, 6 or 4 logical ones with quorum voting. Get a bad CPU? The system wouldn't miss a clock cycle; it would just lock it out and keep going. You could also run it completely unfolded.

    These days, folks would say "so what?", but back in the day your PC had a single core, so it was a big deal. And even today, a CPU machine check crashes a PC.
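
    A hedged sketch of what quorum voting over a "folded" group might look like (illustrative Python, not the actual Stratus design): several physical processors back one logical processor, the majority answer wins, and any dissenting CPU is locked out on the spot.

        from collections import Counter

        def quorum_step(cpus, instruction):
            """One logical instruction: majority-vote the physical CPUs' results."""
            votes = {cpu_id: fn(instruction) for cpu_id, fn in cpus.items()}
            winner, count = Counter(votes.values()).most_common(1)[0]
            if count <= len(votes) // 2:
                raise RuntimeError("no quorum")
            for cpu_id, result in votes.items():
                if result != winner:
                    del cpus[cpu_id]  # lock out the disagreeing CPU, keep going
            return winner

        # Fold four physical CPUs into one logical CPU; CPU 3 develops a fault.
        cpus = {0: lambda x: x + 1, 1: lambda x: x + 1,
                2: lambda x: x + 1, 3: lambda x: x - 1}
        print(quorum_step(cpus, 10))  # CPU 3 is locked out; prints 11
        print(quorum_step(cpus, 20))  # continues on the remaining CPUs; prints 21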

  • by El Cubano ( 631386 ) on Saturday January 28, 2017 @07:54PM (#53756605)

    "Over the years, disk drives, power supplies and some other components have been replaced but Hogan estimates that close to 80% of the system is original," according to Computerworld.

    Then is it still considered the same server? https://en.wikipedia.org/wiki/... [wikipedia.org]

    Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc. However, I still consider it to be the same computer. Perhaps there is something psychological about it, but the name (or in this case the case) has a special significance even if all the guts have been swapped out.

    • by mspohr ( 589790 ) on Saturday January 28, 2017 @08:12PM (#53756641)

      Reminds me of the farmer who had the same ax for 25 years. He'd replaced the handle 4 times and the head twice... but it was the same ax.

    • by Mysticalfruit ( 533341 ) on Saturday January 28, 2017 @08:28PM (#53756685) Homepage Journal
      Since this machine is running VOS, and from the '93 time frame, it's either an XA/R with i860s or a Continuum with PA-RISC. I'll spitball and say it's a Continuum.
      These machines are not like desktops. The hardware and software are extremely tightly coupled. Multi-year uptimes are not uncommon on Stratus VOS machines.

      Full disclosure, I'm a former Stratus Employee.
    • by mysidia ( 191772 )

      If Microsoft says it's still the same computer for Windows OEM licensing purposes, so a new license purchase is not required, then I'll say it's still the same server.

    • by arth1 ( 260657 )

      Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc. However, I still consider it to be the same computer

      I'm sure a lot of us have systems that have been upgraded so many times that the only parts remaining the same are the case (Lian-Li cases last forever) and the name.

      However, my main server, serving DNS, DHCP, SMTP, POP3S, HTTP, NTP, Samba, NFS and NIS, is still the same as it was in 2001: PIII-S, 512 MB RAM, and while hard drives have been replaced over the years, it's still running well, with an OS up to date as of today. It's power-frugal enough that it doesn't even have a CPU fan, despite the system having run...

    • This is more a philosophical question along the lines of ship of Theseus [wikipedia.org].

      All the components of your cells are replaced about every 7 years. Are you still the same person after every 7 years?

    • by Agripa ( 139780 )

      Personally, I have a computer that lives in a case I got in 2003. I am on motherboard #4, power supply #2, processor #2, memory modules #6 & #7, hard drives #4 & #5, etc.

      My FreeBSD router has been running since about 2000 with a Celeron 300A and Supermicro P6SBA Revision 2.0 motherboard with 384M of ECC SDRAM.

      I upgraded the storage from a 600MB hard drive to compact flash a couple years ago for faster booting and lower power. The power supply has been replaced once.

      Other than power outages, occasional software updates, and routine maintenance for cleaning dust and maintaining the fans, the only failure has been when the ice machine upstairs sprung a leak which managed to d...

  • by Toasterboy ( 228574 ) on Saturday January 28, 2017 @08:07PM (#53756631)

    Stratus has proprietary redundant *everything* in their machines, running in lockstep; they literally have two of everything in there: two motherboards, two CPUs, two sets of RAM, etc. If anything weird happens on one side, they fail over to the other motherboard running in lockstep on the other blade in the chassis. Combine that with an extremely conservative set of drivers that are known stable, and you can get six nines out of the thing. Stratus is typically used for credit card processing and banking applications, where it's never acceptable to have a machine down even for the time it takes to reboot. Really, really, really expensive, though. You wouldn't want to use one of these for anything normal.
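
    To put "six nines" in numbers (a quick back-of-the-envelope calculation, not a figure from the article): 99.9999% availability allows only about half a minute of downtime per year.

        # Allowed downtime per year for a given number of nines of availability.
        seconds_per_year = 365.25 * 24 * 3600
        for nines in (3, 4, 5, 6):
            downtime = seconds_per_year * 10 ** -nines
            print(f"{nines} nines -> {downtime:,.1f} seconds of downtime per year")
        # 3 nines -> ~8.8 hours/year; 6 nines -> ~31.6 seconds/year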

    • by c ( 8461 )

      Really, really, really expensive though. You wouldn't want to use one of these for anything normal.

      Environment Canada used to run a similar architecture from Tandem for processing weather data. They wanted the "real timey" aspects of how it dealt with data, but the extreme data processing redundancy was a bit of a problem ("don't lose my money" is massive overkill for a temperature value that's updated at least hourly), and they ended up doing some deep O/S development to cut 150 disk writes per data element...

    • A lot of manufacturing shops use them to run production lines (where the computer crashing can cause the entire line to shut down).

      They are also part of the 911 system.

      The other reason one occasionally wants voting hardware is to detect failures. If the numbers you are crunching are really important, you want to be sure you get the right answer. I certainly hope that the people designing self-driving cars are using voting computers, redundant sensors, and redundant actuators. I don't want a glitch in some...

  • FTA:

    Even though the system has a character-driven interface, similar to an old green screen system, the users "like the reliability of it, and the screens are actually pretty simple," said Hogan.

    Is there any other way to run a serious server?

  • by Anonymous Coward

    Windows 95 had a bug that made it crash when the uptime hit 2^32 milliseconds, or 49.7 days. Since Windows usually crashed much sooner anyway, it took Microsoft years to notice that bug.
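
    The arithmetic checks out: Windows kept the millisecond tick (a la GetTickCount) in a 32-bit counter, which wraps at 2^32. A one-liner to verify:

        ms = 2 ** 32                       # 4,294,967,296 milliseconds
        print(ms / (1000 * 60 * 60 * 24))  # ~49.71 days until the counter wraps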

    • by Anonymous Coward

      For some more perspective, parent post is informative because everyone familiar with Win95 has already left /. in disgust.

  • Not proprietary, though NetWare is no longer supported; there's not a lot to go wrong, and some boxes had epic uptimes, as in never died, never rebooted. We had one where the only reason it completely went down was a catastrophic power loss (both PDUs lost power at the same time). Its uptime was over a decade, with over 50 users still accessing it every day. All that being said, anything that's still running 24 years after its initial boot is impressive and worthy of note. NOTHING running Windows would have done that.
  • I built a webserver on a PowerMac 7200 in 1996 and the machine's been running 24/7/365 since (barring power outages longer than the UPS battery, etc). Not a single component has been replaced, the OS (System 9.2.1) never updated, the software (WebSTAR) only patched until the company went out of business. I'd be willing to bet that there's a lot of servers like this still floating around universities and school districts...
    • Yeah the system in the article has been down for maintenance for various reasons. It hasn't CRASHED since it was put into service. A computer that doesn't crash? That's impressive - if it runs Windows. I've been running servers since the mid 1990s and I'd say MOST of them have never crashed.

  • Old-school pre-1990s telephone switches - you know, those nearly-building-sized things that kept thousands or tens of thousands of phones in a city working - had uptimes measured in decades.

    Short of either a scheduled replacement or a physical disaster, they kept running and running and running.

  • And while he believes the server's proprietary operating system hasn't been updated in 15 years, Hogan says "It's been extremely stable."

    Stable means unchanged. If it hasn't been updated in 15 years, of course it's stable.

  • Two years untouched, and I returned to find its screen had gone blank. It was a Linux distro, booted from DVD, running in RAM; quite a chore to bring everything back after a shutdown and reboot. But that wasn't needed: Ctrl-Alt-F2, init 2 to be sure, init 5, and presto, the desktop was back. Singing along. Whatever the issue was, it didn't take my old Linux down. Happy for life's little joys.
  • The first rule of server uptime... is not to talk about server uptime. To anyone. Now you've just jinxed it.

"If there isn't a population problem, why is the government putting cancer in the cigarettes?" -- the elder Steptoe, c. 1970

Working...