Bug Software Transportation

Air Traffic Snafu: FAA System Runs Out of Memory 234

minstrelmike writes: Over the weekend, hundreds of flights were delayed or canceled in the Washington, D.C. area after air traffic systems malfunctioned. Now, the FAA says the problem was related to a recent software upgrade at a local radar facility. The software had been upgraded to display customized windows of reference data that were supposed to disappear once deleted. Unfortunately, the systems ended up running out of memory. The FAA's report is vague about whether it was operator error or software error: "... as controllers adjusted their unique settings, those changes remained in memory until the storage limit was filled." Wonder what programming language they used?
  • by Anonymous Coward on Wednesday August 19, 2015 @09:36AM (#50346701)

    But nobody should ever need anything more than 640k!

    • by msauve ( 701917 )
      ...as long as you remember to do an occasional FRE("") in your code.
      • Re: (Score:3, Informative)

        One advantage of many airline online transaction systems: An applications programmer cannot do a malloc equivalent.

        Programs are created with a fixed memory size, and complex applications are simply a series of program modules which pass data between each other via common memory areas or memory-mapped files.

        Memory leaks in such an environment are quite rare.
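
        A minimal C sketch of that style (every name and size here is hypothetical, not taken from any real airline system): two modules map the same fixed-size file and exchange records through it, so there is no malloc anywhere and nothing to leak.

            /* Fixed-size "common memory area" shared by cooperating modules. */
            #include <fcntl.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>
            #include <unistd.h>

            #define REGION_SIZE 4096        /* fixed at build time, never grows */

            struct booking {                /* hypothetical shared record layout */
                char pnr[8];
                int  seats;
            };

            int main(void)
            {
                int fd = open("/tmp/common_area", O_RDWR | O_CREAT, 0600);
                if (fd < 0 || ftruncate(fd, REGION_SIZE) < 0)
                    return 1;

                /* Every module that maps this file sees the same records;
                 * no module can allocate beyond REGION_SIZE. */
                struct booking *area = mmap(NULL, REGION_SIZE,
                                            PROT_READ | PROT_WRITE,
                                            MAP_SHARED, fd, 0);
                if (area == MAP_FAILED)
                    return 1;

                strncpy(area[0].pnr, "ABC123", sizeof area[0].pnr);   /* writer module */
                area[0].seats = 2;
                printf("%s: %d seats\n", area[0].pnr, area[0].seats); /* reader module */

                munmap(area, REGION_SIZE);
                close(fd);
                return 0;
            }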

  • Software error ... (Score:5, Informative)

    by gstoddart ( 321705 ) on Wednesday August 19, 2015 @09:38AM (#50346713) Homepage

    You can make the argument that if the software allowed the operators to crash the system, it's a software fault.

    You can also make the argument that stuff like this should have been tested in parallel with the live system so this wasn't a possibility.

    I mean, my god, what are the change management and testing practices which allowed this to only be discovered in your real system?

    I've been around a few systems which had to do with aircraft ... and the rules and practices surrounding them are pretty paranoid and rigorous, because the stakes are so high. For an actual air traffic system I'm stunned this happened.

    I guess I'm not surprised, but I am stunned.

    • by AlecC ( 512609 )

      The loose description sounds like something not being garbage collected when it should have been, so no single change caused the problem. It might well have been caused by controllers playing with a new toy in a way they would never do once it had settled in, and in a way testers would not think to. Heap leakage is difficult to observe - even if you check free space after a run, it is not clear what the right value is.

      • by Anonymous Coward on Wednesday August 19, 2015 @09:57AM (#50346871)

        No, no, no, no, no! The concept of garbage collection is a reaction to poor coding practices and reliance on it is laziness. Software engineers responsible for real-time, public safety software should be capable of managing memory in their code!

        • by U2xhc2hkb3QgU3Vja3M ( 4212163 ) on Wednesday August 19, 2015 @10:03AM (#50346919)
          Not only should they be "capable" of managing memory in their code, it should be part of the software design itself.
          • by jamstar7 ( 694492 ) on Wednesday August 19, 2015 @10:32AM (#50347155)
            Couple things to keep in mind.

            The civilian aircraft control system has been chronically underfunded for decades, since Reagan fired PATCO. One of the things they were on strike for was for better equipment to do their jobs better, easier, and with less stress. Even in the 80's, the computers and radars were dinosaurs best kept in a museum. Upgrades since then have always been a day late and a dollar short.

            The airspace above the US is the busiest in the world, and it's just getting worse. They don't even report near-misses anymore to the media unless the pilots can see each other giving them the finger. They're that common.

            Nothing will be done until 3 or 4 planes do a mid-air and the public outcry is so bad that people are ready to march on the FAA's office with torches and pitchforks. Then there will be a massive round of public firings to appease the crowd, a slight boost in funding to the FAA, followed by further deregulation of the airlines.

            Personally, with all the deregulation already, I'm surprised more planes don't shed parts along the way.
            • As far as safety goes, the private airlines are heavily regulated. And maintenance is not under the purview of the air traffic controllers, so this is a red herring and "shedding parts" is mere hyperbole in any case.

              If the systems were allegedly "dinosaurs" in the 1980s, I would think they'd be causing "mid-airs" on a regular basis right now. That they are not tells me that the systems have been upgraded.

              Reagan went into office SUPPORTING PATCO. They actually endorsed him over Jimmy Carter, who had ignor

            • Re: (Score:3, Informative)

              by tomhath ( 637240 )

              The civilian aircraft control system has been chronically underfunded for decades, since Reagan fired PATCO.

              Reagan initiated and appropriately funded a complete overhaul of the control system [sebokwiki.org].

              The illegal strike by the air traffic controllers is irrelevant.

              • by PuckSR ( 1073464 )

                It was relevant because it was one of the issues highlighted at the time.
                Also, Reagan announced the overhaul as a RESPONSE to the strike, but it wasn't given the type of fast-track authorization that would have made it useful.

            • one question...

              I can understand 4 planes having midair collisions... you know, 4 in total.

              2 separate midair collisions...

              but in what possible universe do you imagine 3 planes colliding in total?

              • "but in what possible universe do you imagine 3 planes colliding in total."

                I would bet that, within civil aviation, it's easier to have three planes colliding mid-air than just two or, at least, three involved with two crashing and a near-miss.

            • I'm surprised more planes don't shed parts along the way.

              Did the Primary Buffer Panel just fall off my gorram ship for no apparent reason?

            • I think this system that failed is part of the same one I helped bid on upgrading in the late 80s. (We were the lucky ones who lost the bid; IBM were the poor suckers who won it.) The Advanced Automation System was supposed to have a budget of something like 4 years and $4B, or maybe it was $7B, but either way it ran way way over that, in both years and billions, before being restructured, partly because the problem is really hard, partly because the specs were extremely unrealistic, and partly because we

        • by dwpro ( 520418 ) <dgeller777.gmail@com> on Wednesday August 19, 2015 @10:31AM (#50347151)

          Software engineers responsible for real-time, public safety software should be capable of managing memory in their code

          And surgeons responsible for cutting open live human beings should be capable of not leaving tools in the person they're operating on, but it still happens. Professionals make mistakes. Garbage collection is a useful tool to make it more difficult to screw up.

          • by SpeedBump0619 ( 324581 ) on Wednesday August 19, 2015 @10:53AM (#50347341)

            Professionals make mistakes. Garbage collection is a useful tool to make it more difficult to screw up.

            I get this. And as a software engineer I fully agree. However, in practical terms, there shouldn't be any dynamic memory management happening at all.

            It's a real-time system. It *must* interact, on time, with all the planes that are in its domain. That should be a bounded, predictable load, or there's no way to guarantee responsiveness. Given that, an analysis should have been done on the maximum number of elements the system supported. Those elements should have been preallocated (into a pool if you want to treat them "dynamically") before actual operation began. If/when the pool allocator ran out of items it should do two things: allocate (dynamically) more, and scream bloody murder to everyone who would listen regarding the unexpected allocation.

            This is (one of) the reason(s) I generally haven't liked garbage collected languages for real time systems. There's rarely ever a way to guard against unexpected allocations, because *every* allocation is blind.
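
            As a rough illustration (the names and pool size below are invented for this sketch), a pool that is sized up front from a capacity analysis, and that treats any heap fallback as a loud anomaly, might look like this in C:

                /* Hypothetical preallocated pool along the lines described above. */
                #include <stdio.h>
                #include <stdlib.h>

                #define MAX_TRACKS 512          /* from worst-case analysis, not a guess */

                struct track { int id; int in_use; };

                static struct track pool[MAX_TRACKS];   /* allocated before operation */

                struct track *track_alloc(void)
                {
                    for (int i = 0; i < MAX_TRACKS; i++) {
                        if (!pool[i].in_use) {
                            pool[i].in_use = 1;
                            return &pool[i];
                        }
                    }
                    /* Unexpected: pool exhausted. Keep running, but scream. */
                    fprintf(stderr, "ALERT: track pool exhausted; heap fallback\n");
                    struct track *t = malloc(sizeof *t);
                    if (t) t->in_use = 1;
                    return t;
                }

                void track_free(struct track *t)
                {
                    if (t >= pool && t < pool + MAX_TRACKS)
                        t->in_use = 0;          /* return to the pool */
                    else
                        free(t);                /* emergency heap allocation */
                }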

          • by Rob Riggs ( 6418 ) on Wednesday August 19, 2015 @11:11AM (#50347491) Homepage Journal
            Surgeons leave tools in patients because they have no process when operating on a patient. Read the Checklist Manifesto sometime and read what the author has to say about best practices in the operating room [npr.org]. Everyone makes mistakes. The process we follow is what allows us to catch those mistakes and prevent them from recurring.
          • Software engineers responsible for real-time, public safety software should be capable of managing memory in their code

            And surgeons responsible for cutting open live human beings should be capable of not leaving tools in the person they're operating on, but it still happens. Professionals make mistakes. Garbage collection is a useful tool to make it more difficult to screw up.

            Until the entire air traffic system grinds to a halt at the same time every day while Java garbage collects everything. No, garbage collection is not the answer. There are more performant ways to manage memory.

          • by phantomfive ( 622387 ) on Wednesday August 19, 2015 @11:57AM (#50347871) Journal

            Garbage collection is a useful tool to make it more difficult to screw up.

            Recently I've seen a lot of memory leaks in Java and JavaScript. People stick things in a hash table or a queue, then forget to remove them (angular.js also has gotchas to watch out for to avoid memory leaks). Because programmers in those languages don't think about memory, they end up with more memory leaks than programmers in C.

            For a system that needs high reliability, garbage collection is not the answer, and can make things worse.
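
            That failure mode isn't specific to garbage-collected languages, either. The same pattern sketched in C (all names hypothetical) shows why no collector could help: every entry stays reachable from a long-lived container, so the memory counts as "in use" to any GC or to leak checkers that only flag unreachable blocks, yet it grows without bound.

                #include <stdlib.h>
                #include <string.h>

                struct session { char user[32]; struct session *next; };

                static struct session *registry;        /* long-lived container */

                void session_open(const char *user)
                {
                    struct session *s = malloc(sizeof *s);
                    if (!s) return;
                    strncpy(s->user, user, sizeof s->user - 1);
                    s->user[sizeof s->user - 1] = '\0';
                    s->next = registry;                 /* stick it in the registry... */
                    registry = s;
                }

                void session_close(const char *user)
                {
                    /* Bug: session teardown happens elsewhere, but nobody unlinks
                     * the entry from the registry, so it is retained forever. */
                    (void)user;
                }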

          • And surgeons responsible for cutting open live human beings should be capable of not leaving tools in the person they're operating on, but it still happens. Professionals make mistakes. Garbage collection is a useful tool to make it more difficult to screw up.

            If only there were some kind of portable machine one could use in order to look for metal left in a patient's body...

            If only there were a requirement to do things like count gauze pads before and after surgeries, and then account for any numerical discrepancy as being ON (not IN) the patient, and/or going into the biohazard disposal unit. You know: a documented procedure.

            Oh, wait: there is.

        • by swillden ( 191260 ) <shawn-ds@willden.org> on Wednesday August 19, 2015 @01:46PM (#50348711) Journal

          No, no, no, no, no! The concept of garbage collection is a reaction to poor coding practices and reliance on it is laziness. Software engineers responsible for real-time, public safety software should be capable of managing memory in their code!

          Garbage collection is a red herring. The notion that "real" software engineers must use manual deallocation is just as silly as the idea that garbage collection eliminates memory leaks. Though GC actually does eliminate dangling pointer bugs... by turning them into memory leaks.

          Garbage collection is a viable and reasonable strategy for handling deallocation -- in fact it can be significantly more efficient than manual deallocation, in terms of cycles spent on deallocation -- but it's not a panacea. It doesn't eliminate the need to think about object lifetimes or memory consumption. It reduces the amount of development effort focused on those issues, trading it instead for management of GC times. Whether that tradeoff is a net benefit depends on the context and system requirements.

          And that is what real software engineers do. They don't choose their tools based on which is the manliest and best for proving their coding prowess. They choose based on the nature of the problem and the resources available. Where GC interruptions can be tolerated, or safely scheduled, GC is a tool that automates away significant engineering effort. That's a good thing. Hard real-time systems generally don't tolerate GC very well, but virtually anything that interacts with people does tolerate brief (50 ms or less) GC pauses, and that's actually quite easy to achieve.

      • Umm, valgrind

    • I mean, my god, what are the change management and testing practices which allowed this to only be discovered in your real system?

      Don't know, probably just Government ineptitude. Let's ask free-market leaders how they handle things: Toyota (brake/accelerator pedals) or Chrysler / GM (remote access) or Boeing (Li-ion batteries) ... -- oh wait.

      • humans make mistakes. Get over it. Even when we try not to, we still make mistakes. And also, when shit goes horribly wrong, humans make incredible, crazy, logic-defying decisions that allow people to live when nobody should, even when no human mistake actually occurred.

        Sully landing the plane on the Hudson is a great example. Probably the only way that scenario works is to land the plane there. Right guy, in the right place, thinking crazy thoughts in an emergency.

        • humans make mistakes. Get over it.

          Agreed. Which, of course, was my point. Should I have included a "/sarcasm" tag?

    • by Rei ( 128717 ) on Wednesday August 19, 2015 @11:09AM (#50347465) Homepage

      So, I actually am a programmer for an ATC system...

      First off, this isn't as bad as it sounds as far as safety goes. One first needs to ask, "what is the purpose of an ATC system?" The simple answer is, "don't ever let two aircraft exist in the same location at the same time." So any two aircraft can be separated in a) time, b) location, or c) altitude, and so long as they meet the minimum safety distances, that's all okay. Complicating this is the great variety of hardware on the aircraft, communications methods and protocols, and gaps in the information available to you, plus the wide variety in ATC systems and how they talk to each other. And there's a lot of potential instability at each stage. So basically ATC systems are massive collections of "special cases" that need to be handled on top of the basics. Maybe some line in Denmark is garbling messages, leading to you being fed bogus data. Maybe some aircraft in India has buggy hardware that is for some reason spamming everyone on the network. Maybe you've got two different systems handling radar data, and one says the radars are all fine and the other says they're not. Maybe the aircraft says it was at X point at Y time but some radar says something different. These are the sorts of things we have to deal with on a weekly if not daily basis, and they cause what seem like they should be very simple pieces of software to become really huge systems.

      As mentioned, there can be lots of instability. Yep, it's true, these things can be rather buggy - both hardware and software. They're usually old designs that may have been poor from the beginning, but have had to be continually patched and patched over the course of decades. Don't like that? Throw some more funds in for new ATC systems designed from scratch; otherwise this is going to continue to be the reality (yeah, new subsystems do come in every now and then for various purposes, but old systems are slow to go away).

      So, instability and bugs can sound scary. But remember the goals of an ATC system: separation. So let's just say that you lose the whole system for a long time - what do you do? Well, you basically revert to paper, and you've got a LOT more phone calls to make. You have to allow for more separation, and because of the increased workload, you can't handle as many planes. So you have to greatly reduce the number of planes in your region - they have to divert or wait. It's big delays, which cost big money. But it's not like we just start guessing whether planes are going to run into something or not.

      Our software here is predominantly old C code with a little bit of C++, plus miscellaneous tools like yacc and lex. There are changelog entries dating back to the 80s - though those are the manual changelogs; it didn't go under revision control until the late 90s. Its core uses macros to an annoying degree to emulate object-oriented design in C; macros can be nested dozens of layers deep. It makes bugs very hard to find sometimes, but it's the core of the software, so it's not something that can be easily changed. So we do our best. Yes, there are "WONTFIX" bugs that we know about, and operators have documented procedures for working around them (usually involving restarting some module - the system is very modular, so you don't have to restart the whole thing to fix a part that's acting up). But we always prioritize fixing the things that get in the way of their work the most - there's a lot of direct back and forth. Again, safety always takes top priority, then throughput. Everything else is way down below on the priority list.

      Changes work through the following process. A report of a bug or feature request is made. Someone analyses it and, if they think it's worth working on, writes up a task and assigns it to a programmer. The programmer works on the task and, when they think it's ready, submits it for code review. Another programmer looks through all of the code and tries to see if they have any complaints. After any necessary back and forth to get things r

  • So sloppy, untested programming crashed the system. I'm willing to bet that if you take the hood off the system, it's written using high-level languages instead of C.
    • I kinda doubt that. My understanding is that most of the US's air-traffic control systems (and software) are ancient.

      • by 0123456 ( 636235 )

        I kinda doubt that. My understanding is that most of the US's air-traffic control systems (and software) are ancient.

        Somehow, I doubt it was 2,000,000 lines of assembly language.

      • Well, whatever this was coded in, it was recently upgraded ... you know, not ancient.

        The language it was written in doesn't matter. It was a new change, insufficiently tested, which failed in the real world on a corner case nobody anticipated.

        That's a pretty large failure of coding, testing, and deployment.

        It's a bunch of things, but really you'd expect the people responsible for it would have been a LOT more paranoid and rigorous about it.

        It's an air traffic system after all, around Washington for crying o

        • ... if "air traffic control offline around Washington" isn't begging to have a Bruce Willis movie, I don't know what is...

          "Die Hard Drive"?

          How about "Bjarne Stroustrup Must Die Hard"?

        • by Vlad_the_Inhaler ( 32958 ) on Wednesday August 19, 2015 @10:43AM (#50347245)

          I have actually seen something similar to this before, also involving an air traffic control system.

          They were having some problem in handling "Large Messages"; I am not sure of the exact details or circumstances - I was only peripherally involved. Anyway, the programmer wrote these to a file, then they were processed asynchronously and deleted. This minor change was tested - as usual at the site - by someone shooting an hour's production traffic through the test system and checking for unexpected aborts or other abnormalities. All was fine; the spooling file was 1% full.
          The patch went online. 4 days later (it was a Sunday morning and it was snowing) the file hit some limit and refused to accept new messages. At that moment things went "Keystone Cops".

          • All department heads were informed, except programming. Given that only one patch had been applied in the previous week, that was not very helpful. Headless chickens ran around trying to find a solution.
          • Standard practice in this type of situation was to switch to the backup/standby system. Since ATC data is very short lived, the backup system had an empty database which would then be populated dynamically. All "Station Chiefs" had to approve this step. One refused because he could not see any problem. Finally someone managed to make him understand what the problem was, then it was "oh yes, we are seeing that as well". His was the smallest station of course.
          • Standard procedure was also to switch to manual control - rather than automated - and cancel short-haul flights. The railways could take up the slack. This was done.

          The switch was duly made and everything was working again.
          It turned out that the deletion of the processed records had a bug. One hour of live data left the file 1% full. 100 hours . . . do the math. It took 5 or 10 minutes for the programmer to fix the problem, he could have done it live on the Sunday if anyone had bothered to tell him what was going on.

          One of the lessons from that is also relevant here - one hour of live data left the file 1% full. I'd bet that they were testing that the new feature worked, not looking for hidden side-effects.

          • by gstoddart ( 321705 ) on Wednesday August 19, 2015 @12:35PM (#50348175) Homepage

            It took 5 or 10 minutes for the programmer to fix the problem, he could have done it live on the Sunday if anyone had bothered to tell him what was going on.

            I have seen far too many occasions where some hotshot made an out of band code change, broke prod, and then said "oh, it's just a quick fix".

            It would have to be one hell of an emergency for live changes on a prod system to be anything other than a hanging offense. I've seen more problems caused by it than things fixed by it.

            I've experienced several outages caused by someone who was either thinking "it's just a quick fix", or was trying to sneak in a fix for something which shouldn't have left their desk in the first place.

    • While you *could* write an ATC system in C, why would you? Given that this system was envisioned and designed nearly 20 years ago, I have a feeling they used the accepted tools of the trade for the day and have upgraded to new platforms since.

      My best guess here is that what failed is the system engineering, followed by a failure to performance test. As you move from C++ applications to, say, Java, your memory and CPU requirements go up, way up. What may work GREAT in the lab may consume way too many resources

    • by godefroi ( 52421 )

      Surely! C has saved countless software systems from memory leaks over the years, not to mention other various classes of bugs that simply WOULD NOT EXIST had we stopped evolving programming languages past C.

    • by delt0r ( 999393 )
      What the fuck? If you use all the memory in C, you run out of memory like in every other fucking language! What is this tripe with programming languages as religions around here? You think the magic C pixie fairy is going to magic more memory into your system?
      • If you run out of memory because you didn't SELF MANAGE it correctly, it's still your fault. Memory leaks are always the fault of the developer, unless they're hardware based.
        • Memory leaks are always the fault of the developer, unless they're hardware based.

          And the first rule of software development is that it's always a hardware problem until proven otherwise!

    • by sjames ( 1099 )

      So you're saying C would never allow failure to free the mallocs? I'm not aware of that dialect, please enlighten us!

      • No, I'm not saying that at all. I'm saying that it's a combination of several factors:

        1. Sloppy, unprofessional programming, such as freeing an unallocated memory block.
        2. Modern languages that take over memory management for you.

        I'm pointing out C because when developers stop relying on managed memory and garbage collectors, they learn to be more responsible and in general avoid simple memory issues.
    • by Rei ( 128717 )

      I can't speak for their code, but ours is written in C. And... (don't throw tomatoes, I'm actively working to remedy this!)... it's C compiled without warnings even enabled, let alone -Werror. When you turn warnings on you get heavily, heavily spammed.

      Hopefully I'll have that aspect fixed within a few months, if other tasks don't eat up too much of my time.

      • Hey, as long as you have a plan, that's awesome :-).

        I'm currently an embedded systems developer and a web systems developer. ALL of my C code MUST compile with -Wall -Wextra -Wpedantic -Werror turned on and must pass strict valgrind scans.
        • by Rei ( 128717 )

          Our code doesn't even work with valgrind. It (again, no tomatoes!) uses shmat to seize a particular address space (clobbering whatever was there in the process) because it uses pointer addresses that are hard-coded into the program rather than being allocated dynamically by the operating system.

          I wish I was kidding. It's kind of like this [xkcd.com].
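
          For anyone who hasn't met this pattern, here is roughly what it looks like (the key and address below are invented for this sketch, not taken from any real ATC code). Attaching shared memory at a fixed address keeps pointers baked into the binary valid across processes, and it is exactly the kind of address-space assumption that makes tools like valgrind choke:

              #include <stdio.h>
              #include <sys/ipc.h>
              #include <sys/shm.h>

              #define FIXED_ADDR ((void *)0x60000000)  /* invented hard-coded address */

              int main(void)
              {
                  /* Invented key; 1 MiB segment. */
                  int shmid = shmget((key_t)0x41544300, 1 << 20, IPC_CREAT | 0600);
                  if (shmid < 0)
                      return 1;

                  /* SHM_RND rounds FIXED_ADDR down to an SHMLBA boundary. Code all
                   * over the program can then use absolute pointers into this
                   * region, because it is always mapped at the same place. */
                  char *base = shmat(shmid, FIXED_ADDR, SHM_RND);
                  if (base == (void *)-1)
                      return 1;

                  base[0] = 'x';
                  printf("attached at %p\n", (void *)base);
                  shmdt(base);
                  return 0;
              }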

  • by rockmuelle ( 575982 ) on Wednesday August 19, 2015 @09:46AM (#50346785)

    You can have poor memory management in any language.

    Sure, historically C/C++ have been known for memory leaks due to memory that's not freed, but in Java/Python/pick-your-favorite-garbage-collected-language, or using smart pointers in C++, all you need to do is have a container that keeps a reference to everything and nothing will ever go away. It's not hard to do.

    Based on the summary, it sounds like that's what happened. Some monitor views just kept a list of everything and the developer forgot to purge the lists when things went out of, er, scope.

    -Chris

    • by Jeremi ( 14640 )

      Just like you can die in a car accident, no matter what kind of car you drive. And yet, you're still safer in a Volvo than in a Pinto.

  • From the way it reads, the system could have allowed for, say, 256 spiffy windows. If those weren't getting deleted as expected, they could have drained that pool of spiffy windows no matter how much RAM the machine had.
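
    A hypothetical C sketch of that guess: a fixed table of windows where "delete" hides the entry but never frees its slot, so the table drains even with plenty of RAM to spare.

        #define MAX_WINDOWS 256

        struct window { int in_use; int hidden; };

        static struct window windows[MAX_WINDOWS];

        struct window *window_create(void)
        {
            for (int i = 0; i < MAX_WINDOWS; i++)
                if (!windows[i].in_use) {
                    windows[i].in_use = 1;
                    windows[i].hidden = 0;
                    return &windows[i];
                }
            return 0;               /* "out of memory", with RAM to spare */
        }

        void window_delete(struct window *w)
        {
            w->hidden = 1;          /* bug: vanishes from the screen... */
            /* missing: w->in_use = 0;  ...but never leaves the table */
        }
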
  • by account_deleted ( 4530225 ) on Wednesday August 19, 2015 @09:49AM (#50346811)
    Comment removed based on user account deletion
    • Re:QA process? (Score:5, Insightful)

      by Brett Buck ( 811747 ) on Wednesday August 19, 2015 @10:03AM (#50346927)

      It didn't get caught in testing because testing is by far the most expensive and time-consuming part of the development process, and is always the first thing to get cut/trimmed/"streamlined". Just like it has been forever.

      • It didn't get caught in testing because testing is by far the most expensive and time-consuming part of the development process, and is always the first thing to get cut/trimmed/"streamlined". Just like it has been forever.

        There is one more reason... Testing is the LAST thing you do before a release, so as the schedule slips to the right, the last task on the schedule ALWAYS gets squeezed into a smaller and smaller window. Less time means less testing.

      • by Zak3056 ( 69287 )

        It didn't get caught in testing because testing is by far the most expensive and time-consuming part of the development process, and is always the first thing to get cut/trimmed/"streamlined". Just like it has been forever.

        While what you say is, sadly, how the world actually works, the above should never occur with safety critical systems. "We'll fix the bugs in production" is absolutely unacceptable when your possible failure modes include dead people and hundreds of millions of dollars in damage.

        • Well, a big part of the distinction you are making (life-critical VS commodity software) has gotten lost by the various programming "cargo cults", where every problem is the same and every solution fits into some sort of stupid ritual.

      • by CODiNE ( 27417 )

        I'm amazed continuous integration isn't a software industry norm. At the bare minimum adding a regression test for every bug fix would slowly grow a really nice validation suite.

  • by RevWaldo ( 1186281 ) on Wednesday August 19, 2015 @09:53AM (#50346843)
    I'm betting someone just got some of the punch cards mixed up.

  • Depends a little on the OS... a while back it was a combination of OS/400, AIX, and MVS | OS/390 | z/OS.
  • Language: Ada (Score:5, Informative)

    by JumboMessiah ( 316083 ) on Wednesday August 19, 2015 @10:01AM (#50346901)

    While everyone speculates on GC vs heap vs what flavor is my coffee, ERAM approach systems use Ada as the language of choice.

    reference [iaeng.org]

  • ...than crashing. Well-designed systems do not die when running out of memory - they recognize the issue and, either at the general OS level or at the specific application level, begin shifting the memory requirements to storage. Yes, they run (much) slower - but it gives an opportunity for some system more aware of the big picture than the application (e.g. the operator) to prioritize and recover. As others have alluded to - how did this situation not get found in a proper testing process?
    • by 0123456 ( 636235 )

      Yeah, because what could possibly go wrong in an air traffic control system when the computers are thrashing like crazy as they run out of RAM?

      I've worked with some non-critical systems used in aviation, and they shut down and switch to the backup when they get close to running out of RAM. A few seconds' delay to swap in data about airliners that are travelling a few miles apart at a few hundred miles an hour could kill hundreds of people.

      • Yeah, because what could possibly go wrong in an air traffic control system when the computers are thrashing like crazy as they run out of RAM?

        I've worked with some non-critical systems used in aviation, and they shut down and switch to the backup when they get close to running out of RAM. A few seconds' delay to swap in data about airliners that are travelling a few miles apart at a few hundred miles an hour could kill hundreds of people.

        Very well put! The system should have failed over to a backup. Seconds could mean the difference between life and catastrophic death.

  • A quick search reveals Lockheed Martin used Ada 2005 primarily to implement ERAM. Ada's Vital Role in New US Air Traffic Control Systems http://www.iaeng.org/publicati... [iaeng.org] "The new Ada 2005 language has introduced more robust real-time and object-oriented capabilities based on user experience. Now it offers more safety and portability than Java, and better efficiency and flexibility. The language offers particular innovations which help make safety assurance less costly and further improves high integrity...
    • Re: (Score:2, Informative)

      by Anonymous Coward

      The backend code is implemented in Ada, but all of the display code is implemented in a mix of C and C++.

  • Nobody (Score:5, Funny)

    by U2xhc2hkb3QgU3Vja3M ( 4212163 ) on Wednesday August 19, 2015 @10:11AM (#50346999)
    Nobody expects the ERROR: OUT OF MEMORY.
  • Heard that somewhere before :)
  • You Said (Score:3, Informative)

    by dcw3 ( 649211 ) on Wednesday August 19, 2015 @10:20AM (#50347065) Journal

    But, you said that 8G was enough!

  • This is just indicative of America's crumbling infrastructure due to extreme ineptitude at the elected leadership level.
  • Even if the manual says that Win 95 no longer needs to be rebooted everyday like Windows 3.11, it's still a good idea.
  • by jeremyp ( 130771 ) on Wednesday August 19, 2015 @10:37AM (#50347185) Homepage Journal

    Ada and Java apparently

    http://dl.acm.org/citation.cfm... [acm.org]

  • the whole thing is probably coded with a stone tablet and chisel.

  • If this is still based on the 20+ year-old system, it's Ada.
