Software Bug Caused Qantas Airbus A330 To Nose-Dive 603
pdcull writes "According to Stuff.co.nz, the Australian Transport Safety Bureau found that a software bug was responsible for a Qantas Airbus A330 nose-diving twice while at cruising altitude, injuring 12 people seriously and causing 39 to be taken to the hospital. The event, which happened three years ago, was traced to an airspeed sensor malfunction, linked to a bug in an algorithm which 'translated the sensors' data into actions, where the flight control computer could put the plane into a nosedive using bad data from just one sensor.' A software update was installed in November 2009, and the ATSB concluded that 'as a result of this redesign, passengers, crew and operators can be confident that the same type of accident will not reoccur.' I can't help wondering just how a piece of code, which presumably didn't test its input data for validity before acting on it, could become part of a modern jet's onboard software suite?"
Bad software (Score:5, Funny)
This is from the same company that, while building the A380 megajet, upgraded half of its facilities to version 5 of its design software while the other half stuck with version 3/4, and did not make the file formats compatible between the two versions, resulting in multi-month production delays.
Point being, in huge projects, simple things get overlooked (with catastrophic results). My favorite is when we slammed a $20 million NASA/ESA probe into the surface of Mars at high speed because some engineer forgot to convert mph into kph (or vice versa).
Re:Bad software (Score:5, Informative)
No, it was when two different pieces of software were used to calculate thrust. The spacecraft software calculated thrust correctly, in newton-seconds.
The ground software calculated thrust in pounds force-seconds. This was contrary to the software interface specification, which called out newton-seconds.
The result was that the ground-calculated trajectory was more than 20 kilometers too close to the surface.
The engineers didn't "forget to convert", they failed to read and understand the specifications.
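The failure mode is easy to sketch. Here's an illustrative toy in C (made-up numbers, obviously not the actual Mars Climate Orbiter code): each side is internally consistent, yet every value crossing the interface is scaled by the lbf/N factor.

    #include <stdio.h>

    #define NEWTONS_PER_LBF 4.448  /* the factor-of-more-than-4 mentioned below */

    /* Ground software: computes an impulse and, contrary to the interface
       spec, emits it in lbf-s instead of N-s. */
    double ground_impulse(void) {
        double impulse_Ns = 100.0;            /* physically correct value */
        return impulse_Ns / NEWTONS_PER_LBF;  /* written out as lbf-s */
    }

    int main(void) {
        /* Trajectory model: reads the number and, per the spec, treats it as N-s. */
        double assumed_Ns = ground_impulse();
        printf("true impulse: 100.0 N-s, modeled impulse: %.1f N-s\n", assumed_Ns);
        /* Every maneuver is under-modeled by a factor of ~4.45; the error
           compounds over months of small trajectory-correction burns. */
        return 0;
    }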
Re:Bad software (Score:5, Insightful)
From out here it's hard to distinguish between 'forgot what the specification said they should do' and 'didn't bother to read it in the first place'. Even if your 10 testing guys knew it was in the specification, that doesn't mean they necessarily understood how to test it properly; maybe they did some sort of relative test (input of x should come out to be 10x, in a simple example). The problem with using the wrong unit of measure is that the math is, in isolation, all correct and self-consistent; it's just off by a constant, which happens to be enough to cause catastrophic failures.
In the case of the aircraft in the article using only one sensor: did it read in data from all the sensors and just ignore some of the input? Did it average the inputs (which, naively, isn't a bad answer, but fails badly when you have really wonky data)? Was there some race condition in its resolution between multiple sensors? That's a fun one: maybe it works on polling intervals, and in very rare cases it can read data from only one sensor and not the others, and so on. Even if you know the specification, it can be tricky to implement (and to realize all of the things that can go wrong; the people doing the calculations aren't necessarily experts in distributed systems, they might be experts in physics and engineering).

Doing something simple like taking an average of an array can fail in really bad ways. What if the array isn't populated on time? How do you even know if the array is fully populated? How does my average handle out-of-bounds numbers? How about off-by-10^6 numbers? Does old data just hang out in those memory addresses, and if so, what happens to it? A lot of those underlying problems, especially how the array (or in this case probably a handful of floats) is populated, and whether the code knows it is properly populated, are handled by the implementation of the language, which is well beyond the people who actually do most of the programming. And not everyone thinks 'hey, for every line of code I need to go and check that the assembler version doesn't have a bizarre race condition in it', assuming you could even find the race conditions in the first place.
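To make the 'wonky data' point concrete, a toy sketch (not the A330's actual logic) of why a plain average fails on one bad channel while a median-of-three shrugs it off:

    #include <stdio.h>

    /* Plain average: one wild value drags the result far off. */
    double average3(double a, double b, double c) {
        return (a + b + c) / 3.0;
    }

    /* Median of three: a single outlier, however large, is ignored. */
    double median3(double a, double b, double c) {
        if (a > b) { double t = a; a = b; b = t; }
        if (b > c) { double t = b; b = c; c = t; }
        if (a > b) { double t = a; a = b; b = t; }
        return b;
    }

    int main(void) {
        /* Two sane angle-of-attack readings and one faulty spike. */
        double s1 = 2.1, s2 = 2.3, faulty = 50.9;
        printf("average: %.2f deg (corrupted)\n", average3(s1, s2, faulty));
        printf("median:  %.2f deg (outlier rejected)\n", median3(s1, s2, faulty));
        return 0;
    }

Of course, a median assumes only one channel goes bad at a time, which is exactly the assumption that bites you when sensors share a design flaw.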
Re: (Score:3, Informative)
No, they were taking a value in N-s but interpreting it as lbf-s. This was not a rounding error; all ground calculations were off by a factor of more than 4.
don't just wonder, learn (Score:5, Interesting)
"I can't help wondering just how could a piece of code, which presumable didn't test its' input data for validity before acting on it, become part of a modern jet's onboard software suit?""
How about reading the darned final report, conveniently linked in your own blurb? There was lots of validity checking. In fact, some of it was relatively recently changed, and that accidentally introduced this failure mode (the 1.2-second data spike holdover). (Also, how about someone spell-checking submissions?)
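For the curious, the gist of that 1.2-second holdover, heavily simplified (assumed semantics for illustration, not the actual FCPC code): reject an implausible spike by holding the previous value for 1.2 seconds; the hole is that a second spike still present when the window expires gets accepted.

    #include <stdio.h>
    #include <math.h>

    #define HOLDOVER_S   1.2   /* memorization period after a detected spike */
    #define SPIKE_DELTA  10.0  /* jump (deg) treated as physically implausible */

    static double last_good  = 2.0;  /* last accepted angle-of-attack value */
    static double mask_until = 0.0;
    static int    masking    = 0;

    /* Simplified spike filter: on an implausible jump, hold the previous value
       for 1.2 s. The flaw: when the window expires, the next sample is accepted
       without being re-checked, so a spike still present at t+1.2 s gets through. */
    double filter_aoa(double value, double now) {
        if (masking) {
            if (now < mask_until)
                return last_good;          /* inside the window: hold */
            masking = 0;
            last_good = value;             /* window over: accept blindly */
            return value;
        }
        if (fabs(value - last_good) > SPIKE_DELTA) {
            masking = 1;
            mask_until = now + HOLDOVER_S; /* start the 1.2 s memorization */
            return last_good;
        }
        last_good = value;
        return value;
    }

    int main(void) {
        printf("t=0.0: %.1f\n", filter_aoa(50.6, 0.0)); /* spike: masked    */
        printf("t=0.5: %.1f\n", filter_aoa(2.1,  0.5)); /* held value       */
        printf("t=1.3: %.1f\n", filter_aoa(50.6, 1.3)); /* spike: accepted! */
        return 0;
    }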
Re:don't just wonder, learn (Score:5, Interesting)
Mod parent up. Anyhow, information from a sensor may be valid but inaccurate. I deal with these types of systems regularly (not in aircraft, but control systems in general), and it is sometimes impossible to tell without extra sensors. It's one thing to detect a "broken wire" fault, and a completely different thing to detect a 20% calibration fault, for example, so validity checking can only take you so far. It's actually impressive that the failure mode in this case caused so little damage.
Re:don't just wonder, learn (Score:5, Interesting)
Agreed, valid but inaccurate.
Surely such an airliner has more than one airspeed sensor, no? Relying on just one sensor for such a vital piece of information would be crazy. And that makes it even more surprising to me that a single malfunctioning airspeed sensor can cause such a disaster. But then it's the same kind of issue that's been blamed for an Air France jet crashing into the ocean: malfunctioning sensors, in that case due to ice buildup IIRC, and since all the sensors were of the same design, this caused all of them to fail.
Another thing: I remember that when Airbus introduced their fly-by-wire aircraft, they stressed that one of the safety features to prevent problems caused by computer software/hardware bugs was to have five different flight computer systems designed and built independently by five different companies, using different hardware. That way, if one computer malfunctions, the other four can override it, and a majority of those computers has to agree before an airplane control action is undertaken.
Re:don't just wonder, learn (Score:4, Insightful)
I'm sure they must have more than one sensor. Perhaps more than one sensing principle is involved. The problem with having multiple computers vote is that we tend to solve problems in similar ways, so if there is a logic error in one machine (as opposed to a typo), it is fairly likely to be repeated in at least two of the other machines. Some sets of conditions are very hard to predict and design for, even in the simplest systems. I often see code (when updating a system) that does not account for every possibility, because either everyone considers that combination unlikely, or nobody thought of it in the first place (until it happens, of course...). Being a perfectionist in this business is very costly in development time.

The fact is, a complex system such as an aircraft could easily be beyond human capability to perfect the first time, let alone test completely.
Re: (Score:3)
The way to prevent typos (and with that, bugs) being copied to other systems is to make sure your systems are designed by independent companies. The chance of having the exact same bugs in two independently developed systems is really small. Make that three different systems, and set up a majority vote system to be pretty sure you've got the correct value.
Aircraft are very complex systems indeed. Yet the results of failure are generally pretty bad, and it's hard to make an aircraft fail safe - so everything
Re: (Score:3)
That only covers mistakes like typos. The bugs that independent systems do not cover are logical and conceptual errors: if the functional specification is wrong to start with, all the versions will have similar or identical issues. Fail-safe is a tricky thing in an aircraft. Systems and sensors will fail, no matter how many there are or how well they are designed. If two out of three airspeed sensors fail, for example, there will be big problems. The point being, there is no 100% safety.
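To make the voting idea concrete, a toy 2-of-3 voter (an illustrative sketch only; real flight control voting is far more involved, and as said above it can't catch a common specification bug, since all channels would implement it identically):

    #include <stdio.h>
    #include <math.h>

    #define TOLERANCE 0.5  /* independent channels rarely agree bit-for-bit */

    /* 2-of-3 voter: return the value confirmed by at least one other channel,
       or flag failure if no two channels agree within tolerance. */
    int vote3(double a, double b, double c, double *out) {
        if (fabs(a - b) <= TOLERANCE) { *out = (a + b) / 2.0; return 0; }
        if (fabs(a - c) <= TOLERANCE) { *out = (a + c) / 2.0; return 0; }
        if (fabs(b - c) <= TOLERANCE) { *out = (b + c) / 2.0; return 0; }
        return -1;  /* no majority: report the whole channel set as failed */
    }

    int main(void) {
        double v;
        if (vote3(252.0, 251.8, 199.0, &v) == 0)
            printf("voted airspeed: %.1f kt (one channel outvoted)\n", v);
        else
            printf("no agreement: reject all channels\n");
        return 0;
    }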
Re: (Score:3, Informative)
The official report for that came out a week or so ago. The only effect that the malfunctioning sensors had in that case was to put the copilots back in control of the plane so they could proceed to attempt to climb above the limits of the aircraft, and continue to pull ba
Re:don't just wonder, learn (Score:5, Informative)
How about reading the darned final report. [atsb.gov.au]
I highly recommend that. It's a good read. This was not a sensor problem. The problem actually occurred in the message output queue of one of the CPUs, and resulted in data being sent with the label of one data item and the data of another. The same hardware unit had demonstrated similar symptoms two years earlier, but the problem could not be replicated. This time, they tried really hard to induce the problem, with everything from power noise to neutron bombardment, and were unable to do so.
There are several thousand identical hardware units in use, and one of the others demonstrated a similar problem, once. No other unit has ever demonstrated this problem. The investigators are still puzzled. The unit which produced the errors has been tested extensively and the problem cannot be reproduced. They considered 24 different failure causes and eliminated all of them. It wasn't a stuck bit. It wasn't program memory corruption (the code gets a CRC check every few seconds, and the code in ROM was what it was supposed to be; thousands of other units run exactly the same software). It wasn't a single flipped bit. It wasn't a memory timing error. It wasn't a software fault. It looked like half of one 32-bit word was combined with half of another 32-bit word during queue assembly, on at least some occasions. But there are errors not explained even by that.
Very frustrating.
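To picture that failure signature, here's a purely illustrative sketch (the investigators never found the mechanism; the labels and layout below are simplified stand-ins for ARINC-429-style labeled words):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy labeled word: low bits say which parameter this is, the rest
       carry the data. (Layout simplified for illustration.) */
    #define LABEL_ALTITUDE 0x65
    #define LABEL_AOA      0x61

    static uint32_t pack(uint8_t label, uint32_t data) {
        return ((uint32_t)label) | (data << 8);
    }

    int main(void) {
        uint32_t alt_word = pack(LABEL_ALTITUDE, 37000); /* altitude, ft */
        uint32_t aoa_word = pack(LABEL_AOA, 21);         /* AOA, 0.1 deg units */

        /* Hypothesized corruption: half of one queued word combined with
           half of another during queue assembly. The result still parses
           as a perfectly well-formed labeled word... */
        uint32_t corrupt = (alt_word & 0xFFFF0000u) | (aoa_word & 0x0000FFFFu);

        printf("label 0x%02X, data %u\n", corrupt & 0xFFu, corrupt >> 8);
        /* ...but it attaches one parameter's data to another parameter's
           label, so downstream consumers see a wildly wrong "valid" value. */
        return 0;
    }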
I had the problem once (Score:5, Interesting)
Posting anon because I moderated.
I had a very similar problem once with firmware on a TI DSP. The symptom was that a Peltier element for controlling laser temperature would sometimes freak out and start burning so hot that the solder melted. After some debugging, it turned out that somewhere between the EEPROM holding the setpoint and the AD converter, the setpoint value got corrupted.

The cause turned out to be a 32-bit variable that was uninitialized, but normally set to 0 by the stack initialization code.

Only the first 16 bits were filled in, because that was the width of the value stored in the EEPROM; the programming bug was that the other 16 bits were left as-is. More than 99% of the time, this was not a problem. But if a specific interrupt happened at exactly the wrong moment during initialization of the stack variable, that variable was filled with garbage from an interrupt register value. Since the calculations for the setpoint used the entire 32 bits (it was integer math), it came out with a ridiculously high setpoint.
Having had to debug that, I know how hard it can be if your bug depends on what is going on inside the CPU or related to interrupts.
There may be a window of less than a microsecond for this bug to happen, so reproduction can be nigh on impossible.
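For anyone who hasn't met this bug class, a hypothetical C sketch of it (not the actual TI firmware; the "stack residue" parameter simulates whatever the interrupt left behind):

    #include <stdio.h>
    #include <stdint.h>

    /* 16-bit temperature setpoint as stored in EEPROM (e.g. 25.00 C as 2500). */
    static uint16_t eeprom_read_setpoint(void) { return 2500; }

    /* The bug: a 32-bit stack variable gets only its low 16 bits written.
       Usually the stack slot happens to be zeroed, so the code "works". */
    uint32_t load_setpoint(uint32_t stack_residue) {
        uint32_t setpoint = stack_residue;  /* simulates the uninitialized slot */
        setpoint = (setpoint & 0xFFFF0000u) | eeprom_read_setpoint();
        return setpoint;
    }

    int main(void) {
        /* Common case: the slot contained 0, the setpoint is sane. */
        printf("usual run: %u\n", load_setpoint(0x00000000u));
        /* Rare case: an interrupt left garbage in the slot first. */
        printf("unlucky:   %u\n", load_setpoint(0x7FFF0000u)); /* absurdly hot */
        return 0;
    }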
This is why I like fuzzing (Score:5, Insightful)
It looked like half of one 32-bit word was combined with half of another 32-bit word during queue assembly on at least some occasions. But there are errors not explained by that.
This is why I like fuzzing: sending random and/or corrupted data to software to evaluate the software's robustness and sensitivity to corrupted inputs. For a project like this, I would feed the software simulated inputs from regression tests and recorded data from actual flights, fuzzing each playback; repeat. Let a system sit in the corner running such tests 24/7.
In theory some permutation of the data should eventually resemble what you describe.
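A bare-bones sketch of the idea: fuzz a made-up decode routine against a plausibility oracle (everything here is hypothetical; a real harness would mutate recorded flight data, minimize failing inputs, and feed them back into the regression suite):

    #include <stdio.h>
    #include <stdlib.h>
    #include <stdint.h>

    /* Stand-in for the unit under test: decodes a raw 32-bit sensor word
       into an angle of attack in degrees. Entirely made up. */
    static double decode_sensor_word(uint32_t raw) {
        return (double)((raw >> 8) & 0xFFFF) / 100.0;
    }

    int main(void) {
        srand(12345);  /* fixed seed so a failing run can be replayed */
        for (long i = 0; i < 10000000; i++) {
            /* Pure random input; recorded-data mutation would go here too. */
            uint32_t raw = ((uint32_t)rand() << 16) ^ (uint32_t)rand();
            double aoa = decode_sensor_word(raw);

            /* Robustness oracle: output must stay physically plausible. */
            if (aoa < -30.0 || aoa > 30.0) {
                printf("suspect input 0x%08X -> %.2f deg\n", raw, aoa);
                break;  /* in practice: log it, minimize it, keep going */
            }
        }
        return 0;
    }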
Re:This is why I like fuzzing (Score:5, Interesting)
True ... but you may not ever have enough time to hit all the corner cases.
If it's a single 32-bit word that can cause the issue, then yes, you can go through every single permutation fairly quickly. There are only 4,294,967,296 of them; nothing a computer can't handle.

Suppose for a moment that the issue is caused not by one single faulty piece of data, but by two right after each other; essentially, a 64-bit word causes the issue. Now we're looking at 18,446,744,073,709,551,616 permutations. Quite a bit more, but not impossible to test.
Now suppose that the first 64-bit word doesn't cause the fault on its own, but "simply" causes an instability in the software. That instability will be triggered by another specific 64-bit word. Now we're looking at 3.40282367 x 10^38 permutations.
Now, keep in mind that at this point, we're really looking at a fairly simple error triggered by two pieces of data. One sets it up, the other causes the fault.
Now let's make it slightly more complex.
The actual issue is caused by two different error conditions happening at once. If each is like the case above, we're now looking at, essentially, a 256-bit word. That's 1.15792089 x 10^77 permutations.

In comparison, the world's fastest supercomputer does 10.51 petaflops, which is 10.51 x 10^15 operations per second. It would take that computer 0.409 microseconds to go through all permutations of a 32-bit word, about 30 minutes for a 64-bit word, on the order of 10^15 years for a 128-bit word, and 10^53 years for a 256-bit word.

Yes, you can test every single permutation, if the problem is small enough. But the problem with most software is that it really isn't small.

Even if we are only talking about 32-bit words causing the issue, will it happen every time that single word is issued, or do you need specific conditions? How is that condition created? As soon as the issue becomes even slightly complex, it becomes essentially impossible to test for exhaustively.
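The arithmetic above, for anyone who wants to replay it (one test per floating-point operation is wildly optimistic, which only strengthens the point):

    #include <stdio.h>
    #include <math.h>

    #define FLOPS 10.51e15  /* "world's fastest supercomputer", ~10.51 petaflops */

    static void report(int bits) {
        double perms = pow(2.0, (double)bits);   /* input permutations */
        double seconds = perms / FLOPS;          /* one test per op: optimistic */
        printf("%3d-bit space: %.3e permutations, %.3e s (%.3e years)\n",
               bits, perms, seconds, seconds / 3.156e7);
    }

    int main(void) {
        report(32);   /* ~0.4 microseconds */
        report(64);   /* ~30 minutes */
        report(128);  /* ~10^15 years */
        report(256);  /* ~10^53 years */
        return 0;
    }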
it's more complicated than that (Score:3, Interesting)
We're going to see a huge change in programming methods coming pretty soon. Today, A.I. is still math- and computer-based. The problem is that your data, your input, and all of the algorithms you write can result in a plane nose-diving, even though no human being has ever chosen to nose-dive under any scenario in a commercial flight.
Why was an algorithm written that could do something that no one has ever wanted to do?
The shift is going to be when psychology takes over A.I. from the math geeks. It'll be the first time that math becomes entirely useless because the scenarios will be 90% exceptions. It'll also be the first time that psychology becomes truly beneficial -- and it'll be the direct result of centuries of black-box science.
That's when the programming changes to "should we take a nose-dive? has anyone ever solved anything with a nose-dive? are we a fighter jet in a dog fight like they were?" Instead of what is it now: "what are the odds that we should be in a nose-dive? well, nothing else seems better."
Re:it's more complicated than that (Score:4, Insightful)
Instead of what is it now: "what are the odds that we should be in a nose-dive? well, nothing else seems better."
Probably more like, "the sensor spec sheet says it's right 99.99999% of the time. may as well assume it's right all the time".
The devil almost surely lives on a set of zero measure.
Re:it's more complicated than that (Score:5, Interesting)
yup. all the while forgetting that while the altimeter shows altitude, it rarely actually measures distance to the ground; it measures air pressure, and then assumes an awful lot.
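the "assumes an awful lot" part is the standard atmosphere model itself. a sketch with the usual ISA constants baked in (sea-level pressure of 1013.25 hPa, 15 C, a fixed lapse rate; none of which the real atmosphere is obliged to honor):

    #include <stdio.h>
    #include <math.h>

    /* Pressure altitude from static pressure, per the ISA model. */
    static double pressure_altitude_ft(double static_hPa) {
        return 145366.45 * (1.0 - pow(static_hPa / 1013.25, 0.190284));
    }

    int main(void) {
        printf("1013.25 hPa -> %6.0f ft\n", pressure_altitude_ft(1013.25)); /* 0       */
        printf(" 696.80 hPa -> %6.0f ft\n", pressure_altitude_ft(696.8));   /* ~10,000 */
        printf(" 226.30 hPa -> %6.0f ft\n", pressure_altitude_ft(226.3));   /* ~36,000 */
        return 0;
    }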
Re: (Score:3)
Interesting one indeed. Altitude could be a tough thing to measure.
For starters: what is one's current altitude? What is your reference point? The ground level at that point? That changes quickly when passing over mountainous terrain. Or the height compared to sea level? Which is also tricky, as the earth's gravitational field is not uniform and sea level is far from a perfectly flattened sphere around the Earth's centre.
And how about GPS based altitude measurements? That's easily accurate to within a few meters, less than the size
Re: (Score:3, Informative)
I wouldn't be calling other people stupid. Altimeters are also used to maintain aircraft separation around busy airports, avoid bad weather etc. Your assertion that everything other than not hitting the ground is a use that is "just for fun" is ridiculous.
Re:it's more complicated than that (Score:5, Insightful)
Speaking as a pilot, I care a great deal where I am right now because it may affect whether I'm going to hit another plane. I've been close enough to see the crew of another plane and felt safe because I first spotted him nearly two miles out and knew where he was the whole time, and I've leveled off out of a take-off to see another plane inside of a quarter mile and was shaken by the experience. I know that a quarter mile seems like a long way, but when converging airspeeds are in the range of 150 knots, there's very little time between seeing him and a collision, and I want to know when someone is passing 500 feet above or below me or is on a potential collision course.
We maintain distance (something that falls into your definition of "everything else") for a reason. My plane's max cruising speed is only about 130 knots, but the Baron over there has a max speed in excess of 200 knots. If we're both tooling around at max and closing on reciprocal courses, that's a potential closing speed of over 330 knots, about 5.5 miles per minute. If we're two miles apart, we have less than 30 seconds to see each other and properly maneuver. I've also had a plane pass over me close enough that I could hear his engine over mine, and that's the last time I want to hear that sound.
I measure where I am because that is by far the most important thing. Where I will be is secondary. The basic rules of piloting: aviate, navigate, communicate. Fly the plane as it is, figure out where you're going, tell someone where you're going. Notice that the first is about where I am right now. The second deals with where I'm going to be, because I almost always have options, even if it means turning around and going back where I came from.
You're either not a pilot, or one who I don't want to be within 100NM of.
Re:it's more complicated than that (Score:4, Funny)
A better use of psychology will be to examine the heads of anyone who wants to throw maths out of the window and engage psychologists when designing AI algorithms.
Re: (Score:3)
We're going to see a huge change in programming methods coming pretty soon. Today, A.I. is still math- and computer-based. The problem is that your data, your input, and all of the algorithms you write can result in a plane nose-diving, even though no human being has ever chosen to nose-dive under any scenario in a commercial flight.
There are some humans alive today who have wisely done so, even to the point of causing injuries, to recover from stalls both real and imagined.
Re: (Score:3)
absolutely. and out of all who have, look at what it took for them to choose to do so, and look at how many times it's happened. it takes a huge decision for a pilot to decide to do it. it's not a single reading from a single instrument.
i'd say that there's no single malfunctioning device that could get a pilot to do that. in fact, I don't think any incorrect information could do it. the only malfunction to make it happen would probably need to be in the pilot.
Re: (Score:3)
Is that something you are saying from knowledge or just making up? I was under the impression that getting the nose pointed down was a fairly 'normal' thing for a pilot to do when faced with a stalling plane. Indeed, keeping the nose up [flightglobal.com] can be precisely the wrong thing to do.
Re: (Score:3)
lowering the nose, yes, absolutely. nose-dive, no. the kind of thing that injures passengers is not standard anything.
Re: (Score:3)
That's assuming that the computer knows what a "nose-dive" even is, or why it's (usually) a bad thing. It would have to know every problem, every tactic, and every risk, and nothing would actually be safer, though the program would be far more complex.
Instead, the "psychological" program thinks "We're going a lot slower than we should for this altitude. Oh no! We're going to stall, and it's only by sheer luck that we haven't already! Why are we this high, anyway? The pilot told me to go this high, but mayb
Re: (Score:3)
I'm going to assume you're not trolling at this point, though the utter rejection of math makes it unlikely you're being serious. Oh well. It must be dinnertime.
there is absolutely no way that your brain does calculus in order to walk around an obstacle.
Actually, yes, the brain does some basic physics computations (including calculus) naturally. The answers aren't exact, and there are no numbers given as output, but the process is straightforward: to predict the path of an obstacle (or several), first watch for a moment to determine the rates of change (velocity v and acceleration a), then apply t
Evolutionary anthropology (Score:3)
Basing A.I. on psychology will be only a stopgap measure on the way to the true solution to this sort of problem: basing A.I. on evolutionary anthropology. You see, both the crew and the passengers can be modeled as a tribe, trying to adapt their stable-trajectory-based culture to changing conditions, namely a nose dive. As more and more air tribes experience such disruptions to their familiar environment, you will find that some develop better coping strategies than others. After a number of generation
Re: (Score:3)
certainly better. but anything they do which translates input into output suffers from the same lack of decision-making in the middle. There needs to be a step, the amygdala step, where a decision is questioned: the official opposition step. And it's not about checking over the work. The work is fine. It's about self-doubt based purely on the most important observation available: I've been wrong before.
Re: (Score:3)
As I understand it, the problem with "voting" is that it requires the three systems to produce identical (or very close to identical) results under non-fault conditions. That means you have to write an extremely detailed specification, and that in turn means you can end up with all the teams implementing the same bug, either because the bug was in the specification or because the specification, while strictly correct, led all the teams into making the same mistake.
How should a computer behave? (Score:3)
I can't help wondering just how a piece of code, which presumably didn't test its input data for validity before acting on it, could become part of a modern jet's onboard software suite?"
---
I'm surprised there are people who think that we have the technology to program computers to make decisions about how to control things like airplanes better than a human being can.
Computers excel at solving mathematical problems with definitive inputs and outputs, but our attempts to translate the problem of controlling an airplane, or an organism, into a simple circuit...will necessarily be limiting.
We can only test that the computer program will behave as expected; there is no test to prove that the behavior we attempted to implement is actually a "good" way to behave under all circumstances.
Re: (Score:3)
Take a look at this [wikipedia.org] incident. The autopilot did everything right; it was lack of action, poor decision-making, and disorientation on the pilots' part that caused a 747 to roll out of control.
The pilots did the following things wrong;
1. Failed to descend to the correct altitude before attempting an engine restart.
2. Failed to notice the extreme inputs the autopilot was using, which did not correct the roll (the pilot should have used some rudder to help the autopilot).
3. Became fixated on the engine issue when he should have l
What? (Score:5, Informative)
What are you, some kind of person that doesn't read the actual articles or documents? Oh wait... this is Slashdot. Here, let me copy-paste some text for you.

So there you go: there really was validity checking performed. Multiple times per second, in fact, by three separate, redundant systems. Unfortunately, all three systems had the bug. Here is the concise summary for you:
How did it happen? (Score:4, Funny)
Yes it was a software problem, but .... (Score:5, Informative)
I have worked in and around the safety critical software industry for over 20 years. The level of testing and certification that the flight control software for a commercial aircraft is subjected to far exceeds any other industry I'm familiar with. (I'm willing to be educated on nuclear power control software however.)
The actual problem on the Qantas jet was a latent defect that was exposed by a software upgrade to another system. So the bug was there for a long time and I'm sure there are still others waiting to be found. But this doesn't stop me getting on a jet at least twice a week.
As a software professional and a nervous flyer, do problems with the aircraft software scare me? No, not really. What scares me is the airline outsourcing maintenance to the lowest bidder in China, the pilots not getting enough break time, the idiotic military pilot who ignores airspace protocol, and the lack of English language skills among air traffic controllers and cockpit crew across the region where I fly (English is the international standard for air traffic control).
A good friend is a senior training captain on A330s, and in all the stories he tells, software is barely mentioned. What gets priority in the war stories is the human factors and general equipment issues: dead nav aids, dodgy radios, stupid military pilots. One software story was an Airbus A320 losing 2 1/2 out of 3 screens immediately after takeoff from the old Hong Kong airport. The instructions on how to clear the alarm condition and perform a reset were on the "dead" bottom half of one of the screens.
A great example of software doing its job is TCAS, the Traffic Collision Avoidance System (http://en.wikipedia.org/wiki/Traffic_collision_avoidance_system). To quote my friend: "If it had lips, I'd kiss it." It's saved his life, and the lives of hundreds of passengers, at least twice. Both times it was through basic human error on the part of the pilot of the other aircraft.
One final thought: on average, about 1,000 people die in commercial aviation incidents each year worldwide (source: aviation-safety.net). In the USA, over 30,000 people die in vehicle accidents every year.
... injuring 12 people seriously and causing 39 (Score:4, Insightful)
injuring 12 people seriously and causing 39 to be taken to the hospital.
That is why you keep your safety belt fastened.
If you don't like the feeling, loosen it a bit, but keep it closed.
I really wonder why people keep taking such a pointless risk and unbuckle their seat belts right after takeoff.
Avionics certification (Score:4, Interesting)
If you saw the procedures required to get airworthiness certification from the FAA for a critical piece of software, you would shake your head in disbelief. It is almost all about ensuring that every line of code is traceable to (and tested against) a formal requirement somewhere. In spite of greatly increasing software development costs (due to the additional documentation and audit trails required), the procedures do amazingly little to ensure that the requirements or code are actually any good, or that sound software engineering principles are employed. It does not surprise me that GIGO situations occasionally arise -- it is perfectly plausible that a system could meet the certification criteria but shit's still busted because the formal requirements didn't completely capture what needed to happen.
The cost of compliance can also warp the process. A co-worker once told me a story about an incident that happened years ago at a former employer of his. A software system with several significant bugs was allowed to continue flying because the broken version had already received its FAA airworthiness certification. A new version which corrected the bugs had been developed, but getting the new version through the airworthiness certification process again would've been too costly; so the broken version was allowed to continue flying.
Look up "DO-178B" sometime if you're curious...
Re:What about Google driverless car? (Score:5, Insightful)
sure, but the number of accidents will likely still be fewer than those caused by human drivers.
Re:What about Google driverless car? (Score:5, Insightful)
That's actually why Airbus trusts sensor input over the "pilot"; Boeing believes the opposite. I'm inclined to side with Airbus, in that the majority of accidents are human error rather than computer error.
The problem with aviation accidents is the relatively small sample size. With cars there will be much better data (i.e. more data points).
If anything, computer-driven cars will be better: because of safety "fears" like the OP's, they will be programmed to be cautious. They have to be better at handling conditions than human operators; otherwise it's instant blame. They have to be better to the degree that you can blow the stats out of the water, e.g. when the first computer-driven car hits a person, they need to be able to say "well, based on hours on the road, if a human had been driving it would have hit 30 kids by now".
Re:What about Google driverless car? (Score:5, Insightful)
That's actually why Airbus trusts sensor input over the "pilot"; Boeing believes the opposite. I'm inclined to side with Airbus, in that the majority of accidents are human error rather than computer error.
Sometimes, as on a flight like AF447, the computer doesn't know jack either and gives up the ghost. On the AF447 flight (an Airbus A330), the pitot sensors apparently gave inconsistent readings and the autopilot disengaged. What ensued is what can happen when you have error-prone pilots and a computer that doesn't know what the hell to do to help them. In these situations, I think it's prudent to have a system that defaults to the pilots, as if they knew what to do, once it knows the sensors have crapped out; apparently even Airbus agrees with this. Unfortunately, it appears the AF447 pilots were not up to the challenge in this circumstance.
Re:What about Google driverless car? (Score:4, Interesting)
The Airbus will also change the throttle to the engines without moving the throttle levers, whereas the Boeing will move the levers to where the computer set the throttle. When the autopilot takes a crap and you put your hands on the throttle, you must remember that the controls are lying to you, and act accordingly.
Re:What about Google driverless car? (Score:5, Informative)
Sure - the flight in question was British Midland Flight 92.
The situation I mentioned came about because the pilots shut down the right engine due to vibration and the presence of smoke in the cabin. Prior to the 737-400 series, cabin air was taken only from the right engine, so engine vibration plus smoke in the cabin suggested the right engine was at issue. However, in this case it was the left engine which had actually failed.
While the left engine was failing, the autothrottle automatically adjusted the fuel flow into it in order to maintain its thrust level. This had the effect of causing an asymmetric thrust selection between the left and right engines; however, the throttle lever actuator only indicated a physical position for the right engine at its lower thrust level, meaning the autothrottle was actually selecting a higher value that was not reflected in the positions of the throttles (the two throttle levers are linked to one actuator when under autothrottle control, so they can really only show the thrust level of one of the engines; in this case, the right engine).

When the pilots turned off the autothrottle to power down the right engine, the left engine's selected thrust returned to the physical position of its thrust lever, which reduced the vibration to the point where the flight crew thought they had indeed shut down the correct engine.

When they were on approach to East Midlands Airport, they increased thrust on the left engine, which caused it to fail completely, and the aircraft crashed short of the runway.
This system was highlighted in the crash report, along with a number of other issues with the 737-400 design; Boeing did in fact have to ground a large number of aircraft before the fix was deployed to the delivered fleet. It was not the sole cause of the crash, but it was something heavily highlighted in the chain of events.
Re: (Score:3)
"Which is actually Airbus relies on sensor input over the "pilot"
Fly-by-wire flight controls by their nature treat pilot inputs as a "request" which they attempt to implement with their parameters such as G-limiting and stability augmentation.
Lose enough sensors (unlikely) or get the wrong inputs and you can lose control.
Re:What about Google driverless car? (Score:5, Interesting)
Back in my Finnish Air Force days I talked to a captain who had flown the F-18C during his last three years of active flying. He said that when you're straight and level in the Hornet and peek over your shoulder, you'll probably see the ailerons swaying back and forth as the computer tries to keep the plane stable.
Re:What about Google driverless car? (Score:5, Informative)
Re:What about Google driverless car? (Score:5, Funny)
Except that the designers of the software didn't take all possible situations into account. For example, any Fly By Wire Airbus will automatically pitch up if speed increases too far above the maximum airspeed, even when flown manually. This may be a good idea when the airplane is diving (the most likely cause for overspeed), but not when it's straight and level with other traffic immediately above!
Except if that other traffic is also an Airbus.
Re:What about Google driverless car? (Score:4, Interesting)
*blinks* You're not well versed in the effect of turbulence on localized airspeed or altitude, are you? The sensors will report airspeeds that are only possible in a dive, and combined with a loss of altitude, even though the angle of attack is level or steady, that could easily cause software to attempt to pull out of the "dive". That assumes the plane is allowed to override human input, which is a seriously fucking asinine design if true.
Re:What about Google driverless car? (Score:4, Informative)
If I had written VMO or MMO instead of "maximum airspeed", you wouldn't have understood what I wrote. Airplanes do have a maximum airspeed (airspeed being the speed relative to the air, as opposed to ground speed). Go too far above VMO and the plane starts buffeting (a kind of vibration). Go a bit further and you may lose control completely due to high-speed stall, Mach tuck, control reversal, etc.
Airbuses do indeed have autothrottles, but engines react rather slowly, so while the flight control systems do reduce thrust, they pull the nose up as well. They did in one recent incident at my current company, and it had already happened before at several other companies. In one case, there was another plane 1,000 feet above, and the pilots managed to stop the climb after 700 feet.
There are many possible reasons for a sudden increase in airspeed. Most of the time, it's due to a change in wind: if a 100-knot tailwind suddenly drops to 70 knots, you've just gained 30 knots of airspeed. But the true airspeed doesn't even have to change. In a recent incident at my company, the outside temperature changed by more than 10 degrees in a very short time, which pushed the Mach number above MMO (because the speed of sound changes with temperature). The autopilot immediately disconnected and the flight control computers started a rather violent climb, which the pilots could only recover from after climbing more than 500 feet.
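The temperature effect is just the speed of sound shifting under you. A rough sketch with ISA-ish numbers (illustrative only, not any aircraft's actual logic):

    #include <stdio.h>
    #include <math.h>

    /* Speed of sound a = sqrt(gamma * R * T): depends only on temperature. */
    static double speed_of_sound_ms(double kelvin) {
        return sqrt(1.4 * 287.05 * kelvin);
    }

    int main(void) {
        double tas  = 250.0;              /* true airspeed, m/s: unchanged */
        double warm = 228.0, cold = 218.0; /* a sudden 10 K drop at altitude */

        printf("Mach at %.0f K: %.3f\n", warm, tas / speed_of_sound_ms(warm));
        printf("Mach at %.0f K: %.3f\n", cold, tas / speed_of_sound_ms(cold));
        /* Same true airspeed, roughly 0.02 higher Mach: enough to cross MMO
           if the aircraft was already cruising close to it. */
        return 0;
    }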
So, you say you're a rocket surgeon? What kind of operations have you performed on them?
Re:What about Google driverless car? (Score:4, Interesting)
Well, as you don't know anything at all about flying, let alone commercial pilots, let me inform you.

None of what you dream up is happening up there, except with the bad pilots. I know several commercial pilots; they are busy up there with checklists and comms, and do not have time to chat up the stewardesses. There was also a little thing that happened in September of 2001; they keep the door locked for the whole flight now.

But you go ahead with your fantasy. It's just like all the fantasy reported on Fox News.
Re:What about Google driverless car? (Score:5, Insightful)
Okay, a few facts: the A330 is fly-by-wire. This means everything between the pilot and the control surfaces must go through the avionics; if the avionics totally fail, the plane is by definition little more than a glorified missile.
That said, it seems the backups and pilot responded exactly as they should have in this case. The plane pitched, enough to throw the passengers around and cause injuries, pilot disengaged autopilot and corrected, declared an emergency and safely landed at the nearest big enough airport.
Please tell me how he did anything wrong. Please tell me how the rest of the computer systems failed, or caused an actual crash. Nope, neither: the plane was left in one piece on the ground.
The only thing I would ask is: why did it take Airbus two years to find and fix that major bug?
Re: (Score:3)
If there are serious failures, an Airbus pretty much starts to behave like the controls of a Boeing.
That's reassuring - "if the shit hits the fan, this aircraft will act like a Boeing" - so why not fly an actual Boeing in the first place? What actually makes an Airbus a better option outside of extreme conditions?
Re:What about Google driverless car? (Score:5, Insightful)
Even on the road today this is an issue. Doesn't matter how good of a driver you are. If one other idiot on the road is driving crazy, you could get killed no matter how you drive. Weakest link and all that...
Re:What about Google driverless car? (Score:5, Funny)
Re:What about Google driverless car? (Score:5, Insightful)
A good driver, by definition, mitigates the bad driver by taking appropriate actions to reduce the risk. It is not how you drive, it's how you manage the drivers around you, that makes you a good driver.
Re:What about Google driverless car? (Score:5, Insightful)
So how will you reduce the risk of someone next to you suddenly deciding to switch lanes without checking that you're there? How do you reduce the risk of someone deciding he just has to pass the car in front of him even when there's oncoming traffic? How do you reduce the risk of someone deciding to test his engine and losing control?
It doesn't matter how good a driver you are; if someone else screws up bad enough, you're dead.
Re:What about Google driverless car? (Score:4, Interesting)
So how will you reduce the risk of someone next to you suddenly deciding to switch lanes without checking that you're there? How do you reduce the risk of someone deciding he just has to pass the car in front of him even when there's oncoming traffic?
Um...by not riding beside somebody, especially in their blind spot?
I mean, is this a serious question? Have you never learned defensive driving?
Re:What about Google driverless car? (Score:5, Insightful)
So how will you reduce the risk of someone next to you suddenly deciding to switch lanes without checking that you're there? ...
You reduce that risk by not staying next to another driver any longer than you have to.
You watch the drivers around you and anticipate what stupid things they might do that would endanger you. Then you decide what actions you need to take to minimize that risk. Then you take those actions. That's what defensive driving is all about.
It's not easy, and it can't really be done while jabbering on the phone. And it's not very satisfying to the ego to drop behind another driver who is a little more aggressive than you, but it can pay off in a reduction of accidents caused by others.
Yes, I'm sure one can point out situations where there is little to no opportunity to avoid the actions of others, but in far more situations there is plenty of opportunity to minimize the risks due to other drivers' stupidity.
Re:What about Google driverless car? (Score:5, Insightful)
Re: (Score:3)
Re:What about Google driverless car? (Score:5, Funny)
Even on the road today this is an issue. Doesn't matter how good of a driver you are. If one other idiot on the road is driving crazy, you could get killed no matter how you drive. Weakest link and all that...
Everyone who drives faster than me is a maniac. Everyone who drives slower is an idiot.
Re:What about Google driverless car? (Score:5, Insightful)
done much driving lately?
even if MS wrote the software, it'd definitely be well in the top 2 percentile as far as driving skills go.
see how input data validation works in your brain when you're tired, drunk or just distracted?
Comment removed (Score:5, Insightful)
Re:What about Google driverless car? (Score:4, Interesting)
I'd even take it further: I'd hand over my driving to an automatic car in a second if it meant all the other morons would have to do the same.
For those addicted to driving: yes, I'd love to force you to take your self-driving to the circuit, where it belongs (once the driverless cars have proven to have less than half the accident rate of humans).
Re: (Score:3)
This! I live in a bicycle-friendly city and generally use my bike to commute. But even here you see people on their phones in cars on a regular basis (which is actually illegal here now), plus all kinds of crazy driving when people are impatient or not thinking straight. When you're cycling, it's much easier for one of those incidents to turn into a serious injury, since you have no protection at all from other people's vehicles.
But people don't have the mindset that they're operating a dangerous machine t
Re: (Score:3, Funny)
I don't think anyone's out to force you to use a driverless car.
But if you do acquire one, don't be surprised if one morning you get in and find the interior is all spaced-out white with wispy grey lines that lack contrast.
Re: (Score:3)
Good point. Look at the summary: " operators can be confident that the same type of accident will not reoccur".
Only someone with no fucking clue of how software works could ever write something that stupid...

I wonder if Google will be found liable when someone dies because of one of their cars. After all, if they make the fallacious promise of "bug free", they should be held responsible for bugs. And without that promise, I fail to see how anyone will give them a license to mass-produce this thing.
Re:What about Google driverless car? (Score:5, Insightful)
Re:What about Google driverless car? (Score:5, Interesting)
There were people against airbags, too, because they killed some people who otherwise wouldn't have died. You work on fixing those things. But whether the system as a whole is worthwhile is judged on whether it saves more than it kills.
Re:What about Google driverless car? (Score:4, Interesting)
Re:What about Google driverless car? (Score:5, Insightful)
Re: (Score:3)
I feel "eat your own dog food" should apply.
Re:What about Google driverless car? (Score:5, Insightful)
This type of story isn't new, and I'd imagine it's pretty common. When you know there are unpatched corner-case bugs with only a one-in-10,000,000 chance of being triggered on a given flight, do you still want to risk relying on your own software for your life or death? Nah. What those engineers weren't doing was listening to the Boeing engineers' list of bugs; they'd be doing the exact same thing with whatever new system came hot off the assembly line.

We have computer-controlled trains in my city, and the rumor mill kept churning away that the engineers would never touch them with a ten-foot pole, but to my knowledge there's never been a serious derailment or an automation-related fatality (lots of jumpers, sadly, but I guess that comes with the territory).
Re: (Score:3)
Sure, the same way any bug is fixed forever. But software is still loaded with bugs. Even a completely bug-free system will accumulate bugs over time as the code is maintained and/or features are added.
Re:What about Google driverless car? (Score:4, Insightful)
Right, because bug fixes never introduce bugs. Code just keeps getting better and better and better.
we already fixed it. its called 'trains'. (Score:5, Insightful)
the idea that a bunch of automatically piloted vehicles is somehow a better solution to city transport than mass transit boggles my mind.
real people do not have money to maintain their cars properly. things are going to break. there are not going to be 'system administrators' to fix all the glitches that come up when cars start breaking down after a few years.
there will be problems. do i know which problems? no, but i know the main problem.
arrogance amongst revolutionaries. it is historically a pattern of the human species. declaring that nothing could go wrong is usually a precursor to a lot of things going wrong. not because the situation was unpredictable, but because human beings in an arrogant mindset tend to make a lot of mistakes, be reckless, and try to cover their asses when things go wrong.

but successful engineering is the antithesis of arrogance. nobody worth his salt is going to say 'what could go wrong?' they are going to have a list of 500 things that could go wrong, and all the ways they have tried to counteract those things happening.
Re:we already fixed it. its called 'trains'. (Score:4, Insightful)
Trains are great when you have lots of people going to and from the same place. The trouble is, in a large conurbation, while there are a lot of people going to and from the city center, there are also many people who want to travel between two points further out that are fairly close together but on different radials. Doing this by public transport typically means either catching a slow bus (slow because, to get enough passengers to be viable, it has to stop frequently and take the slow roads through places rather than the fast roads around them) or taking a very roundabout train route. If you enjoy exploring the countryside, it gets even worse, with many places effectively cut off from you completely.
It's possible to live without a car, but it means planning your life around public transport (including choosing where you live so as to have a fast public transport link to where you work) and putting up with the fact that any journeys other than your regular commute (which you chose your place of living based on) are going to be very slow, especially in the evenings and on Sundays, when there are fewer buses and trains.
IMO the only way car ownership and use will significantly reduce is if using a car simply becomes unaffordable for the vast majority of people.
Re: (Score:3)
Re: (Score:3)
Some roads near my house have deliberate pinch points which rely on drivers playing chicken with each other. It's called traffic calming. Several bike riders have died in collisions where a car driver absolutely refused to give way to a bike and as a result caused a crash. The game of chicken depends on certain assumed characteristics of the drivers of both vehicles: heavy trucks will barrel through and expect a small car to give way, for example.
Now, my wife's car has automatic headlights and this
Re:What about Google driverless car? (Score:4, Insightful)
Are you seriously accusing Google of being malicious in developing a driver-less car? Do they have a stake in keeping the population numbers down or something?
While I agree that software will never be bug-free, it will quite possibly save many more lives, because human drivers are terrible. They are prone to panicking under pressure, misjudging distances, handling a car less efficiently than possible, taking too many risks (swerving in and out of traffic, following too closely), driving under the influence of drugs and alcohol, and getting distracted by phones and screaming kids, among many other things that well-written and tested software could do better.
Do you also want pilots to fly planes manually at all times and remove auto-pilot since software can never be perfect?
Re:What about Google driverless car? (Score:5, Insightful)
Re: (Score:3)
I trust Google's engineers not to get me killed more than I trust the vast majority of drivers, especially knowing how little it takes to get a drivers license. So far, the only incident involving one of Google's self-driving cars is when a human was in control (i.e., it was sheer coincidence that it was one of those cars); statistically speaking, they're the safest vehicles currently in existence. At least software can be fixed; try as we might, we haven't yet fixed stupid. I'm trying to look up how many a
Re:What about Google driverless car? (Score:5, Insightful)
It's bad idea for a specific reason.
There are two "brains" that can operate the car. Google can make a pretty decent brain, but it is not going to come remotely close (in any way) to the human brain in terms of its ability to perceive the environment (sensors), make sense of it (pattern recognition), and put it all into context (experience, extrapolation).
Google will excel in reaction times and advanced planning. Through Google it will be possible to mitigate traffic by solving a very human problem, which is cooperation towards a common goal. Google could react faster, and with less overcompensation, to a car drifting into its lane.
Where Google will fall far short is recognizing the road rage in the driver next to it (beating his hands on the steering wheel and screaming), the lack of concentration (woman putting her lipstick on), etc. Putting those things in context and assigning risk to drivers next to you is not something Google will be able to do from its sensors. However, even the average driver is getting cues in so many ways about what is really going on around them.
The reason why it is a bad idea is that while Google is operating, the human brain is off. It's not instant-on, either. Driving demands a constant level of concentration, even when it seems like you are doing it "subconsciously". From start to finish, the average driver is pretty aware of their surroundings and is processing an impressive amount of data. A human brain will beat Google every time on those terms.
When Google fails, or "judges" the environment poorly, how quickly can the human brain come back online, evaluate the current environment, take control, and make the required adjustments?
Until the Google brain is able to fully replace a human brain, it is not a good idea to involve the two in a hybrid system. The lag between the two systems taking control from one another is just too great.
Self-parking is fine, and limited operations involving high efficiency traffic lanes where human control is not permitted will be fine. As long as the transition into those operations is in a time frame a human can deal with.
For example: the human brain pulls the car up to the high-efficiency traffic lane and "tags in" the Google brain to insert the car into the traffic. On exit, the Google brain notifies the driver and validates proper control and awareness before leaving the traffic and turning control over to the human driver. Failure means Google pulls the car into the emergency lane and brings it to a full stop.
Any other kind of operations just seems fundamentally unwise to me because of the hybrid nature and inherent limitations of Google's AI, advanced as it may be for now.
My threshold for letting a computer operate a car no differently than a human, is the computer can meet or exceed the human's ability in every respect. That is not true right now, and will not be true for decades.
You may trust a Google car more than the average driver, but that is only really true if the Google car also has no driver.
Re: (Score:3)
Re:What about Google driverless car? (Score:5, Interesting)
It's not about choosing one or the other, but hybrid systems operating at the same time.
If you are going to compare quality, the human will win every time. We can give anecdotal evidence about how bad drivers are, but statistics show that driving is not so dangerous that we need to consider stopping it altogether. Really think about it for a second: during your average day, how many really bad drivers did you personally encounter who created a dangerous situation resulting in an accident? Pretty few, huh? I would expect so; otherwise insurance would cost thousands and thousands per month, instead of per year.
Humans are not the inferior solution overall right now. Not by far.
It is also not because Google is not perfect. Specifically, it is because of the time required, and the complexity, of shifting control from Google to the driver. Once such a system becomes normal to a driver, their attention is not going to be on the road but on their interaction with other devices. You cannot reasonably expect a person to be in complete awareness, hands at 10 and 2, ready to take control in a split second. You would get too bored without immediate feedback; your mind would drift. This would be completely normal, too.
This is not to say that the system itself might not be useful, but it would have to be under very controlled conditions excluding human drivers altogether. It could work, provided the shifting of control was at a controlled rate in relatively controlled conditions. Give the human being time to adapt and obtain situational awareness.
As cool as this sounds, it is just not ready to fully replace a human, unless it could perform at a human level or better. The dream of a car that can drive itself completely under all conditions is still some ways away.
The idea of changing carpool lanes over to high efficiency lanes where human control is not allowed seems like a more pragmatic approach that decreases the complexity and uncertainty that the Google system has to deal with. It has very high value as well since it can optimize traffic patterns far better than a human simply because it can cooperate with a much larger number of cars over greater distances. A human could never hope to do that with our inherent limitations.
That system could realize some serious fuel savings and increase productivity by essentially mimicking an airplane in auto pilot mode. The human is really just there to get the system to the point where it can safely transition in and out of a computer controlled lane. That will be extremely advantageous to overall traffic.
Re:What about Google driverless car? (Score:5, Interesting)
It's so interesting to see people's reaction to the whole driver-less car thing. It's incredible to see the kind of ethical thought-experiment that must necessarily go through everyone's mind when they come to this conclusion: How many lives must be saved before I will tolerate someone being brutally slain by a malfunctioning computer?
Every day, children are run down by drivers who are not paying attention, tired, drunk, or just plain don't have time to react. Since a driver-less car is incapable of being drunk, tired, or distracted, then it's a safe bet that they'll be much better at avoiding those accidents that can be avoided. But the reality is that the latter scenario (no time to react) would still lead to the deaths of many children (and others!).
At what point does it become "worth it"? When the driver-less car causes 1/10th as many fatalities? 1/100th? 1/1,000th? How many human deaths must be prevented by letting computers drive cars before we're willing to accept 1 single death by those same computers?
It's a real-life example of the "Trolley Problem"
http://en.wikipedia.org/wiki/Trolley_problem [wikipedia.org]
Re: (Score:3)
Also, who gets the blame?
The way it is now, if I cause the accident, then I am responsible for it, because I either caused the accident myself (being drunk/asleep/distracted is no excuse) or my car failed in such a way that it caused the accident (I tried to stop, the brakes failed and I hit the car in front of mine). It was also my responsibility to maintain the car, so I am responsible even if it was the car that failed.
Now, if the driverless car hits another car and there was no mechanical failure that I could have prevented, who gets the blame then?
Comment removed (Score:5, Interesting)
Re: (Score:3)
A brilliant transplant surgeon has five patients, each in need of a different organ, each of whom will die without that organ. Unfortunately, there are no organs available to perform any of these five transplant operations. A healthy young traveler, just passing through the city the doctor works in, comes in for a routine checkup. In the course of doing the checkup, the doctor discovers that his organs are compatible with all five of his dying patients. Suppose further that if the young man were to disappear, no one would suspect the doctor.
An engineer would realise that since the five patients each need a different organ, one of them can be cut up to save the other four, sidestepping the moral dilemma, since any one of them would have died anyway. Philosophers are always looking for the perfect question, but an engineer knows there is always a better solution. All an engineer needs to know is how his car killed someone the last time, and he can fix it.
Re:What about Google driverless car? (Score:5, Insightful)
Have you SEEN the way meatware AI operates a car? At least a Google driverless car would use its turn signal before suddenly jerking into a turn and trying to kill me on a bike with a right hook.
Speaking of faulty sensors, that's pretty much what goes down when meatware AI has a certain alcohol content. Or uses a cellphone. Or eats fast food. Or puts on makeup. Or deals with newer meatware instances in the back seat. Or looks down to adjust the radio. Or falls asleep. Or is distracted in thought. Or....
Re:What about Google driverless car? (Score:5, Informative)
www.un.org/ar/roadsafety/pdf/roadsafetyreport.pdf
Re:What about Google driverless car? (Score:4, Interesting)
A million people die annually because of human drivers. A driverless car killing half that many would still be an improvement.
When a human driver kills another human being, the courts can punish that person and allow for the victim's family to claim compensation.
When a driverless car kills a human being... ?
Maybe we could copy the system we have for vaccines [wikipedia.org]
Re:What about Google driverless car? (Score:4, Interesting)
You hold the people who made the car responsible? Then they had better analyze the hell out of every single tiny problem that crops up, and make the details and fixes public. This is why all this driverless software must be open source. Any 'benefits' of making it proprietary would come at the cost of everyone's safety.
And besides, it doesn't really matter how someone is punished for wrongdoing. You judge a system on whether it's an improvement, not on how easy it makes retribution. Otherwise you could end up choosing a system that causes a lot of problems simply because it's easy to blame someone for them.
Re: (Score:3)
It's an issue of overall efficiency and safety for everyone. In the long run it will save lives, improve quality of life, be more efficient, etc. The downside of having to blame a "dumb" machine for a lost life or an accident is possibly more palatable than the alternative.
Re:Outsourcing is bad. (Score:5, Informative)
What a load of uninformed bullshit - Airbus has several levels of computer control, called laws, one of which is Direct Law, which passes pilot inputs directly to the control surfaces. And if that isn't enough, there are mechanical backup controls on the flight deck, so even with a completely dead computer the aircraft is still flyable.
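For anyone unfamiliar with the law structure described above, here is a toy Python sketch of the graceful-degradation idea. The trigger conditions are invented purely for illustration; Airbus's real reversion logic is vastly more involved.

    # Toy sketch of degradation through flight control laws.
    # The three-level structure (normal -> alternate -> direct) follows the
    # description above; the trigger conditions are invented, NOT Airbus's
    # actual reversion logic.
    def select_control_law(computers_ok: int, reliable_sensors: int) -> str:
        if computers_ok == 0:
            return "mechanical backup"  # aircraft remains flyable
        if reliable_sensors >= 3:
            return "normal law"         # full envelope protection
        if reliable_sensors >= 2:
            return "alternate law"      # reduced protections
        return "direct law"             # stick maps straight to the surfaces

    print(select_control_law(computers_ok=2, reliable_sensors=1))  # direct law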
You, sir, are talking complete shit, but that seems to be normal when someone wants to put Boeing on a pedestal over Airbus.
Let's go over some of your "mistakes"...
The 787 isn't Boeing's first FBW aircraft; they have had one flying since the mid-1990s with the 777. The 787's system is an evolution of the 777's.
AF447 didn't crash because of a computer problem; it crashed because of poor crew coordination in the cockpit - three pilots in that cockpit and not one was interested in what the others were doing. They didn't run basic checklists, they ignored other information, and the pilot flying did completely the wrong thing. The situation was entirely survivable if they had carried out the correct procedures, except they didn't. The crash wasn't caused by the computer; it was caused by the pilot taking a stable aircraft and stalling it badly when nothing about the computer error forced him to do that.
Re: (Score:3)
This sounds very much like the failure of the Pitot tubes (used to measure airspeed) on the A330 that crashed in the Atlantic on 1 June 2009.
Actually, this fix would presumably have saved AF447, as the crash was caused by the pilot holding the nose up in a stall - probably because the stall warning apparently turned off when he pulled the stick back and came back on when he pushed it forward, so the correct action to get out of the stall seemed to be the very thing causing it.
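A crude Python sketch of that trap, assuming (as was reported for the A330) that angle-of-attack data is treated as invalid below an airspeed floor of about 60 knots; the stall threshold and everything else here is simplified for illustration:

    # If angle-of-attack readings are discarded as invalid at very low
    # indicated airspeed, a deep stall can silence the very warning that
    # should announce it. The 60 kt floor is the reported A330 figure;
    # the stall threshold is illustrative.
    STALL_AOA_DEG = 10.0
    AOA_VALID_FLOOR_KT = 60.0

    def stall_warning(airspeed_kt: float, aoa_deg: float) -> bool:
        if airspeed_kt < AOA_VALID_FLOOR_KT:
            return False  # data deemed invalid -> warning inhibited
        return aoa_deg > STALL_AOA_DEG

    # Deep stall, nose held up: indicated airspeed collapses, warning silent.
    print(stall_warning(airspeed_kt=45, aoa_deg=40))  # False
    # Pilot pushes forward, airspeed rises, and the warning sounds again.
    print(stall_warning(airspeed_kt=80, aoa_deg=25))  # True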
Re:Boeing vs Airbus (Score:5, Informative)
Some statistics would be interesting here: how many Boeings have run into serious trouble, and how many Airbuses?
Besides the GP's point about Airbus pilots being unable to override the computer being complete and utter bollocks (Airbuses still have an analogue electronic actuator control in them), there have been a few near misses that would have resulted in a crash if it had not been possible to take manual control, such as the JetBlue landing at LAX in '05.
On the other hand, there have been incidents with Boeing aircraft that automated systems are believed to have been able to prevent, such as AA Flight 965 (Colombia, 1995), where the pilot would have been able to climb away safely if the airbrake had been retracted automatically.
Here is a good post on the subject. [askcaptainlim.com] According to the ATSB, who conducted the investigation, there have been only 3 such incidents in 128 million hours of A330 operation as of 2008. That is a damn good failure rate, wouldn't you say? Pilot error is the cause of roughly 48% of all accidents, Airbus or Boeing alike. Modern aircraft are getting safer all the time; they see more mechanics and engineers in a week than your car will see in its entire lifetime. Everything is checked and double-checked, and anything suspicious gets replaced. I never think I'm in danger stepping onto an Airbus or Boeing aircraft.
The whole Airbus vs. Boeing argument is a dick-pulling contest between biased pilots. It's like an Xbox/PS3 fanboy war: utterly senseless to third-party observers (and bronzed-fingered PC gamers). Among the 25 worst airlines you have a 1 in 850,000 chance of dying, and I don't fly any of those airlines (it's 1 in 9.2 million for the 25 best), hence my practice of congratulating myself at the check-in counter for having survived the most dangerous part of air travel: the drive to the airport. Compared to our road toll, our air toll is minuscule.
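Taking the figures quoted above at face value, the arithmetic is a quick sanity check (nothing more):

    # Back-of-the-envelope check of the quoted figures.
    incidents = 3
    fleet_hours = 128e6               # A330 hours as of 2008, per the ATSB
    print(incidents / fleet_hours)    # ~2.3e-08 incidents per flight hour

    # Quoted odds of dying, per flight:
    print(1 / 9.2e6)                  # best 25 airlines:  ~1.1e-07
    print(1 / 850e3)                  # worst 25 airlines: ~1.2e-06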
Re: (Score:3)
On Airbus vehicles, if the avionics computers crash, the airplane crashes. There's exactly ZERO way to pilot the plane manually in such a failure.
Completely untrue. When the avionics 'crash', the flight system progresses through 'alternate' to 'direct' law, where the pilot has direct control of the plane.
Moreover, the avionics system can and does overrule pilot input. So if you get sensor malfunctions like this, even if the pilot is trying desperately to save the plane, the computer can still crash you.
Have a look at the statistics [airsafe.com] (pages maintained by a pro-Boeing pilot, by the way) and you'll see that (i) for all your hysterical fear of Airbus aircraft, the fly-by-wire Airbus models (i.e. all except the A300 and A310) are just as safe as their Boeing counterparts, and (ii) there are no examples of an Airbus crash caused by the computer overriding the will of the pilot.