21st Apr 2019, 14:06
#4194
Join Date: Jun 2009
Location: Oxford, England
Posts: 297
Originally Posted by MemberBerry
Good point. I also read this thread from the beginning and, to add another data point to the tendency you noticed, I'm a software engineer and I tend to blame Boeing more than the pilots. Boeing's attitude after the Lion Air accident contributed to that. If they didn't try to downplay the gravity of experiencing an incorrect MCAS activation, I would have probably been more sympathetic towards them.

Just like there seems to be a deficit of pilots in the aviation world, I think that generally there is a deficit of good software developers, and it's getting worse. I think the quality of software took a nosedive during the last decade. Software from a decade ago was way more polished than what I see today, and this is very frustrating.

Sure, just as the safety of air travel is getting better and better, a lot of lessons have been learned from the past in the software industry, and some types of bugs and quality issues are becoming less and less frequent. But there seems to be far less attention to detail, and I find it unbelievable that large software companies repeatedly release products with significant bugs that are obvious to anyone after only a few minutes of using the product.

And I don't think the deficit of good software developers is the only cause. I see a variety of other factors contributing to the decline in software quality, for example a tendency to spend less on quality assurance and to rely more and more on end users to find and report quality issues. I had hoped this trend would mostly affect regular consumer products rather than safety-critical software, but unfortunately that doesn't seem to be the case; some recent examples are Tesla's self-driving car software, and possibly MCAS.

Anyway, back to the MCAS topic: I watched Mentour's recent video about being unable to trim manually at high speeds when the aircraft is severely out of trim. One thing that surprised me is that the simulator, which Mentour described as "this is a level D FFS. That’s as real as it gets", is not able to replicate a stabilizer runaway similar to an incorrect MCAS activation: the simulated runaway stabilizer failure cannot bring the trim to full AND.

I guess the reason the simulated failure cannot apply more AND trim is that it simulates something similar to stuck yoke trim switches. In such a situation, after reaching about 4 units with the flaps retracted, the trim limit switches would activate, preventing the trim from going any lower than 4 units. I think that's why, in the video, they have to trim manually the wrong way: to set up a worse mistrim, similar to that experienced by the Lion Air and Ethiopian crews, because the simulator doesn't seem to be able to do that on its own.
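The limit-switch behaviour described above can be sketched in a few lines. This is purely illustrative: the 4-unit flaps-up figure comes from the post, but the function name, the argument names and the clamping behaviour are my own assumptions, not 737 systems documentation.

```python
# Hypothetical sketch of yoke-trim-switch limit logic, based on the post's
# description: with flaps retracted, the limit switches stop yoke-switch
# trim from driving below ~4 units. All names and values are illustrative.

FLAPS_UP_AND_LIMIT_UNITS = 4.0  # flaps-up nose-down limit quoted in the post

def apply_yoke_trim(current_units, command_units, flaps_retracted):
    """Apply a yoke-switch trim command, respecting the limit switches.

    Lower units = more aircraft-nose-down (AND). A nose-down command
    (negative) is cut out once the flaps-up limit is reached; nose-up
    commands are unaffected.
    """
    target = current_units + command_units
    if flaps_retracted and command_units < 0:
        # The limit switch stops further AND movement at the limit; if the
        # trim is somehow already below it, the switch simply does nothing.
        floor = min(current_units, FLAPS_UP_AND_LIMIT_UNITS)
        target = max(target, floor)
    return target

# Yoke trim cannot drive below 4 units with flaps up...
assert apply_yoke_trim(5.0, -3.0, flaps_retracted=True) == 4.0
# ...which would explain why a worse mistrim had to be set up manually.
```

Under this (assumed) model, only a source that bypasses the yoke-switch limit could produce the severe mistrim seen in the accidents, which is consistent with the simulator's runaway failure stopping at about 4 units.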

If that's the case, I'm even more annoyed by Boeing's initial response that the pilots should simply have applied the runaway stabilizer procedure. If the simulators cannot replicate a mistrim as severe as one caused by a malfunctioning MCAS, then the existing simulator training for a stabilizer runaway is clearly not adequate for dealing with an MCAS-induced trim runaway.
Haven’t posted here for a while and am semi-retired now, but I'm also an electronics / software engineer, three decades plus, including avionics systems exposure. Have read this thread and am amazed that a system with a single point of failure could ever have passed certification, either internally or by the regulator. Although the circumstances differ, I'm reminded of the AF447 episode, where the system went AWOL and dumped the a/c, in an unknown state and with misleading signals, onto an overloaded crew who were completely disoriented and really did not stand a chance. Seems to me yet another example of the gap in the man / machine interface. Ideally, such systems should be designed to provide an unambiguous view of the machine state at all times, but we seem far from that. It should be a basic design requirement that no crew is ever expected to “guess” the state of the system.
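To make the single-point-of-failure complaint concrete, here is a minimal sketch of the kind of cross-check a redundant design would perform. This is emphatically not Boeing's actual logic; the function name, the disagree threshold and the fail-safe behaviour are all invented for illustration. The idea: with two AoA sources, compare them, and inhibit the automation rather than act on a value it cannot corroborate.

```python
# Hypothetical two-sensor cross-check ("disagree monitor") sketch.
# Not the real MCAS logic; threshold and names are invented for illustration.

AOA_DISAGREE_THRESHOLD_DEG = 5.5  # assumed tolerance between the two vanes

def aoa_vote(left_aoa_deg, right_aoa_deg):
    """Return (value, valid). The automation may only act when valid is True.

    With only two sources we cannot tell which one failed, so on
    disagreement the safe action is to declare the data invalid and let
    the crew fly, rather than trim on a single suspect input.
    """
    if abs(left_aoa_deg - right_aoa_deg) > AOA_DISAGREE_THRESHOLD_DEG:
        return (None, False)  # disagree: inhibit automatic trim
    return ((left_aoa_deg + right_aoa_deg) / 2.0, True)  # agree: use the mean

value, valid = aoa_vote(4.0, 4.5)   # sensors agree
assert valid and abs(value - 4.25) < 1e-9

value, valid = aoa_vote(4.0, 74.5)  # one vane stuck hard over
assert not valid and value is None  # automation must stand down
```

Note the design choice: two sources can detect a disagreement but cannot arbitrate it, which is why triplex voting is common where the automation must keep operating through a failure; a duplex system like this can only fail safe.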

What is clear is a gross failure of systems engineering: design, attention to detail and oversight; the big-picture view of how the overall system works and how the individual parts interact and communicate. I don’t think you can blame the software engineers or the software for any of this, as faults against the spec at that level would have been found during rigorous testing; but if the fundamental design is wrong, or full of uncovered corner cases, no software can compensate for that. The problem is that modern systems are now so complex that it may in fact be nigh on impossible to test every possible situation or component failure. However, that is no excuse for not trying.

Reminded of another company: Hewlett Packard, which built a reputation over decades for the most innovative and highest-quality test equipment in the business. They spent a fortune on R&D and were widely diversified into science, healthcare and more. Then bean counters, “shareholder value”, gross mismanagement and greed turned a hard-won reputation and pursuit of excellence into a laughing stock. Fortunately the test gear division was spun off, but it is now a pale shadow of its former self, and I'm not sure how much R&D they do these days. Really, does anyone care anymore, or is it already too late?...
syseng68k