To support ELAC's point.
In certification terms, we are simply not allowed to retain any systems "in the loop" - whether software or hardware - once they are reduced to "taking their best guess", which is what a computer system (just like a pilot) would be reduced to doing when faced with multiple contradictory data sources.
It might seem unfair, but the regulations allow us (the designers) to assume god-like omnipotence from the flight crew when required. We have to assume that systems can fail - but that pilots will unerringly follow the appropriate procedures and, when required to pull a rabbit from a hat, will invoke "airmanship" and all will be well.
A trifle facetious, but given a situation where a pilot might make the right choice 99% of the time and a software system 99.9% of the time, if the consequences of error are catastrophic I am more-or-less forced to dump the problem in the pilot's lap, because while it's acceptable for the pilot error rate to be 1%, a catastrophic software failure rate of 0.1% would never, ever, be certifiable.
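To put rough numbers on that asymmetry - a sketch only, using the post's illustrative 1% and 0.1% figures, and assuming the usual "extremely improbable" order of magnitude (around 1e-9 per flight hour) for catastrophic failure conditions in transport-category guidance; the threshold name and function are mine, not anything official:

```python
# Illustrative numbers from the discussion above.
PILOT_ERROR_RATE = 0.01        # pilot gets it wrong ~1% of the time
SOFTWARE_ERROR_RATE = 0.001    # software gets it wrong ~0.1% of the time

# Assumed: catastrophic failure conditions must be "extremely improbable",
# commonly taken as on the order of 1e-9 per flight hour.
CATASTROPHIC_THRESHOLD = 1e-9

def certifiable(failure_rate: float,
                threshold: float = CATASTROPHIC_THRESHOLD) -> bool:
    """A system whose catastrophic failure rate exceeds the threshold
    cannot be certified, however much better it is than the human."""
    return failure_rate <= threshold

print(certifiable(SOFTWARE_ERROR_RATE))  # False: 0.1% is six orders of magnitude too high
```

The point being that the pilot isn't held to the 1e-9 standard at all - only the system is - so handing the problem to the pilot is the only move left.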
Add to this that it's essentially impossible for the software to cater for every combination of failures, and it becomes essential for the s/w to at some point "give up" and hope that the pilot can get himself out of trouble.
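Just to show how quickly "every combination" runs away from you - the sensor and state counts here are entirely made up for illustration:

```python
# Toy illustration of the combinatorial explosion: even a modest set of
# inputs, each with only a handful of possible states, is untestable
# exhaustively. Both numbers below are hypothetical.
n_sensors = 30   # number of independent input sources
n_states = 4     # e.g. valid / stale / out-of-range / disagreeing

combinations = n_states ** n_sensors
print(f"{combinations:.2e} input combinations")  # prints "1.15e+18 input combinations"
```

At a million test cases per second that's tens of thousands of years of testing, which is why the software has to be designed to recognise "I'm out of my depth" rather than to enumerate every case.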
It's as if the software systems were a reliable and skilled trainee, but somewhat wet behind the ears when it comes to thinking outside the box. At least it's pretty good at realising when it's outside its "skill level" and at handing back control.