Originally Posted by ORAC
The point here is that they are not trying to build machines that “think” like human beings and are self-aware.
One of the main advantages of self-learning machines, whether in designing components, playing games, or folding proteins, is that they “think” differently and avoid the blind spots in human cognition.
I'm curious how well AI copes when faced with abnormal circumstances. For example, there have been occasions when relatively minor sensor or data-processing faults have confused human pilots so badly that an essentially serviceable airliner has crashed (the iced-up pitot probes behind Air France 447 spring to mind). Would an AI do any better when presented with illogical or contradictory data? I imagine such faults are relatively common, and in most cases a human pilot is able to use other clues to troubleshoot the problem, clues that an AI may be oblivious to. A toy sketch of what I mean by "other clues" is below.
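
To make that concrete, here is a purely illustrative sketch (the function, thresholds and numbers are all made up, and it is nothing like real flight-control software) of the classical approach: vote among redundant sensors, then sanity-check the vote against an independent clue such as GPS groundspeed.

from statistics import median

def cross_check_airspeed(pitot_readings, gps_groundspeed, tolerance_kts=15.0):
    """Toy consistency check over redundant airspeed sources (hypothetical).

    pitot_readings: indicated airspeeds (knots) from redundant probes.
    gps_groundspeed: an independent "other clue" (knots); a real system would
    also have to account for wind, altitude, pitch and power setting.
    Returns (best_guess, suspect_sensor_indices).
    """
    best_guess = median(pitot_readings)

    # Flag any probe that disagrees with the median vote by more than the tolerance.
    suspects = [i for i, v in enumerate(pitot_readings)
                if abs(v - best_guess) > tolerance_kts]

    # If the vote itself disagrees wildly with the independent clue, declare the
    # whole airspeed estimate unreliable rather than trusting the majority.
    if abs(best_guess - gps_groundspeed) > 4 * tolerance_kts:
        return None, list(range(len(pitot_readings)))

    return best_guess, suspects

# Two iced-up probes agree with each other but not with reality.
print(cross_check_airspeed([120.0, 118.0, 260.0], gps_groundspeed=255.0))

In that example the two faulty probes outvote the good one, so the median is wrong, and only the independent clue reveals that the whole estimate is suspect. The question is whether a self-learning system would have an equivalent of those other clues, or whether, never having seen that failure mode in training, it would simply trust the majority.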