Originally Posted by eglnyt
The expectation would seem to have been that the system would do exactly that: identify the errant data, isolate it, and move on. They seem to have struggled to identify the errant data, and possibly then struggled to remove it; exactly why isn't really covered in the prelim report, but hopefully that will come later.
If you can't isolate that data, then the system is going to fail every time. If you take all of the data out, then the system is equally ineffective.
A solution to this sort of problem was proposed as long ago as 1978:

RFC 748: TELNET RANDOMLY-LOSE Option
...which addressed the following:
Several hosts appear to provide random lossage, such as system crashes, lost data, incorrectly functioning programs, etc., as part of their services. These services are often undocumented and are in general quite confusing to the novice user.
A general means is needed to allow the user to disable these features.
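For anyone curious how that "disable" mechanism was supposed to work, here's a minimal sketch (the Python is mine, not the RFC's) of standard Telnet option negotiation per RFC 854, where a client sends IAC DONT <option> to refuse a feature the server offers. It also demonstrates the punchline: RFC 748 assigns RANDOMLY-LOSE the option code 256, which doesn't fit in the single octet Telnet allots for option codes, so the feature can never actually be negotiated off.

```python
# Standard Telnet command codes from RFC 854; the script itself is a
# hypothetical illustration, not part of either RFC.
IAC = 255            # "interpret as command" escape byte
DONT = 254           # client: "please disable this option"
RANDOMLY_LOSE = 256  # option code assigned by RFC 748

def refuse_option(option: int) -> bytes:
    """Build the IAC DONT <option> sequence a client sends to disable
    an option the server has offered with IAC WILL <option>."""
    return bytes([IAC, DONT, option])

try:
    refuse_option(RANDOMLY_LOSE)
except ValueError:
    # 256 doesn't fit in one octet, so random lossage cannot be
    # negotiated off, which is of course the joke.
    print("RANDOMLY-LOSE cannot be disabled; lossage continues")
```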