It's a good question and a fascinating subject.
One big problem in discussing it (and in arriving at any conclusion) is that the information we have about the actions of aircrew is heavily slanted towards the negative. Why? For the simple reason that we hear about accidents and incidents that were induced by pilot action, but we almost never hear about mishaps that were prevented by pilot action, unless they were dramatic enough to make the news.
There is an interesting analogy with the development of self-driving cars. In the course of their tests in California, Google are finding that their cars conform 100% to the highway code, as has obviously been programmed into them. The real world, however, doesn't always conform. The big challenge is to build in a sort of fuzzy logic that allows the car to 'think', which in extreme cases also involves ethical dilemmas. I suggest you read this excellent article on the subject.
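To give a rough idea of what 'fuzzy logic' means here: instead of a hard rule like "brake if the obstacle is closer than 10 m", the controller blends overlapping, graded categories and produces an in-between response. The sketch below is a toy illustration only; all the function names, thresholds, and the simple braking rule are invented for this example, not anything Google actually uses.

```python
# Toy fuzzy-logic controller: how strongly to brake given obstacle distance.
# All thresholds and rules are invented for illustration.

def membership_near(distance_m):
    """Degree (0..1) to which an obstacle counts as 'near'."""
    if distance_m <= 5:
        return 1.0
    if distance_m >= 20:
        return 0.0
    return (20 - distance_m) / 15  # linear ramp between 5 m and 20 m

def membership_far(distance_m):
    """Degree (0..1) to which an obstacle counts as 'far'."""
    return 1.0 - membership_near(distance_m)

def braking_force(distance_m):
    """Blend two rules: 'near -> brake hard (1.0)' and 'far -> coast (0.0)'.

    Returns a braking level between 0 and 1 as a weighted average of the
    rule outputs (a minimal 'defuzzification' step).
    """
    near = membership_near(distance_m)
    far = membership_far(distance_m)
    return (near * 1.0 + far * 0.0) / (near + far)

print(braking_force(5))     # clearly 'near': brake fully -> 1.0
print(braking_force(20))    # clearly 'far': no braking -> 0.0
print(braking_force(12.5))  # in between: partial braking -> 0.5
```

The point is that the output varies smoothly with the input rather than flipping at a single threshold, which is closer to how a human driver graduates a response; the hard part, as the article discusses, is choosing the rules and weights when the stakes are ethical rather than mechanical.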
Personally, I'd much rather live with the errors my fellow human beings (and I!) make than hand over my life to some algorithm.