I think there will continue to be applications where "essentially zero" will be perfectly fine, mostly where the AI is used as a tool to do the grunt work but a human will still be expected (or at least encouraged :-) to check the results.
What we're left with is a tool - an amazing tool, to be sure, but it's still a tool. To get to a level of AI autonomy that we can trust, we need to be able to firmly ground the AI in its environment (real or otherwise), just like an aircraft autopilot is firmly grounded in the sensory information it receives about the plane, airflow, temperature, etc., and in its objectives: keeping the plane on a level course and getting it to its destination.
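To make the autopilot analogy concrete, here's a minimal sketch of that kind of grounding: a control loop that acts only on sensed state and an explicit objective, never on anything it can't observe. This is purely illustrative (a toy proportional controller, not a real autopilot), and all names and gain values here are hypothetical.

```python
# Toy sketch of a "grounded" control loop: the controller's output is a
# pure function of sensed state (altitude) and its objective (target
# altitude). All names and numbers are illustrative, not from any real
# autopilot system.

def hold_altitude(sensed_altitude_ft: float,
                  target_altitude_ft: float,
                  gain: float = 0.01) -> float:
    """Return a pitch correction (degrees), proportional to altitude error."""
    error = target_altitude_ft - sensed_altitude_ft
    return gain * error

# One step of the loop: 200 ft below target yields a small nose-up correction.
correction = hold_altitude(sensed_altitude_ft=9_800.0,
                           target_altitude_ft=10_000.0)
```

The point of the sketch is the shape of the system, not the math: the controller is trustworthy precisely because every input it acts on is grounded in a sensor reading or a stated objective - the property the paragraph above argues current AI tools lack.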