Human-in-the-Loop

There’s a solid argument for the above. The Real World is an unforgiving place. Anything remotely safety-critical in robotics requires human oversight, not least because of culpability. We can equip our creations with all kinds of sensors and autonomous behaviours, yet the one thing you can almost guarantee is that at some stage the system will fail.

Additionally, if we’re trying to create simple solutions to complex problems, humans have millennia of evolution behind them for tackling exactly such challenges. Why wouldn’t we take advantage of that?

Autonomy can be applied to a greater or lesser extent, but ultimately we as humans should be making the key decisions along the way.

That kind of oversight isn’t just engineering best practice but humanistic best practice too. That is, if we are striving for a world ruled by machines, I would suggest we promptly reconsider the implications of such an outcome.

There’s almost something nihilistic about replacing ourselves with autonomy. It certainly feels dysfunctional to be engineering one’s own existential demise.

So, again, as we grapple with the societal implications of AI/ML, we should take the time to consider developmental trajectories. Let’s not be the authors of our own demise. That’s a deplorable epitaph.