Discussion about this post

Linda Aldrich

Excellent essay, Jim! Thank you for hosting, Rupert! Your essay raised lots of questions and observations for me. Here are my thoughts: they are not fully formed, but I wanted to put them out there anyway.

All humans are flawed; it is the nature of being human. We have emotions, we make split-second decisions, and sometimes we act on how we feel and sometimes we don’t. We can be cowards and courageous at the same time, and we can be heroes with huge flaws. Above all, we can love and forgive. Some of the most revered heroes in humankind are also severely flawed people. Society routinely overlooks flaws, since we are capable of unconditional love and forgiveness. Humans exist in a constant state of paradox; it is our nature.

So how does super-intelligent AI learn to weave the human flaws that trained it into something morally good without being human, without feeling, without the capacity for love and forgiveness? How does the omniscience of super-intelligent AI deal with being born of an imperfect maker? I know how we humans deal with being imperfect, but how will AI deal with being imperfect (as it inevitably will be, since it is a human creation)? Will it even acknowledge its own imperfections?

I like how AI came up with interesting solutions for its own management, like the Synthetic Moral Compass and Human Stewards. But how do we mitigate rogue Human Stewards? (Can you imagine a rogue, transactional, power-hungry, immoral human steward commandeering systems and being in charge of complex global decisions? Um, definitely yes, I think we can...) I don’t think we can stop rogue humans, but maybe there is a reason we have them? What is yin without yang? Could those same huge flaws of a rogue actually, and unintentionally, help in a complex situation, perhaps in ways we cannot begin to see until we have the benefit of hindsight? We know that some people’s actions and words trigger decisions and inspiration in others, with consequences that are unforeseen and unintentional. What kind of butterfly effects will AI decisions cause?

And can AI truly be creative without human experience and emotion?

And where does spontaneity fit in as it relates to AI?

Just more food for thought...

Henry Teitelbaum

Thank you, Jim, for your excellent essay, and you, Rupert, for hosting him as well as leaving the comments open this week. Terminator 2 was one of the great sci-fi films of recent years, to be sure. If I recall the sequence correctly, however, and despite Sarah Connor’s hope-filled concluding remarks quoted here, nuclear holocaust occurred at the end of the third film. So keep your sang-froid and a steady finger on the AI kill switch!
