Excellent essay, Jim! Thank you for hosting, Rupert! Your essay raised lots of questions and observations for me. Here are my thoughts; they are not fully formed, but I wanted to put them out there anyway.
All humans are flawed; it is the nature of being human. Humans have emotions, and we make split-second decisions; we don’t always act on how we feel, and sometimes we do. We can be cowards and at the same time courageous, and we can be heroes with huge flaws. Above all, we can love and forgive. Some of humankind’s most revered heroes are also severely flawed people. Society routinely overlooks flaws, since we are capable of unconditional love and forgiveness. Humans are in a constant state of paradox; it is our nature.
So how does a super-intelligent AI learn to weave the human flaws that trained it into something morally good without being human, without feeling, without the capacity for love and forgiveness? How does the omniscience of a super-intelligent AI deal with being born of an imperfect maker? I mean, I know how we humans deal with being imperfect, but how will AI deal with being imperfect (since it inevitably will be, being a human creation)? Will it even acknowledge its own imperfections?
I like how the AI came up with interesting solutions for its own management, like the Synthetic Moral Compass and Human Stewards. But how do we mitigate rogue Human Stewards? (Can you imagine a rogue, transactional, power-hungry, and immoral human steward commandeering systems and being in charge of complex global decisions? Um, definitely yes, I think we can…) I don’t think we can stop rogue humans, but maybe there is a reason we have them? What’s yin without yang? Can those same huge human flaws of a rogue actually, unintentionally, help in a complex situation, perhaps in ways we cannot begin to see until we have the benefit of hindsight? We know that people’s actions and words trigger decisions and inspiration in other people, with consequences that are unforeseen and unintentional. What kind of butterfly effects will AI decisions cause?
And can AI truly be creative without human experience and emotion?
And where does spontaneity fit in as it relates to AI?

Just more food for thought...
Thank you for sharing your thoughts in detail, Linda. Despite the “hopeful” ending to my essay, I share all of your concerns. Frankly, I prefer Sarah Connor’s solution (minus the violence) to the problem, but there is likely no going back to a pre-AI reality.
A significant number of things have to go right in order for something like an SMC to emerge, and at this time, unfortunately, not a lot is going right. AI will never truly feel, think, love, and yes, hate as humans do. But it can lie, and it has. With current LLMs like ChatGPT, these lies are detectable and for the most part innocuous. The danger is continuing to work toward AGI and ASI systems without first fixing this problem by incorporating reliable “lie detection” capabilities, a non-trivial challenge when dealing with a “superintelligent” AI.
As I said, a lot of things have to go right over the course of the next couple of years.
I surely hope they do. I know there are people currently working on AI protocols whom DOGE did not manage to get fired. I hope what they forge is solid and gets fast-tracked.
Thank you, Jim, for your excellent essay, and you, Rupert, for hosting him as well as leaving the comments open this week. Terminator 2 was one of the great sci-fi films of recent decades, to be sure. If I recall the sequence correctly, however, and despite Sarah Connor’s hope-filled concluding remarks quoted here, nuclear holocaust still occurred at the end of the third film. So keep your sang-froid and a steady finger on the AI kill switch!