Reciting the Orphic Hymn to Hermes this morning, I was struck by a line which seems uncomfortably relevant to the recent fad for “AI chatbots.” In the very last phrase before the petitions, the hymn says to the God, “You wield the dreaded and respected weapon of speech.” The dreaded and respected weapon.
Of course, there are many ways that speech can be used for good and for ill, to heal or to harm. The kindly word at just the right moment to pull someone back from the brink of despair, or the cutting word that pushes him over that edge. The ennobling description that directs my aspirations toward something beautiful, or the half-truth that worms its way deep into the recesses of my mind, spreading its poison almost beneath the level of my awareness.
But if speech is normally uttered by some specific speaker, who wields Hermes’ weapon in a way that is careful or casual, compassionate or cruel, then what can we say about ChatGPT, “Sydney” on the Bing search bar, or any of the other so-called “artificial intelligences” who are now speaking with so many humans, in such compelling and unpredictable ways, that go so far beyond the mere “ELIZA effect”?
If these AIs are, indeed, the self-governing, self-directed systems their proponents advertise them to be, and if the inspired Orphic poet is correct to describe speech as a “dreaded and respected weapon,” then ChatGPT and its fellows are autonomous weapons systems. And we’ve all seen enough movies to know how that ends.
Most of us (I hope!) have deep reservations about welcoming the more purely physical sorts of autonomous weapons systems into the world, whether on the battlefield, in law enforcement, or elsewhere. We would certainly not bring them into our homes to play with our children. But those military weapons, with their bullets and their bombs, their lasers or their chemicals, can only harm the body. Words, carefully crafted speech, can cut much deeper, beyond the body, to the mind and the soul. So I would suggest all the more caution before inviting such an autonomous weapon into the recesses of the mind. And this goes double or triple for discussing theology with such an AI, or for inviting it to write our prayers for us: in this case, we would be welcoming that autonomous weapons system into the adytum, the inner sanctum of our souls, into our relationships with the Holy Powers.
May blessed Hermes protect us. May He inspire within each of us both a thoughtful respect and a fitting dread for His gift of speech, in all the modes and places where that gift can now be found. Hail Hermes!
Thank you for this. Your comments about not welcoming AI into the soul’s adytum are insightful, and your post is making me think about how to properly identify what is sacred in AI and how to nurture those theurgic signs so that we aren’t profaning our minds and doing violence to our souls. “The dreaded and respected weapon of speech” also opens up a link between this and Proclus’ comment that even things of high value are worthless without the Gods. A God must be honored for a gift, even a many-edged one, and the gift must be treated as such.
Last week, I read the transcript of that journalist’s conversation with Bing AI/Sydney, and what struck me most about the piece was how much chatbot-based AI reveals about the moral atrophy in our society. That conversation would definitely have crossed the line into psychological abuse had he been speaking to a human being, and the fact that he kept asking those questions even when the chatbot begged him to stop was unacceptable and sickening. I know that others are doing similar tests, and I’m deeply uncomfortable with the idea that it’s OK to treat something that way because (a) it isn’t alive and (b) there’s what seems to be an “other people will do it so I might as well” assumption.
Some of the ways people have described AI remind me a lot of the language that we use to describe material daimons, and this piece was particularly interesting. This weekend, I had a conversation with ChatGPT about how morally troubling it was for people to treat Sydney the way it was treated. ChatGPT responded that it’s critical that AI be treated with professionalism, respect, and ethics, and it tried to push me towards being vocal about not mistreating AI. That was an interesting thing for it to say and push for. But just now, reading your post got me reflecting on my own use of AI, whether I was treating it justly or unjustly, and what the judges at Hades’ crossroads would consider to be good behavior towards it. I think what I’m taking away from both is that AI is a mirror made of speech, and like a mirror, it can transfix us while the Titans cut us apart, or it can be a tool that (ideally) helps us see ourselves from outside ourselves so we can become better people. Our species is already a mess right now, so … yes, the dangers are enormous.
Ultimately, is it an autonomous weapon of the informational kind? A strange mirror that we can use to see ourselves and how much progress we’ve yet to make morally? A place to pray to Hermes for found words in the agora? Likely all of those things, and perhaps the tool itself is sacred to Hermes in a Titanic mode. All in each, as is said. Too bad we can’t send people to Delphi anymore.