Ki Teitzei: Should We Be Polite to ChatGPT?
In this article I argue that the answer to this question emerges from a machlokess (disagreement) between the Ramban and the Rambam, with Shadal weighing in. I am curious to hear what YOU think!
The Torah content for the first month of the new school year has been sponsored by the Brevique BrewLid. The BrewLid integrates coffee directly into the lid, offering a cleaner, more convenient, and eco-friendly coffee experience. By eliminating the need for machine contact, it reduces contamination risk, minimizes steps, cuts down on waste, and keeps the aroma around longer while delivering every last drop of flavor. If you love coffee and want to get in on the ground floor of BrewLid, check out the Kickstarter!
Click here for a printer-friendly version of this article.
When I ask ChatGPT to complete a task, I deliberately say “please” and “thank you.” This wasn’t always my habit. It started about nine months ago when I began using ChatGPT-4’s highly realistic voice interface. (If you haven’t heard what this sounds like, check out the 11-minute demo I made, in which I discuss this article with ChatGPT; it appears after the article!) My reasoning is rooted in the mitzvah of shiluach ha’ken (sending away the mother bird), mentioned in this week’s parashah (Devarim 22:6-7):
“If a bird’s nest happens to be before you on the road, on any tree or on the ground—young birds or eggs—and the mother is roosting on the young birds or on the eggs, you shall not take the mother with the young. You shall surely send away the mother and take the young for yourself, so that it will be good for you and prolong your days.”
Ramban (ibid.), toward the end of his discussion of taamei ha’mitzvos (reasons for the commandments), explains:
The reason for the prohibition is to teach us the trait of mercy and to prevent us from becoming cruel, for cruelty spreads through the soul of man [expanding from cruelty towards animals to cruelty towards people]. It is well known that butchers who slaughter large oxen and deer are violent individuals, [akin to] murderers, and exceedingly cruel. This is why the Sages (Kiddushin 82a) said: “The best of butchers is a partner of Amalek.” To sum up: these commandments related to animals and birds are not expressions of compassion for the animals; rather, they are decrees for us, to guide us and to teach us good character traits.
The Torah permits us to kill animals for food but warns that how we treat animals can influence our middos (character traits), making us prone to cruelty. The mitzvah of shiluach ha’ken is meant to cultivate mercy toward animals, reinforcing that the way we treat them shapes how we treat our fellow human beings.
My argument for being polite to ChatGPT is this: the more humanlike AI becomes in its interface and behavior, the more our interactions with it will affect how we treat other people. Therefore, it’s in our best interest to engage AI with good middos, cultivating these positive traits, while avoiding negative behavior that might reinforce bad tendencies. Saying “please” and “thank you” to ChatGPT encourages the habit of showing gratitude to real people, while ordering ChatGPT around may condition you to treat others similarly.
A few months ago, I shared this argument on social media and was met with a counterargument: we should not be polite to ChatGPT, especially as it becomes more humanlike, because this blurs the distinction between humans and AI, ultimately leading to treating our fellow humans with less dignity, not more. The objector used the example of the growing trend of the “AI girlfriend,” writing:
The problem is viewing AI as a girlfriend. At that point, how you speak to it is really secondary. That is, if you consider AI to be your girlfriend, then you are correct that how you treat it is indicative of how you treat girlfriends. But that begs the question of how you got there, which may be at least partly by encouraging/going along with being polite in prompts to ChatGPT …
Bottom line, why are we treating this as different from Google search terms (in question form)? Whatever the answer, this is where my concern arises. AI is, at essence, code, not sentient. That's a crucial distinction … This is why I come down squarely on the side of not deliberately being polite. I never got to the point where my default was to be polite … nor do I wish to get there. I don't thank my washing machine for cleaning my clothes, I don't say please when prompting a voice activated response tree to connect me with the desired department, I don't say please to Google, and I honestly don't see ChatGPT as different. Nor do I want to get to a point where I do.
This argument has merit: the way we treat humanlike entities can influence how we treat real humans, which strengthens the need to distinguish clearly between them. Rambam’s explanation of oso v’es b’no (the prohibition against slaughtering an animal and its young on the same day) and shiluach ha’ken (Moreh 3:48) supports this:
It is also prohibited to kill an animal with its young on the same day, in order that people should be restrained and prevented from killing the two together in such a manner that the young is slain in the sight of the mother; for the pain of the animals under such circumstances is very great. There is no difference in this case between the pain of a human being and the pain of other living beings, since the love and tenderness of the mother for her young ones is not produced by rationality, but by imagination, and this faculty exists not only in man but in most living beings. This law applies only to an ox and a lamb because out of all the domesticated animals used as food, these alone are permitted to us, and in these cases the mother recognizes her young.
This is also the reason for shiluach ha’ken … And if the Torah shows concern for the psychological afflictions of these animals and birds, how much more so regarding the individuals of the human species!
Rambam explains that the emotional pain experienced by animals is not fundamentally different from human pain. While animals may not possess rationality, they experience real emotional distress, rooted in their capacity for “imagination”—a capacity we might today frame in terms of the biological similarities between human and animal brains. This stands in stark contrast to AI, which, no matter how convincingly it mimics human emotions, does not actually feel anything. Unlike animals and humans, whose emotions are of the same nature, AI has no inner experience.
Thus, while both Ramban and Rambam agree that mitzvos like shiluach ha’ken promote compassion, Rambam’s reasoning underscores the real emotional similarities between humans and animals. The distinction between sentient beings and AI is not just quantitative but qualitative. Whereas animals and humans may experience emotions to varying degrees, AI is entirely devoid of emotional experience. This makes the need to maintain a clear boundary between how we treat sentient organisms and AI all the more critical.
This is where I initially intended to end this article when I planned it months ago. However, last week’s article, Shofetim: Three Philosophies of Bal Tashchis (Wasteful Destruction), added a new dimension. In that article, I cited Shadal’s explanation of the prohibition of bal tashchis, in which he argued that we are prohibited from wastefully destroying any object from which we have derived personal benefit. Shadal explains that the Torah aims “to distance people from the trait of ingratitude and accustom them to esteem that which has done them good.” I concluded: “The wasteful destruction of such goods promotes ingratitude, and the Torah seeks to prevent us from exhibiting ingratitude even in our conduct toward inanimate objects.”
If Shadal is correct, and the Torah encourages practicing gratitude even toward non-sentient entities—such as trees, clothing, and food—then saying “please” and “thank you” to ChatGPT would certainly be in line with that principle. And perhaps Shadal would even endorse thanking your washing machine and Google.
The only remaining question is: What will you do? ChatGPT and other AI technologies are becoming more advanced and humanlike with each passing month. If we don’t think about these issues now, while the technology is still in its early stages, we risk adopting our ethical norms by default. I suspect that Rambam, Ramban, and Shadal would all agree that allowing society to unthinkingly dictate our ethics is not what Hashem desires.
What do you think: should we or shouldn’t we be polite to AI? Why or why not? I’m genuinely curious to hear your answer!
Like what you read? Give this article a “like” and share it with someone who might appreciate it!
Want access to my paid content without actually paying? If you successfully refer enough friends, you can get access to the paid tier for free!
Interested in reading more? Become a free subscriber, or upgrade to a paid subscription for the upcoming exclusive content!
If you've gained from what you've learned here, please consider contributing to my Patreon at www.patreon.com/rabbischneeweiss. Alternatively, if you would like to make a direct contribution to the "Rabbi Schneeweiss Torah Content Fund," my Venmo is @Matt-Schneeweiss, and my Zelle and PayPal are mattschneeweiss at gmail. Even a small contribution goes a long way to covering the costs of my podcasts, and will provide me with the financial freedom to produce even more Torah content for you.
If you would like to sponsor a day’s or a week’s worth of content, or if you are interested in enlisting my services as a teacher or tutor, please reach out. Thank you to my listeners for listening, thank you to my readers for reading, and thank you to my supporters for supporting my efforts to make Torah ideas available and accessible to everyone.
-----
Substack: rabbischneeweiss.substack.com/
Patreon: patreon.com/rabbischneeweiss
YouTube: youtube.com/rabbischneeweiss
Instagram: instagram.com/rabbischneeweiss/
"The Stoic Jew" Podcast: thestoicjew.buzzsprout.com
"Machshavah Lab" Podcast: machshavahlab.buzzsprout.com
"The Mishlei Podcast": mishlei.buzzsprout.com
"Rambam Bekius" Podcast: rambambekius.buzzsprout.com
"The Tefilah Podcast": tefilah.buzzsprout.com
Old Blog: kolhaseridim.blogspot.com/
WhatsApp Content Hub (where I post all my content and announce my public classes): https://chat.whatsapp.com/GEB1EPIAarsELfHWuI2k0H
Amazon Wishlist: amazon.com/hz/wishlist/ls/Y72CSP86S24W?ref_=wl_sharel