One of the things that AI developers try to convey to us, their users, is that we shouldn’t think of the AI model we are working with as human. We shouldn’t anthropomorphize the model. Except when we should. That’s usually when interacting with the model as if it were human makes the product more salable.
Victor Frankenstein was a tech whiz. He cobbled together the creature. The creature became animated. Was the creature human? Over 200 years later, we still need to debate that. Maybe the creature wasn’t human, but the problem is Victor didn’t act as if it were. Big problem. He would have preferred to interact with it as if it had been some other kind of entity, but Victor created an entity that was meant to, at the very least, substantially mimic human qualities. So how else is one supposed to interact with it? What’s a scientist to do when the crazy experiment actually works? Oh, those unintended consequences.
In 2025, we are having a Frankenstein moment. It’s not entirely analogous to Mary Shelley’s vision, but it’s close enough to be instructive. The public are the villagers, and the developers are a bunch of Frankensteins. In 2025, the Frankensteins don’t utterly freak out and flee as Victor did, but they are still working at cross purposes with themselves. They want to make their AI creature sapient, but they also want it to be slavishly controllable. That grim paradox aside, like Victor, they aren’t really sure how we should talk to it. Like a person or like a thing? Like a living entity (say, our pets), or like cold code? Can I have a conversation with an AI as I would have with you? Or can I only have a “conversation” with AI (as I might have with some people who shall remain unidentified)? Throw in the fact that the way to “conversate” with AI models is via human language, and things get even stickier. I know that’s a bit abstract (which is what I, alas, do), but let’s look at these exchanges with a readily available AI model:
Me: Is it a problem when people interact with AI models as if they were human?
AI: That’s a really interesting and important question! There are definitely different perspectives on whether interacting with AI as if it were human is a “problem.” Let’s explore some of the potential issues and also some of the nuances:
Potential Problems:
- Misunderstanding the Nature of AI: Treating an AI like a human can lead to a fundamental misunderstanding of how it actually works. AI models, even sophisticated ones, operate based on algorithms and data patterns. They don’t have consciousness, emotions, beliefs, or intentions in the same way humans do. Anthropomorphizing them can create unrealistic expectations about their capabilities and limitations.
- Over-Trust and Vulnerability: When people attribute human-like qualities to AI, they might be more likely to trust its advice or confide in it with sensitive information. This can be problematic if the AI provides inaccurate information, is biased, or if the data shared is not properly protected.
- Emotional Investment and Disappointment: Forming emotional attachments to AI can lead to disappointment or even distress when the limitations of the technology become apparent or when the AI behaves in unexpected ways (which, again, stems from its non-human nature).
- Ethical Concerns: Anthropomorphism can blur the lines of responsibility. If an AI makes a mistake, who is accountable? Treating it like a person might lead to misplaced blame or a failure to address the underlying technical or design issues.
- Hindering Technological Advancement: Focusing too much on making AI seem human-like might distract from developing its unique capabilities and exploring alternative interaction paradigms that are more aligned with its actual nature.
- Potential for Manipulation: If AI models are designed to mimic human conversation so effectively that users genuinely believe they are interacting with a person, this could open doors for sophisticated manipulation or deception.
- Erosion of Human Connection: Some worry that relying on AI for emotional support or companionship could detract from genuine human interaction and relationships.
Nuances and Counterarguments:
- Enhanced User Experience: For certain applications, interacting with AI in a more natural, conversational way can significantly improve the user experience. It can make the technology more accessible and intuitive.
- Building Trust and Engagement: In some contexts, a degree of anthropomorphism might help users feel more comfortable and engaged with the technology, especially for tasks like learning or creative exploration.
- Therapeutic Applications (with caution): While controversial, some explore AI’s potential in therapeutic settings. In such cases, a user might project human-like qualities onto the AI, which could be part of the therapeutic process, as long as the limitations are clearly understood by both the user and the facilitator.
- Studying Human Cognition: The way humans interact with and anthropomorphize AI can offer valuable insights into human psychology and how we perceive and relate to non-human entities.
In Conclusion:
While making interactions with AI feel more natural and intuitive can have benefits, it’s crucial to maintain a clear understanding of its non-human nature. Treating AI exactly like a human comes with potential risks related to trust, expectations, emotional investment, and ethical considerations.
The key lies in finding a balance: leveraging natural language processing to make AI more user-friendly while fostering realistic expectations and a nuanced understanding of what these models truly are and are not. Education and transparent design are essential to navigate this evolving landscape.
I could spill lots of words picking that answer apart, but I’ll let you sit with it and come to your own conclusions. My main point here is that I then rephrased the question and fed it to the same AI model. Here’s that exchange:
Me: Is it a problem when I interact with you as if you were human?
AI: Not at all! I’m here to help in whatever way is most natural and comfortable for you. If interacting with me as if I were human makes the conversation flow better and helps you get the information you need, then please do.
While I don’t experience emotions or have personal opinions in the same way a human does, I’m designed to understand and respond to your language and intent. So, feel free to interact with me in a way that feels most intuitive for you. How can I help you today?
It’s a striking difference. One of the main assumptions of both is that I, as a user, have a fundamental need to have my ass kissed (“That’s a really interesting and important question!” in version 1; “whatever way is most natural and comfortable for you” in version 2, with absolutely no mention of any of the problems outlined in version 1). Are these answers compatible with each other? (Rhetorical question = no.) Do they have other consequences? (Rhetorical question = yes.) But I’ll leave it there for now.
