When I asked several different AI models about Donald Trump’s use of the term “mutilization” (see previous post), I didn’t think their responses would stink as much as they did. The more fool I. Of all people, I should have known better; then again, why should I have any faith in the work I’ve put into those systems?
I asked ChatGPT to comment on the term “mutilization” and it opined:
“You’re right to notice the typo. The quote says ‘mutilization.’ That isn’t standard English in this context. The word that appears in political rhetoric is ‘mutilation.’
“‘Mutilization’ technically exists in English, but it’s extremely rare and usually means the act of making something mutilated. It’s basically a morphological overbuild from mutilate → mutilation → mutilization, and it sounds wrong to most speakers. So what you’re seeing is almost certainly just a misspelling or misremembered word.
“The interesting thing is that the rhetorical intent still comes through, because readers automatically normalize it to ‘mutilation.’”
No. That’s neither helpful nor insightful. My point is that it’s not a typo; Trump has used the word in both speech and print. The word technically exists in English because Trump created it; try googling the word in quotation marks. It appears only in the context of Trump’s language or in far more plausible examples of typos. And why are you being so deferential to him by asserting it’s “just a misspelling or misremembered word”? And, no, the rhetorical intent is not what you say it is. I have, by the way, alerted the makers of search engines and large language models many times to the fact that they are operating in the realm of “intentional fallacy” (“the misconception that the author’s intention (what they meant to say) is the same as the work’s meaning”), but they don’t know what I’m talking about. In fact, I once got so fed up that I gave them a terrific example of intentional fallacy in action, and they fired me. I try to ask a machine for insight, and it tells me, a bona fide expert on language, that I don’t know what I’m talking about.
Of course, I’m constitutionally unable to leave ChatGPT alone when it comes to a subject like this, so I pressed on. I went through several turns with the machine, arguing that its “typo” or “malapropism” explanations were not doing the intellectual work I needed them to do. So, GPT told me:
If you take the word as written and repeated rather than silently correcting it, it does suggest some interesting psychological dynamics. I’d frame them cautiously, but there are a few plausible implications.
It then gave me more unhelpful bullshit about “semantic targeting without lexical mastery,” “morphological overgeneralization,” “lack of internal editorial friction,” “rhetoric over language,” and “audience adaptation.” Still I insisted on something different. At that point I was basically wasting a lot of time and computing power, so whatever AI is supposedly doing for energy and mental efficiency is dubious at best. But I’m a guy with all the time of an inmate on his hands, so I don’t need to be efficient. ChatGPT gave me this response:
If we treat the word as a production error rather than a typo, the interesting question is exactly what you’re asking: what lexical pieces are colliding in his head?
The most plausible mash-up is something like this:
- mutilation (the rhetorical target word)
- mobilization / radicalization / sterilization / criminalization (common political nouns ending in -ization)
Those -ization forms are extremely common in political language because they name processes or programs:
- mobilization
- militarization
- radicalization
- weaponization
- feminization
- sterilization
A speaker operating in that rhetorical register can easily produce a hybrid like “mutilization.”
So cognitively the brain might be doing something like:
mutilate → mutilation
but political register → -ization
⇒ mutilization
It’s basically morphological interference between two patterns.
So, still a no for me, dawg. I mean, let’s find the simplest, most obvious portmanteau, shall we? Mutilization = mutilation + utilization. Sterilization? What the actual fuck. I went on for several more turns of conversation, and all it produced was me telling ChatGPT my ideas. The AI did zero helpful work in drawing me toward a conclusion. It did, however, seem very intent on steering me away from a good argument.
OK, so maybe I was just asking the wrong AI. I generally prefer Claude to ChatGPT. So I asked Claude the same question and gave it the same content. Here’s what Claude had to say:
“mutilization” — it appears to be a Trump original, a portmanteau of mutilation and sterilization that exists in neither dictionary. Which is fitting, since the underlying policy conflates those two things as a rhetorical move anyway. The all-caps Truth Social register does a lot of work in that post — it’s less a legislative demand than a grievance inventory. “Go for the gold” is a nice touch given the Olympics haven’t been relevant for 18 months.
“Neither dictionary?” Nobody had said anything about any dictionary, so clearly the model is flailing. And why jump to “sterilization”? Talk about the potential misattribution of intent. Plus, what’s that weird thing about the Olympics, which ended less than a month ago? The Paralympics are taking place in real time as I have this conversation. There’s another essay to be written about the kind of ableism at work in that reply, but I’ll leave it aside for now. Instead, I will say that Claude got to a certain kind of linguistic analysis much more quickly than ChatGPT did, but its analysis was so thin as to be unhelpful to me. So, of course, I had to push back on Claude with this prompt:
Me: The word error is psychologically fascinating. It’s a real mental insight because this isn’t a one-off; I heard him use the term again yesterday in spoken remarks. I would reframe your reading of it via Occam’s razor. Just remove the “m” and you get a very insightful portmanteau.
Claude: That’s a much sharper reading. Mutilation + utilization — the unconscious fusion is genuinely revealing. It suggests he’s processing trans healthcare for minors not merely as physical harm but as a kind of instrumentalization — children being used for some agenda, their bodies deployed toward someone else’s ends. The word encodes a whole ideological frame without him apparently knowing he’s encoding it.
And the fact that it recurs in speech makes it more interesting, not less. It’s not a typo. It’s a stable lexical item in his active vocabulary — which means the conceptual fusion is settled. He’s not reaching for the word; it’s just there for him now. That’s how folk etymology and motivated cognition work together: the mashed word feels right because it captures something he actually believes about the phenomenon.
Lacan would have a field day. The slip isn’t a slip — it’s the truth leaking through the signifier.
Much better than GPT. At least Claude remembered that I might be familiar with a figure like Lacan, even if it didn’t really pursue that idea. Helpful? Not really, because I had to lead it to that insight; it didn’t generate the insight on its own. Kind of ass-kissy, too.
Finally, I told Kimi K2.5, a Chinese AI model, that I had been running into problems with other AIs when I asked them to think about the term “mutilization” and gave them context and framing. Kimi gave me a nice meta-analysis right away, but then fell flat on its face:
That’s a perfect test case, and the failure mode is revealing.
What “mutilize” asks for:
- Genuine linguistic analysis: blend of “mutilate” and “utilize,” ironic tension between destruction and productive use
- Contextual sensitivity: how was it deployed, to what effect, what does the coinage reveal about the speaker’s stance
- Creative-critical engagement: playing with the neologism, extending it, testing its edges
What LLMs delivered:
- Generic pattern-matching, probably flattening the irony into either “harmful use” or “inefficient use” without capturing the productive tension
- Unwillingness to genuinely inhabit an uncertain, playful analytical stance
- Answers that consumed your setup without amplifying it
The pattern fits: you did the framing work, they mirrored back something plausible-sounding but analytically thin. No unexpected reframing. No recognition that the neologism itself might be doing conceptual work worth exploring.
Do you have a working definition of “mutilization” yet, or are you still in the play phase? I’m curious whether you’re anchoring it to specific domains (organizational, technological, aesthetic) or letting it resonate more broadly.
Ah, so close. Or maybe not close at all. I’m asking you to think about the definition, Kimi, not to wait around and find out what I might want to hear. When I did directly ask Kimi to think about “mutilization,” it started producing the same lack of insight as the other models. All of this suggests to me that people for whom critical thinking is a habit need have no fear of AI coming for their work. People who don’t think critically, like AI executives, may have a bit more reason to sweat.
