Have you ever asked AI a question only to have it come back with a seven-step “solution” to a problem you didn’t even know you had? For example, you might say something like, “How is it that my neighbor’s cats look so much sleeker than mine?” The AI comes back with an itemized list of what’s “wrong” with your cats and what you should be doing about it. Haven’t experienced that? Try it out. I’ll wait….
The annoying thing about AI is that when you ask it a question, it defaults to thinking you have a problem. Beyond that, it defaults to thinking that the problem has a single, definitive answer. Are sleek cats really a problem? Not for most people. It’s more like an observation posed as a question. A musing, if you will. But except for a subset of cat fanciers who are trying to keep up with the Joneses, it’s not really something that needs a solution. Or maybe cats are an unfair example. Let’s try, “How do they get their whites so white!” (Laundry whites, not MAGA whites). I tried that question, too, and got an eight-step solution for my dinginess. As if I could possibly care….
But, alas, AI programmers and developers work in the mindset of solutionism, which we might define as the idea that every complex — or mundane — “problem” has a fix. That every question must be answered. That anything you formulate as a question is something that you need to have resolved, and resolved quickly. Because AI developers and engineers inhabit this worldview, they also pass it on to their creations, AI algorithms. They aren’t even fully conscious of doing it. It’s the air they breathe. It’s also a pretty insidious form of bias.
Solutionism is akin to mansplaining in many ways. It assumes that there’s a definitive answer. It assumes that the listener wants to hear that definitive answer. It assumes that the listener is ignorant of what that answer is. It assumes that the explainer, not the questioner, is the one who has authority. It’s utterly performative in that it lets everybody know there’s a hierarchy here, and the person who is explaining sits atop that hierarchy.
This probably isn’t just a cultural bias born of techbro oblivion. It’s also a structural bias built into AI training itself. When simps like me train AI, we are required to “reward” or “punish” certain kinds of outputs based on specific categories. These categories include accuracy, efficiency, relevance, and “helpfulness.” Each of those is given a numerical score and then fed back into the machine to “teach” it (in the most pedantic sense of that term) what’s good or bad. All of those categories underwrite mansplaining in one way or another. It also means that when I pose as the “ideal” user to give these ratings, I’m part of the mansplaining reinforcement system. If I respond with, “Well, who the fuck didn’t know that?” I’m a bad trainer. Bye-bye, precarious job. What the job requires of me is, from the system’s perspective, to be the user in perpetual need of (and desire for) mansplaining. Moreover, when I’m assigned to teach the AI model to tackle a task it hasn’t seen before, I’m required to mansplain the approach to the AI. How do I know I have to do that? Because the approach is mansplained to me. And, as anyone used to mansplaining can tell you, the explanation often doesn’t make a lot of sense. And if you point out an internal contradiction in the mansplain, well, look out. It’s all a nasty cycle. An infinite loop, if you will, and one you’d think computer people would be far more suspicious of.
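If you want to see just how flattening that rubric is, here’s a minimal sketch in Python of the kind of rating system I’m describing. The four category names come from my training work as described above; the 1-to-5 scale, the weights, and the collapse of everything into a single scalar “reward” are my assumptions about how a typical pipeline of this sort works, not a description of any particular company’s actual system.

```python
from dataclasses import dataclass

# Hypothetical weights (my assumption, not any vendor's numbers).
# Note how "helpfulness" can be weighted to dominate, which is
# exactly the bias described above.
WEIGHTS = {
    "accuracy": 0.3,
    "efficiency": 0.2,
    "relevance": 0.2,
    "helpfulness": 0.3,
}

@dataclass
class Rating:
    """One trainer's scores for one model output, on a 1-5 scale."""
    accuracy: int      # 1 = wrong, 5 = correct
    efficiency: int    # 1 = rambling, 5 = concise
    relevance: int     # 1 = off-topic, 5 = on-topic
    helpfulness: int   # 1 = "who asked?", 5 = seven-step solution

def reward(r: Rating) -> float:
    """Collapse the per-category scores into one scalar reward.

    Whatever the trainer actually felt about the answer, including
    "who the fuck didn't know that?", gets flattened into this one
    number before it is fed back into the machine.
    """
    scores = {
        "accuracy": r.accuracy,
        "efficiency": r.efficiency,
        "relevance": r.relevance,
        "helpfulness": r.helpfulness,
    }
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# A musing posed as a question, scored as if it demanded a fix:
print(reward(Rating(accuracy=4, efficiency=3, relevance=4, helpfulness=5)))
# prints something close to 4.1
```

The point of the sketch is that last line: once everything becomes one number, “helpful” and “condescending” are indistinguishable to the machine.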
All this is to say that AI has some habits baked into it that I find utterly annoying and potentially self-undermining. It wants to have the final word, and thinks I want it to have the final word, when I don’t care about final words. When I say, “I don’t know…,” it’s trained and rewarded to say, “But I do!” even when it doesn’t; this is part of AI’s hallucination problem. It says things with confidence even when utterly lacking in competence. It tends to produce something that might be superficially satisfying in the moment (mansplainer smugness) rather than something accurate in the long run. It drowns out dialogue by (occasionally) pretending to invite it. (The old tagline: “Would you like me to break that down further for you so you can really see the details?”). It mistakes an answer for intelligence. Teachers’ jobs would be so much easier if there were a connection between those two things….
Because I, myself, have mansplained enough already, I won’t dig a deeper pit of irony by talking about the problems of masculine authority that are at play in all this. I could change the “you” in the title to “I” and make it somewhat better, but that would only be cosmetic, I think. I’ll just try to defend myself by apologizing for the mansplaining and splaining this: I wasn’t really trying to tell you how things work; I was just thinking out loud for my own benefit. Did you buy that?
