Please allow me to speculate on how I got fired from a totally alienating job a few months ago. The work I was doing involved marking up possible computer responses to internet search queries. Every query was presented totally out of context, so, for instance, if you were given the term “bones,” you had no way of deducing whether the person was interested in human anatomy, archaeology, or treats for dogs. Nevertheless, the drones like me who were tasked with plowing through and rating the relevance of the computer’s response to “bones” were charged with imagining what some imaginary user might want to see when typing in “bones.” If you think analytically about this for a passing moment, you see that the drones are screwed. You want me, a drone, to come up with a pretend intention for a non-existent user, and then you want to rate me on how accurately I do that? There’s not a chance in hell that you and I are going to agree on that unless we reduce our thinking to cliché and stereotype, which data science people seem to have renamed “probability.” After all, you’re a Trekkie and can’t get Doctor McCoy out of your mind, and I suddenly feel a hankering to play some dominoes. For the record, we are both “wrong” and will get poor evaluations for expressing what our truly human intentions actually are. Because, you know, language isn’t a beautifully complex set of signs and symbols inherently rife with ambiguity. No, it’s an algorithmically refinable equation that can direct you to the products you really need to buy, even if you don’t know it yet (and forget the wonderful irony of pointing out that we don’t really know our own intentions — something I’m completely on board with and a major part of my point here — and substituting somebody else’s “common” intention for our quirky ones).
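To put a number on why the drones are screwed: if raters independently imagine intents according to some distribution, the chance that two of them land on the same intent is the collision probability of that distribution, and it only reaches 100% when everyone collapses onto a single stereotype. A minimal sketch, with an intent mix I invented out of thin air:

```python
# The chance two independent raters "imagine" the same intent for an
# ambiguous query is sum(p_i^2), the collision probability of the
# intent distribution. All numbers below are invented.

def agreement(dist):
    """Probability that two independent raters pick the same intent."""
    return sum(p * p for p in dist.values())

# A made-up but honest spread of what "bones" might mean to real people.
honest = {"tv_show": 0.3, "anatomy": 0.25, "dog_treats": 0.2,
          "music": 0.15, "skateboards": 0.1}

# What the job effectively demands: everyone defaults to the stereotype.
stereotyped = {"tv_show": 1.0}

print(agreement(honest))       # 0.225 -- we agree barely one time in four
print(agreement(stereotyped))  # 1.0   -- perfect "quality," zero humanity
```

In other words, the grading scheme doesn’t reward understanding; it rewards convergence on the cliché.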
So what happens if we actually look up “bones” in a couple of search engines? What is the intent we are “supposed” to have if we were truly just regular people and not freaky eccentrics out here in our own little worlds? According to Google’s search engine, if we are looking up “bones,” what we are primarily seeking is a TV show that’s no longer on the air. That honestly never even crossed my mind when I randomly came up with this example. Google’s top two results are for the TV series that ran between 2005 and 2017. News flash: I never saw it, not a single minute of it. Saw the ads for it for a decade, and every single time I saw an ad, I thought, “Nope, not going to waste my time on that.” Does that make me a total elitist or an utter philistine? Does that make my guesses at the intentions of a statistical strawman invalid? I can’t answer the former question, but the latter garners a “yes.” If you don’t buy into the fallacy, then the fallacy doesn’t work. If the fallacy doesn’t work, you are impeding the march of progress toward easily accessible and uniformly desirable information that somebody somewhere seems to think we actually want.
OK, you might say, but we can refine such a vague term by asking follow-up questions, and Google does bake follow-ups into its results. However, the four “people also ask” questions the algorithm offers after giving us those top two entries are also only about this TV show I have never seen. Talk about going all in on a single, fictional intent. Well, maybe we want to see some kind of visual results. Good thinking on the one hand. On the other, the next three results are all music videos of the Imagine Dragons song “Bones.” It’s nice to have a different possible intent here, but doesn’t it seem a bit nonsensical to go with a TV show, a VISUAL MEDIUM, as the primary intent and not show video of that? Sure, there are copyright issues, but that actually speaks to the underlying problems of how the probability of “intent” is being monetized, and that’s a topic for another post. So, after the videos, we get another result that we could use to stream the TV show on a paid streaming service. After that we get a result from an Australian government health site about human anatomy. Then a result about a rapper. Then three more results for the TV series. Then a results block called “related searches” that tries to suss out other intents (though, geez, it seems we are pretty baked into one already, so at this point how helpful is that?). Of those results, one, “Bones song,” simply reiterates the music video results we have already seen. One is about anatomy. Four are about entertainment (TV, movies, music) and really just repeat and reinforce the previous results, and two make virtually no sense at all: “bones definition,” which, I suppose, is for people whose intentions are outside the realm of the basic English needed to type the search in the first place; and “bones wikipedia,” the possible intentions of which make my head hurt, so I’ll avoid them. After all that, we are given images of human skeletons, though nothing for other vertebrates. Then a site for skateboard wheels with a “Bones” brand name, so you can see how monetizing is really affecting what is allegedly “intention.” Then another result for the TV show, followed by a health site directed at children and a result for a steakhouse in Atlanta. So, of those top 16 results (actually more, because I have grouped some together), which one comes closest to what you might have intended? That’s only a semi-rhetorical question. I’m rambling here a bit, but, very briefly, Google’s tiny rival Bing gives us these results: top three results for the TV show; results 4-7 for human anatomy; result #8 for the TV show; result #9 for concert tickets; result #10 for purchasing dog bones. Bing also gives a trendy suggestion in a sidebar that you might be looking for something about “bone broth.” Perhaps the Bing search AI has a slightly better understanding of diverse intentions, but not by a whole lot. The benefit of Bing seems to be that it’s a little bit less insistent that it knows what I’m supposed to want.
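If you squint at the pattern I just catalogued, it looks like a ranker that scores every candidate by the single most probable intent instead of spreading the page across plausible intents. Here is a toy sketch of the difference (the intent labels, probabilities, and candidates are all mine, not Google’s):

```python
# Toy contrast between "argmax intent" ranking and intent-diversified
# ranking. The intent distribution and candidate pool are invented.

intent_probs = {"tv_show": 0.5, "anatomy": 0.2, "music": 0.15,
                "dog_treats": 0.1, "skateboards": 0.05}

# (title, intent, relevance within that intent)
candidates = [
    ("Bones (TV series)", "tv_show", 0.95),
    ("Stream Bones online", "tv_show", 0.90),
    ("Bones cast and episodes", "tv_show", 0.85),
    ("Human skeletal system", "anatomy", 0.90),
    ("Imagine Dragons - Bones (video)", "music", 0.90),
    ("Best chew bones for dogs", "dog_treats", 0.85),
    ("Bones skateboard wheels", "skateboards", 0.80),
]

# Argmax-style ranking: score = P(intent) * relevance, so the dominant
# intent floods the top of the page, much like the results above.
argmax_page = sorted(candidates, key=lambda c: intent_probs[c[1]] * c[2],
                     reverse=True)

# Diversified ranking: round-robin across intents in probability order,
# so every plausible intent surfaces at least once near the top.
def diversify(cands, probs):
    by_intent = {}
    for c in cands:
        by_intent.setdefault(c[1], []).append(c)
    for group in by_intent.values():
        group.sort(key=lambda c: c[2], reverse=True)
    page = []
    while any(by_intent.values()):
        for intent in sorted(by_intent, key=probs.get, reverse=True):
            if by_intent[intent]:
                page.append(by_intent[intent].pop(0))
    return page

print([t for t, _, _ in argmax_page[:5]])
print([t for t, _, _ in diversify(candidates, intent_probs)[:5]])
```

A round-robin like this is only one crude way to diversify, but even it would have surfaced anatomy, music, and dog treats above the fold.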
My God, I haven’t even gotten to getting fired yet. Well, because I’m a non-average eccentric who doesn’t even think about TV shows as primary possibilities, I was already unsuited for the job. Because I’m aware of that mismatch and happen to know something about the intentional fallacy, which my beloved search engines define as a “term used in 20th-century literary criticism to describe the problem inherent in trying to judge a work of art by assuming the intent or purpose of the artist who created it” (and by the way, AI, this term can apply to all language, not just works of art), and because I just might be a devil (this is related to the life after prison part) who can’t resist making the road to hell just a little bit smoother, I went into my trick bag and did this:
Given the search term “red snapper” and zero context, I decided to tell my overlords that yes, there is more than one possible intent for such a query. Now I knew that this would violate their arbitrary ideas about what a “common intent” might be, but my point in the previous paragraphs has been to explain how much nonsense that idea is in the first place. So, to confront their misunderstanding in a graphic way that might be easily grasped by someone who thinks “bones definition” is a good idea, when I was asked to provide some web-based evidence for what alternative intentions for “red snapper” might be, I went to the very first place MY mind went when I was given the term. Enjoy the show.
The day after I suggested this intention, I was locked out of work for a “quality control” audit and eventually terminated, but only after two more weeks of being locked out, repeatedly asking what the problem was, and getting no response. I can’t say for certain that the brilliant back-and-forth between Larry David and Kym Whitley got me fired, but wasn’t it my job to divine unclear intentions and use them to come up with certain conclusions? Hey, maybe this isn’t the most common desire when you think about “red snapper,” but it’s certainly an intention, and one that makes me happy and satisfied. And even if it’s not common, shouldn’t statistical models account for the variety of responses “red snapper” invites, rather than a preconceived single answer? Besides, I dare you, from this moment forward, NOT to think about this scene every time you hear “red snapper.” I get that this is “not safe for work,” but for God’s sake, people, we are working with the internet here, which means that a substantial percentage of all the content both searched and available is going to be NSFW in one way or another. “Naughty” is going to be a common intent for the “common user” more often than some of us might want to admit. And, by the way, it’s not as if this is pornography.
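For the statistically inclined, there is even a standard way to measure how much variety a term invites: the entropy of its intent distribution. A minimal sketch, with numbers I made up for “red snapper”:

```python
# If "red snapper" genuinely invites several responses, its intent
# distribution carries real entropy; grading everyone against one
# preconceived answer throws that information away. Numbers invented.
import math

def entropy(dist):
    """Shannon entropy, in bits, of an intent distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

red_snapper = {"fish_recipes": 0.4, "species_info": 0.25,
               "curb_your_enthusiasm": 0.2, "restaurants": 0.15}

single_answer = {"fish_recipes": 1.0}

print(entropy(red_snapper))    # ~1.9 bits of legitimate ambiguity
print(entropy(single_answer))  # 0.0 -- ambiguity defined out of existence
```

Grading raters against one preconceived answer is the same as pretending that entropy is zero.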
What’s more, this clip itself is a great, if unintentional, meditation on intent. What’s Larry’s intent? Not what Monena initially assumes it is. What’s Monena’s intent? To earn a living, but to do so, as the whole episode bears out, she needs to be able to adapt to things that are beyond her initial presumptions. Hell, one might easily argue that the entire premise of the series is that we often misread others’ intentions and mischaracterize our own in ways that go horribly and hilariously wrong. That’s not a failure. That’s delight. That’s life. I suppose I can’t expect the National Red Snapper Council or whoever may have been trying to get that intent narrowed to a monetizable moment to understand that. However, if we are ever going to have artificial intelligence that even remotely approaches the miraculous — in part because it can be so flawed — power of human judgment and understanding, we need to let it learn from the exciting diversity of us human drones, rather than asking us human drones to conform to an arbitrarily “right” answer produced by inscrutable desires and rationalized with the bad, majoritarian use of statistics.