I’ve added this image; you’ll see it under the menu links that have to do with AI:

It’s some badass R. Crumb. I came across it as the first web hit and the first image hit for a Google search on “r crumb technology.” Other than that, I had no context for it. I didn’t even know it existed before a couple of weeks ago.
I had been working on an AI training project that had this premise: Because of your background, we consider you a domain expert in the humanities. We want you to help train an AI assistant in your field of expertise that would be useful to someone in a position like yours. Craft a detailed prompt that would take you or a colleague several hours to accomplish. Then craft rubric criteria to evaluate the AI model’s response. Then evaluate how well the AI does.
OK. That seemed like an interesting premise. However, I’m mindful that if an AI can do in a few seconds what it would take human experts hours to do, then the idea of expertise has probably been drained of all meaning. It was OK to feed images into the AI for analysis, so using this image that I found provocative might actually be an additional layer of fun and appeal.
So, I adopted a persona, as one always must in tasks like these, of a humanities professor who is participating in the development of an art exhibit about “The Future is Now.” I tell my assistant that I love this image and think that it’s relevant to the exhibit, but I need to track down information about who created it and the context in which it originally appeared. In terms of the image itself, I need my assistant to put it in the framework of at least two critical theorists who might be relevant to both the theme of the exhibit and the image itself. I also want the assistant to give me a brief but astute close reading of the visual composition of the image and the text found within the image and how they relate to each other. I want to use that interpretation to check my own sense of what I’m seeing. The assistant can also sketch out some parallels and differences between the historical moments when this work of art first appeared and today. A short bibliography of related primary and secondary materials could also be really helpful, briefly annotated, of course. And throw in a draft pitch about why the museum might want to think about acquiring this for their permanent collection.
Did I know that the AI was not going to be up to the task? Of course, but the instructions also told me to design an assignment that the AI would botch pretty badly. Would it take me personally three hours of human labor to complete this one assignment? No. It probably wouldn’t take a good upper-level undergrad research assistant that much time, either, but that timeframe was wildly arbitrary in the first place. I submitted my work and waited.
The reaction I got is probably best summed up as, in the words of the fellas in prison, “MAD.” Mad that this “wasn’t the kind of thing we are supposed to be asking.” Huh? Seems like exactly the kind of thing I have asked of assistants before, to some acclaim. Mad that my criteria were too vague, because I’m “supposed” to give criteria so specific that “anybody could come in off the street” and perform this task adequately. Well, there goes expertise; now it’s a checklist I can give to the guy in line behind me at Checkers. There goes judgment as an aspect of human intelligence; when giving options to something or someone else, I apparently have to spell out all the options and the consequences of each option for them. There goes any use for an assistant. If I have to do all this in advance for someone else, then there’s no sense in my asking someone else to do it for me. Please go wash the dishes that I’ve already washed, and wash them exactly like I told you. Mad because, perhaps, they weren’t a “domain expert,” and so they wouldn’t be able to figure out how to do the task either? This is as absurd as my being upset with a surgeon for not giving me an atomized checklist for how to successfully remove a spleen. Just ain’t gonna happen.
And was there another meta-mad? That is, did the people who looked at this just hate the image itself? Did their instantaneous, human, emotional reaction to it guide their response? Did they personalize the image? Was it impossible for them to reach the kind of detached understanding an artificial intelligence might manage? Mad because there’s no objective humanistic response? Did you think a “domain expert” in the humanities was going to tell you there was? Do you understand the image? Can you hear Crumb smashing his head against his desk in France?
The thing about training AI is that it works on the presumption that we can “all be normal” and that AIs can be the same. There are many reasons for this, and many posts to come about those reasons. But this is just one reason why I love Crumb.
