So after about a year and a half of working as a human trainer for artificial intelligence and search algorithms, my ass got put into the unemployment line for a lack of “quality” in my work. I don’t know specifically what that means, because the corporations that rely on exploiting desperate labor are usually non-transparent about their evaluations. The virtual boss (and I never actually saw another human being while working this job) provided a set of flow charts along with a long and — when you really break it down — unhelpfully consistent handbook that tells you how you should score search results, but nothing specific to the tasks they actually assign their workers. In fact, they plant pre-determined tests in your normal workflow that are intended to catch the “incompetent.” The problem there is that if these tests and the reasoning behind them are ridiculous, then only people who unthinkingly follow their flow charts will seem competent. That, I know, is not as well explained as it could be, but I’ll delve into the details of their illogical logic in some other posts.
Let me give you a little example, however, of how a search algorithm produces an utterly asinine result and how any level of actual intelligence at all would produce a superior outcome. So, I’m spending a lot of time trying to find work, and as a convicted felon, that’s pretty much a dead end. Add to the conviction the facts that I’m starting to get “old” for the workplace (anybody over 50 being pretty much poison for the younger people doing the hiring) and that I have a PhD (which makes me overqualified for “menial” jobs, too advanced for entry-level professional jobs, and when combined with a felony conviction, translates to “GED dropout” for jobs that actually require a PhD), and the chances get even slimmer. I mean Lowe’s won’t even give me an interview, and I’d be happy breaking my back to stock their shelves overnight.
But moving away from that screed of hate and self-pity and back to the real topic: I was on the world’s most popular search engine seeking some algorithmically smart career guidance, and I typed in “formerly incarcerated professionals.” How would you, as an intelligent human, interpret that input? You would likely see that it’s ambiguous. I could be looking for a list of names. I might be seeking jobs. As a human you would probably want to follow up and clarify, but that’s not how AI works. Nor is it how AI companies are training it to work. Basically, they are training it to provide instant, cock-sure feedback to equivocal human input. That produces really stupid results like this:
My apologies that there is so much to digest in that screen capture, but I hope some of the entries there will make you do a bit of a double take. I’m not going to walk through every entry, but I do want to highlight a few just to point out why I’m getting a bit of vertigo at the intersection of employment and AI.
First, as part of trying to be a good employee for my former overlords, I have the generative AI turned on as part of my Google searches. Notice that this experimental feature is the FIRST thing you see rather than some kind of supplement. They can’t really control it, but why not just make it seem as if it’s top of the line by making it top of the list? The generative AI result also carries that foolish disclaimer, as if the quality of all their searches were not varied. At any rate, I’m looking up “professionals” and the generative AI is returning garbage, construction, and dishwashing. Just like most humans who have some knowledge about employment opportunities for the formerly incarcerated, the AI produces an answer that utterly ignores the idea of “professional.” Hell, try asking your probation officer what he would suggest and he would tell you, “well, just keep trying.” Gee, thanks! This kind of non-answer only makes sense, however, because the AI is really just scraping up human ignorance and misprision and repackaging it as easily digestible factoids; real intelligence might be able to think BEYOND that problem. I mentioned follow-ups earlier, and the generative AI format tries to offer some potential prompts that might anticipate my questions, but as you can also see, that’s a total failure. “What are formerly incarcerated leaders?” is a question that literally makes no sense; it might verge on the offensive if the formerly incarcerated were not generally thick-skinned. “Is it hard for former prisoners to get a job?” is a no-shit-Sherlock question for anyone who lacks complete naivete about the criminal justice system in America. The very presence of that question suggests that the AI is programmed to presume a level of human ignorance that makes its usefulness, at best, remedial. Just for laughs, let’s check out the AI’s response to that follow-up:
I said “laughs,” but maybe it’s not so funny. The AI gives a correct answer, but it’s so vague as to be pretty much useless. The response is also infused with a Pollyanna attitude (another aspect built into a lot of AI programming, but also a topic for different posts) that I find both infuriating and non-responsive. But, once again, let’s look at the presumptions. Because AI is built on the mathematics of probability, it sacrifices specific, contextual results for generalized ones. “Many ex-prisoners lack basic skills.” We humans will remember that that was not my question. Moreover, that’s a really problematic assumption and also a very unhelpful justification (at least for convicted felons) for why I can’t get a damn job. “Ex-offenders may not qualify.” This is presented as if it’s somehow the ex-offender’s fault. The applicant does not set the (arbitrary) qualifications for employment; that’s on the employer. Which makes it the very definition of the AI’s next “insight”: discrimination. “Legal Liability” is also a justification for discrimination, not a hurdle in its own right. That I got caught breaking the law in no way implies that my colleagues are NOT breaking the law themselves or that I will break the law again. Because I have been caught, prosecuted and, if the lip service we pay to prisons is correct, reformed, how am I more of a liability to employ than Mr. Wilson down in accounting who has been doing God-knows-what for God-knows-how-long unbeknownst to the legal department? Lawyers and politicians, go ahead and chime in here about liability law, but I’m pretty sure that we will eventually come to the conclusion that such laws don’t really prevent anything and are, on their face, more or less arbitrarily discriminatory. Like I said, a justification rather than a reason.
All that said, how is AI supposed to be helpful to real humans whose questions and searches are replete with ambiguity, uncertainty, irony, and undivinable intent? The real answer is that it’s not helpful unless you are willing to take its results as dogmatic truth, no matter how bad they are. So going back to the first set of results, let’s see how useful those algorithmic results are. “Life After Prison Success Stories” seems helpful and hopeful. The result gives six examples of success stories, but reading through them doesn’t do much to address my query. One is somebody who got hired by the very organization that educated him in prison. One is about somebody who was exonerated after serving his sentence, so he was always innocent but still got screwed; who exactly is going to make him whole? Three entries are about people who started their own businesses or non-profits, suggesting that working for somebody else is really going to be an impossibility. And one was hired by a company that proactively seeks to hire the formerly incarcerated. So we are in unicorn territory all around here, but AI likes a feel-good story, and it can’t read between the lines. Nor do AI companies train their beasts to read with sophistication, so I’m not sure whom this result benefits aside from people who want to feel good about improbable examples. Ironic for a technology that works via probability.
I could go on and on about the utility of the rest of the results to a formerly incarcerated professional, but you probably see where that would be headed (though would an AI see that probability?). Instead, I’ll simply mention two more things. The first is the result “Words Matter,” a subject I appreciate because my professional experience is in both language and diversity and inclusion. Yes, words matter, and this isn’t an unimportant topic, but it’s not without its problematic ironies. It has the problem of all style guides in that it provides rather prescriptive answers to purely contingent and contextual questions. More importantly, it seems to be written for and by people who want to know what to call me rather than calling me by name. Rebranding the category does not alleviate the problem created by the category itself. Again, we can debate this, but that’s a debate for another post.
The second thing I wanted to point out is another example of AI “hallucination.” That is, the AI is just making things up, seemingly to fill up space rather than to be genuinely helpful when judged by any reasonable human standard. If you have ever taught high school or college writing, you will recognize that this isn’t solely an AI problem, but it does point to the fact that AI often seems to work at the intellectual level of students who really do not understand the question at hand or cannot be bothered to put any effort into understanding it before they turn in their assignments. More specifically, I direct your attention to the “Related Questions” portion of our first screenshot here. Presumably, this is another probable follow-up to the initial query, and that seems plausible to me. However, the answer is shocking. “Famous criminals who have turned their lives around.” OK, so I’m not famous, and I can’t really turn my life around without meaningful (or even subsistence) work, so I’m not expecting a lot of personally satisfying insight here. It’s really more of a gossipy kind of question, but who can resist a bit of gossip? Well, it turns out we should all probably resist it when it comes from search algorithms. Who are these famous criminals (and I’m not sure that’s the proper term we should be using, based on the previous result)? Al Capone. Professional gangster! Went to prison at age 33, was paroled 7 years later due to syphilis of the brain, was refused subsequent medical treatment by Johns Hopkins because of his criminal background, and died in 1947 at age 48. Not sure how this responds even to the AI’s own question. Charles Manson. This answer responds only to “famous criminal” and nothing else. John Dillinger. Pretty much fulfills the same criteria as Manson. I suppose turning your life around could mean getting shot in the back by the cops. Frank Abagnale, the guy made famous by the movie Catch Me If You Can.
A famous con man who (kinda) claims to have reformed, but everything about his life history makes it seem as if he is still a con man, and if you ever served a minute in federal prison, that lack of transformation would totally verify your lived experience with many of your fellow inmates. Saying he turned his life around is like saying Trump became a public servant when he got elected President. Jeffrey Dahmer. So I guess murdered in prison is an AI synonym for turning your life around. And Chuck Colson. Probably the closest thing the AI gives to a legit answer to its own question. I’m just fascinated (and frustrated) that it takes public fawning over a Protestant Christian God to kind of qualify you for what I would call a correct answer on this list. If, in my former job, I had said that Capone, Dillinger, and Dahmer don’t qualify, would I have been punished for poor quality? If I had suggested that if you are going to put Chuck Colson on the list you had better put Malcolm X on it, too, how would that have flown? I’ll go you even one better. What if I had argued for Elijah Muhammad to be on that list? After all, we did our federal time at the same institution (though 75 years apart), and his crime was failure to register for the military draft. I mean, does one even really have to “turn one’s life around” from that offense?
And after all this, do I have any good leads on where a professional can find a job after being released from prison? No. But I can do blog posts for free.