How scale beats skill, and fraud beats both.
Most of my regular readers already know this much: I have a PhD. I’ve published peer-reviewed research. I have multiple years of experience in some of the rarest and most rarefied fields of education. I mean, who else would take a line from Foucault and turn it into a blog title? I also did federal time.
So if you’re wondering what kind of job market that leaves me in, the answer is: I now annotate AI training data on gig platforms. I “teach” machines because teaching actual human beings is too dangerous. At least that’s what they tell me. My contributions get judged by people with anonymous usernames and no published work. Hell, old-school academic peer reviews are blind. You don’t know those names, either, but you’d be surprised how many times at, say, a conference, you get talking to a colleague from another institution. You begin gabbing about the work you have been doing lately, and one of you says, “Oh, I reviewed that paper for XYZ publication.” But in the AI gig world, the anonymity and lack of serious credentials are virtually guaranteed. Once in a while, you get matched up with some really good people who are also, for one reason or another, in need of this same kind of exploitative work. That’s fun — even satisfying — while it lasts, but it never lasts long. Here’s part of the reason why.
A classic con game is one where, as David Mamet wrote in House of Games, “Everybody gets something out of every transaction.” They might even get enough that they don’t even know they’ve been conned. Suckers never know; the witnesses don’t know; and best of all, the mark—the one whose resources were actually drained—thinks the whole thing fell apart because of inefficiency, bad luck, and honest mistakes.
The thing about doing time is that you get to meet some pretty slick con artists and some who are lucky they didn’t fall for their own deception. That’s why I feel I have some “expertise” in reviewing this bit of chicanery. It happened right under my nose on a data annotation platform where I’d been working for months. I won’t name the platform—because honestly, I think they’re still in denial—but I will tell you how the whole scam played out, because, like most scams, it’s beautifully instructive. Especially if you work in tech, trust experts, or believe scale is a neutral concept. It’s the kind of critical thinking that a human can perform but an AI can’t. It’s a hallmark of expertise.
This is not a complaint. It’s not even a judgment. It’s a report from the field.
The Bait: $100 an Hour
This particular project launched with a simple premise: the platform needed domain experts—people with high-level credentials and deep subject matter knowledge—to annotate complex content that would train some next-gen language model.
The pay was $100/hr, which is unheard of in this line of work unless you’re a sucker or an insider. Funny enough, this grift puts you in both categories at the same time, which is part of its beauty. That kind of money pulled in real talent—people with serious training, people who’d been cast off from conventional employment paths but still had real expertise to offer. It promised to put my expertise to work in the exact domains I had been trained in. I didn’t apply to them. They chose me from their existing talent pool. That doesn’t mean I wasn’t a sucker.
The work started out fine. Interesting problems. Complex enough to be satisfying. A lot of us did good, hard work—stuff that would absolutely help build better AI models.
You think about the questions seriously. You try to be creative. You think, if I walked into colleague A’s office with a question about this topic, what would I ask, and how would they respond? You can hear the conversations. You’re transported to that place. Then come the reviews.
The Wall: 2 Out of 5
Every contributor’s work was subject to “quality assurance.” This meant that someone else—a “reviewer” or “QA”—would look over your submissions and score them. Fine in principle. But soon, the pattern emerged.
No matter what the content was, no matter how rigorous, no matter how aligned it was with the platform’s own standards, the scores started coming back: 2 out of 5.
Always a 2. It has to be a 2 because a 1 means that the writers didn’t put any effort into it at all. It means it’s nothing. It means it’s pure spam. There’s a different kind of fraud where people sign up on these platforms to crank out huge volumes of pure spam, but those are easy to spot. They’re like someone trying to pick your pocket with one of those mechanical grabbers you use to get things off of high shelves. A 3 means it’s good enough to go forward. It might get sent back to you for revision, but it’s OK. 4s and 5s are no problem at all.
But a 2 means something else. It means something like, “You’re trying, but you just don’t get it. Bless your heart.” And if you get a few 2s in a row, that bureaucratic standard alone means you can be removed from the project. And that possibility of removal is a central part of the scheme. Of course, you are given the text of a review, and you can dispute the review. You’re welcome to say that the reviewer doesn’t know what the fuck they are talking about. But you can have only three ongoing disputes at any given time. Once you exceed that, you are put on pause and practically removed from work until these things get wrapped up. Plenty of legit experts did just that, but their disputes largely vanished into the void. You’d get no real explanation, no meaningful adjudication. You’d just find yourself locked out of the project, ghosted. Like you’d never existed.
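Here’s how little machinery the trap needs. What follows is a minimal sketch of the removal mechanics as I experienced them; the function names and thresholds are my reconstruction, not the platform’s actual code, but the shape is the point: a short run of 2s plus a cap on open disputes, and you’re out without anyone ever having to fire you.

```python
# Minimal sketch of the removal mechanics as I experienced them.
# The function, names, and thresholds are my reconstruction, not
# the platform's actual code.

MAX_OPEN_DISPUTES = 3     # exceed this and you're paused
REMOVAL_RUN_OF_TWOS = 3   # a few 2s in a row and you're out

def contributor_status(recent_scores, open_disputes):
    """Return 'active', 'paused', or 'removed' for a contributor."""
    if open_disputes > MAX_OPEN_DISPUTES:
        return "paused"               # frozen until disputes wrap up
    run = 0
    for score in recent_scores:       # oldest to newest
        run = run + 1 if score == 2 else 0
    if run >= REMOVAL_RUN_OF_TWOS:
        return "removed"
    return "active"

# A real expert under coordinated 2s: three in a row, and gone.
print(contributor_status([5, 4, 2, 2, 2], open_disputes=1))  # removed
# Dispute too much and you're frozen before you're ever removed.
print(contributor_status([4, 4, 4], open_disputes=4))        # paused
```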
Of course, a sucker always thinks it’s his own fault. A sucker, especially one hungry for work, is willing to give the benefit of the doubt, which is exactly what this kind of fraud depends on. But as more and more experts were dismissed with vague or nonsensical quality reviews, a few of us started publicly complaining and comparing notes. And that’s when things started to stink.
The Choke Point
The benefit of the doubt still rules in those discussions, but there’s a pattern that emerges from the nearly identical nature (and even phrasing) of all the complaints that leads me to a non-beneficial conclusion. Here’s what I see, and I can thank the criminal justice system for teaching me to recognize the rules of this confidence game.
A group—probably a small, organized association of contributors—had figured out how to game the system. They weren’t aiming to contribute high-quality expert work. That was just the front door.
They came in posing as experts (maybe a few were, maybe not), and they did just enough decent work to get promoted. Because here’s the trick: on this platform, if you do a handful of good tasks, you get the chance to become a reviewer. Supposed to be an expert in a particular field? Don’t worry about that. You can feed your work into an AI chatbot, and it will give you the veneer of competence that will keep you going. After all, the people who are “above” the contributors and reviewers in the hierarchy aren’t experts in fields other than, for the most part, computer and data science. How are they going to check what you have to say about the structural properties of cantilevered roofs unless they plug it into ChatGPT simply to make sure it’s not total bullshit? Moreover, nobody is going to check to see if you really possess the educational credentials you claim to have. Nobody is going to dive into your publication history or professional experience. The platform runs an AI-driven screening tool over whatever resume you sent them. That AI tool, which can be truly horrible in terms of quality (and people who have applied to jobs within the past 5 years will know exactly what I’m talking about), sorts you into available projects and roles based on nothing but your claims. Platforms call that “efficiency.” Scam artists call that “opportunity.”
Common AIs themselves tell legitimate job seekers how to exploit this. They suggest that “you may not know the exact weights in [the hiring company’s] proprietary algorithm, but you can flood it with every keyword, phrase and data point they’re looking for” in the job description. Polish up your resume for an HR AI bot, and it hands you the keys to the house. The AI will even do the polishing for you. Just give it a copy of your résumé and the text of the job description. My cellie in the feds, who was about 15 years into a 20-year stretch, used to remark, “In prison, you can be whoever you want.” The same is true in the data gig economy.
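To make that advice concrete, here’s a toy version of the kind of keyword-overlap screening it exploits. I’m not claiming any vendor scores resumes exactly this way; this is the failure mode in miniature: rank a resume by how many of the job description’s terms it parrots back, and parroting beats substance every time.

```python
# Toy illustration of keyword-overlap screening, not any real
# vendor's algorithm: score a resume by the fraction of the job
# description's terms it contains. Stuffing maxes the score.

import re

def keyword_score(resume: str, job_description: str) -> float:
    """Fraction of job-description terms appearing in the resume."""
    terms = set(re.findall(r"[a-z]+", job_description.lower()))
    words = set(re.findall(r"[a-z]+", resume.lower()))
    return len(terms & words) / len(terms)

job = "structural engineer, cantilevered roofs, load analysis, PE license"
honest = "PhD, ten years designing long-span roof structures"
stuffed = "structural engineer cantilevered roofs load analysis PE license"

print(keyword_score(honest, job))   # low, despite real expertise
print(keyword_score(stuffed, job))  # 1.0, despite saying nothing
```

The honest resume scores near zero because it describes the work instead of echoing the posting; the stuffed one scores perfect because echoing is all it does.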
The really fucked up thing is that, to the non-experts who are supposed to be reviewing the initial work and deciding who gets to become a reviewer in the first place, an AI-generated task might even seem more palatable than one generated by a human expert. The AI gives you something that’s simple to follow and neatly ordered. That’s what it’s optimized to do. The human expert is going to be messy, something that relies on judgment and circumstance, and computer folks don’t really know how to program that. The point for the scammer is to do just enough to keep going through the initial level (remembering, of course, that you don’t really need to do that much and that you’re still getting $100 an hour) and, by God, get promoted to the reviewer level.
Once You’re Reviewing, You Control Everything
This group seeded its own people into reviewer roles. Once they had enough of those roles, they controlled who passed and who failed. If you were a real expert doing actual thoughtful work—well, your answers didn’t align with their needs. You were failed, scored 2 out of 5, and removed.
Meanwhile, the fake experts were passing each other. Over and over.
From the outside, it looked like high-performing reviewers were approving fast, clean, consistent work. From the inside, it was a closed circuit of quality laundering.
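You don’t need a conspiracy board to see how fast this plays out. Below is a back-of-the-envelope simulation; every parameter in it is invented, but the dynamics aren’t. Crew reviewers give their own a 5 and everyone else a 2, honest reviewers score roughly fairly, a few 2s gets you removed, and promotions backfill empty reviewer seats.

```python
# Back-of-the-envelope simulation of quality laundering. All
# numbers are invented; only the dynamics matter. I'm simplifying
# removal to "three 2s total" rather than consecutive.

import random
random.seed(1)

workers = {f"expert_{i}": "honest" for i in range(40)}
workers.update({f"ringer_{i}": "crew" for i in range(10)})
strikes = dict.fromkeys(workers, 0)
honest_names = sorted(n for n, kind in workers.items() if kind == "honest")
reviewers = random.sample(honest_names, 8) + ["ringer_0", "ringer_1"]

for _ in range(12):                                  # twelve review cycles
    for name in list(workers):
        rev = random.choice(reviewers)
        if workers.get(rev) == "crew":
            score = 5 if workers[name] == "crew" else 2  # pass your own
        else:
            score = random.choice([3, 4, 5])         # fair-ish honest review
        strikes[name] += score == 2
        if strikes[name] >= 3:                       # enough 2s: removed
            del workers[name]
    reviewers = [r for r in reviewers if r in workers]
    # the crew games promotion hardest, so it backfills empty seats
    bench = [n for n, kind in workers.items()
             if kind == "crew" and n not in reviewers]
    while len(reviewers) < 10 and bench:
        reviewers.append(bench.pop())

print(sum(kind == "honest" for kind in workers.values()), "experts left;",
      sum(kind == "crew" for kind in workers.values()), "ringers left")
```

The crew never takes a hit, because nobody who would fail them ever reviews them. The experts fall at whatever rate the crew’s share of reviewer seats allows, and every expert who falls hands the crew another seat.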
Silence from Above
As I said before, people did notice this fairly quickly. Experts may be suckers, but they’re not stupid. But most folks on this platform don’t have domain expertise in fraud and confidence games. They didn’t know that, were they reviewers, they would have been right to give the platform a score of 2 for administration, but the scammers a score of 5 for what they were pulling off.
People flagged bad reviews, submitted disputes, and emailed the QA managers and platform reps directly. When people started to complain, a warning went up on the project’s internal message board: “Do not ever DM a quality manager with a problem unless they directly solicit that DM.” Now you could get a 2 without even doing any work at all! Communication with the platform broke down, but that’s good for the con artists.
People stopped responding. The project forums went dark. Tickets went unanswered for weeks. Some contributors were still active, but they reported that the number of available tasks had plummeted. Tasks, when they appeared, were snatched up within seconds—likely because the reviewers were feeding them to their in-group or grabbing them with scripts.
The project itself, which had once proudly declared its commitment to expert labor and high pay, had become a ghost town of expertise. No official explanation. No follow-ups. Just data tumbleweeds.
The Real Trick: Conning the Platform, Not the Worker
Here’s the part I find fascinating.
This wasn’t just a grift against the other contributors. It was a confidence scam run on the platform itself.
The fraudsters exploited the platform’s desire for scalability, its weak verification systems, and its faith in internal hierarchy. The platform thought it was being efficient. It had reviewer consensus! Automated promotion! Transparent metrics!
What it really had was a compromised reviewer layer acting as a gatekeeping cartel.
Even now, I bet the platform thinks the main problem was too many low-performing experts and not enough dispute adjudication. They’ve probably moved on to some new ID verification system or automated reviewer training. But those tools don’t stop a coordinated crew from walking in with new, legit-sounding IDs and doing it all over again.
That’s the beauty of scale: fraud scales faster than data. If I wanted to make a fortune passing counterfeit money, I wouldn’t walk around trying to get change for a pack of gum with phony $5 bills. No, I’d enlist a whole bunch of people to pass out fake $100s for me. I’d also make sure that I have some store clerks paid off to accept the funny money. Everybody would get a cut. I’d get the lion’s share and never touch a bill. By the time the whole thing collapsed, I’d have moved on somewhere else.
Epilogue: Dialing for Data Dollars
This isn’t the first con I’ve seen, but it’s damn well one of the cleanest.
No one ever touches the money directly. There’s no single villain. Everyone has plausible deniability. The victims (us) blame the platform. The platform blames attrition, QA overload, or “quality variance.” The scammers get paid and move on. The platform can’t admit that they got taken. If they did, what would happen to all the venture capital they’re hoping will come pouring in? The client can’t admit they just got sold shitty data; they have to worry about the same things as the platform.
And me? What’s a gig worker going to do? The platform says it never happened, so when I say it did, it’s just another reason to show me the door and replace me with a cheaper faux expert. It’s a perfect response that goes virtually unnoticed in the age of gaslighting. I can take scraps of pay from whatever projects I can get, and I get another reminder that in the world of gig work and tech infrastructure, being good at what you do isn’t enough. You also need to navigate a system that doesn’t understand the difference between expertise and expediency. Or worse, doesn’t care.
This isn’t just a scam—it’s a masterclass in labor infiltration. It runs on structural opacity, scale-for-the-sake-of-scale thinking, a make-believe meritocracy where promotion is just a throughput metric, and platforms that conceive of fraud as something individuals do, not something networks coordinate. It’s an elegant act of critical thinking, something that AI can’t perform, and something that, it turns out, the AI-minded may have some difficulty with.
P.S.– In the spirit of 2025’s fraud-not-fraud ambivalence, I first whipped up a human draft of this, then fed it to an AI, then did another human draft of it. Bonus points if you can tell which part is which. Where would this narrative be without some sort of meta-irony?
