“Fat, drunk, and stupid is no way to go through life, son.” — Dean Vernon Wormer, Animal House
This is a list of allegedly august personages:
Robert F. Kennedy, Jr., Secretary of Health and Human Services – Chair
Vincent Haley, Assistant to the President for Domestic Policy – Executive Director
Brooke Rollins, Secretary of Agriculture
Scott Turner, Secretary of Housing and Urban Development
Linda McMahon, Secretary of Education
Douglas Collins, Secretary of Veterans Affairs
Lee Zeldin, Administrator of the Environmental Protection Agency
Russell Vought, Director of the Office of Management and Budget
Stephen Miller, Assistant to the President and Deputy Chief of Staff for Policy
Dr. Kevin Hassett, Director of the National Economic Council
Dr. Stephen Miran, Chairman of the Council of Economic Advisers
Michael Kratsios, Director of the Office of Science and Technology Policy
Dr. Martin Makary, Commissioner of Food and Drugs
Dr. Jayanta Bhattacharya, Director of the National Institutes of Health
They are the members of the Make America Healthy Again (MAHA) Commission, and they slapped their names on their committee’s report right below Donald Trump’s bloated signature. Look at them: they don’t appear fat, drunk, or stupid, but they do seem to rival the members of Delta House in terms of intellectual achievement. Why would I say such a nasty (as the President would have it) thing? Maybe you have seen the news from yesterday: “RFK Jr.’s MAHA report cited nonexistent studies.”
The media seem obsessed with the fact that the report cites sources that don’t exist. They correctly ascribe that error to the use of AI in composing this supposedly foundational government document. When you ask an AI model to cite sources, it frequently invents them. Developers act as if this were a bug in the system, but it’s a feature of how they have constructed it. Because AIs are commercial products, they are optimized for consumer satisfaction, and “satisfaction” is generally defined as a single output produced from a single user input. The assumption behind that is that most users don’t want to be bothered with going back and forth with the AI; they want a single, “correct” answer, and they want it immediately. Rather than coming back and asking the human, “What kind of stuff do you want?” or admitting, “I have no idea what you are talking about,” AI models are directed to create the appearance of thorough research and logical thinking, whether they have done those tasks or not. In short, what most satisfies users is that they don’t have to research, think, or write at all. Playing to that intellectual laziness is what earns a model high scores.

It’s not much different from how, in the olden days, internet search engines optimized for the most popular results to appear in the first three positions. Most people don’t even read search results to the bottom of the page, let alone click through multiple pages of results. An entire industry built up around that practice: Search Engine Optimization, or SEO. Its entire purpose is to get your webpage to the top of the search results, whether the content of your page sucks or not. It’s even built (thanks to AI) into the platform I’m using right now. For example, the SEO function on this platform tells me my title is faulty because it scores only a 55 out of 100 on its magic SEO scale.
I could use their AI to rewrite it, but I’d have to pay extra.
My point, however, is not that the MAHA Commission Report cited nonexistent sources multiple times. My point, to use a health metaphor, is that this type of error is a symptom of a disease, not the disease itself. The report makes this mistake because the entire report was generated start to finish by an AI model (likely a version of ChatGPT, based on the reporting I have seen). This wasn’t a mistake in citation style or formatting. It’s not as if someone had done all the research, had a stack of really good sources sitting on their desk, and simply made transcription errors when typing them into a draft report that the AI then “tightened up” for grammar and clarity. No. This is the sign that the people who requested the report (I was going to say “generated” the report, but that’s not really accurate) did no research at all. That is made all the more evident by the fact that they published the MAHA report with these mistakes in the first place. Nobody on that commission bothered to look at what was produced in their name, or if they did, they failed to recognize the bullshit being published under it. Either option is possible only if you haven’t done any homework at all.
Of course, no commission member actually wrote or researched anything. That’s something they delegate to staff. I’d argue that’s hardly an excuse, however; rather, it’s evidence of multiple layers of incompetence. Incompetence because every person on that list seems to have been “satisfied” with the result. Incompetence because nobody jumped in and said, “Hmm, I’d like to take a closer look at Study X,” only to discover that there is no Study X. Incompetence because of the presumption that nobody competent (or even just curious) ever looks at footnotes and citations, even though that is exactly what subject experts spend much of their time doing. Incompetence because the report is intellectually dishonest, not in the sense that it made up sources but in the sense that it doesn’t care to wrestle with facts and perspectives. Some staffer took notes at a commission meeting where it was determined that “we need a report of A length, that makes B, C, and D points, and offers E and F solutions.” The staffer went to ChatGPT, typed in those instructions, and voilà! The MAHA Report! It seems plausible, and who’s going to take the time to check, anyway? Certainly not the members of the commission in charge of it.
Where’s DOGE when we need them? Waste, fraud, and abuse? Look no further. The commission wasted time and money putting out a fraudulent report that’s an abuse of government power and the public’s trust. Yet, as I know too well, if there were no cognitive dissonance, we wouldn’t have a government at all.
