While mowing the lawn yesterday, Donald Fagen’s song “I.G.Y.” popped up on my playlist. If you don’t know the song — or just want to hear it again — you can listen to it here. It got me thinking about what an AI might say about it, so I decided to ask a commercially available model.
Me: Give me a close reading of this text: “I.G.Y.”
Standing tough under stars and stripes, we can tell
This dream’s in sight
You’ve got to admit it
At this point in time that it’s clear
The future looks bright
On that train, all graphite and glitter
Undersea by rail
Ninety minutes from New York to Paris
Well, by ’76 we’ll be A-OK
What a beautiful world this will be
What a glorious time to be free
What a beautiful world this will be
What a glorious time to be free
Get your ticket to that wheel in space while there’s time
The fix is in
You’ll be a witness to that game of chance in the sky
You know we’ve got to win
Here at home we’ll play in the city
Powered by the sun
Perfect weather for a streamlined world
There’ll be spandex jackets, one for everyone
What a beautiful world this will be
What a glorious time to be free
What a beautiful world this will be
What a glorious time to be free
On that train, all graphite and glitter
Undersea by rail
Ninety minutes from New York to Paris
(More leisure for artists everywhere)
A just machine to make big decisions
Programmed by fellas with compassion and vision
We’ll be clean when their work is done
We’ll be eternally free, yes, and eternally young
What a beautiful world this will be
What a glorious time to be free
What a beautiful world this will be
What a glorious time to be free
I will give the model props for having a decent response. It got the song, artist, and publication details correct. It was good at pulling out lines from the lyrics that I would agree are interesting. Because it’s an AI, it does say unfortunately clichéd things like “‘I.G.Y.’ is a masterpiece of layered meaning,” but it is, alas, programmed to sound as if you’re cheating on your high school homework. I was especially interested in how it would gloss Fagen’s lyrical take on technology in these lines:
On that train, all graphite and glitter
Undersea by rail
Ninety minutes from New York to Paris
(More leisure for artists everywhere)
A just machine to make big decisions
Programmed by fellas with compassion and vision
We’ll be clean when their work is done
We’ll be eternally free, yes, and eternally young
A bonus would be to hear what it had to say about spandex jackets for everyone.
Here’s what the AI had to say about these things:
AI:
Bridge/Interlude (Repeating earlier train imagery with an addition):
- “On that train, all graphite and glitter / Undersea by rail / Ninety minutes from New York to Paris / (More leisure for artists everywhere)”
- The parenthetical “(More leisure for artists everywhere)” is significant. It suggests that technological advancement will free humanity from drudgery, allowing for cultural and artistic flourishing. This is a classic utopian ideal.
Stanza 4: The Benevolent Machine and Eternal Promises
- “A just machine to make big decisions / Programmed by fellas with compassion and vision”
- This is perhaps the most pointedly ironic line for a modern listener, and even for Fagen writing in 1982. The idea of AI or supercomputers making societal decisions.
- The faith here is not just in the machine, but in the benevolence and wisdom of its creators (“fellas with compassion and vision”). This highlights the naiveté inherent in such trust.
- “We’ll be clean when their work is done / We’ll be eternally free, yes, and eternally young”
- “Clean” could mean morally pure, environmentally pristine, or even cleansed of imperfections.
- The ultimate promises: eternal freedom and eternal youth – overcoming the fundamental limitations of human existence. This elevates the dream to an almost religious level of salvation.
As for the spandex jackets:
AI: “Spandex jackets, one for everyone” – A humorous, specific detail that grounds the grand vision. It points to futuristic fashion and a hint of mass-produced uniformity, but presented as desirable.
As you can see, not bad. (The model is a bit flummoxed about how to deal with the Cold War context of the song, but that would have to be another post.) But what seems a little off to me is how the AI’s response disclaims precisely the kind of future that its techno-optimist developers and marketers frequently offer. What’s most telling to me is that the machine locates the faith in machines as mainly a relic of the past:
AI: Surface Level: It’s a celebration of the boundless optimism and technological utopianism of the post-WWII era, particularly the late 50s (the I.G.Y. period) and the dreams projected for the future (e.g., 1976).
Nostalgia: There’s an affectionate, nostalgic quality. Fagen isn’t necessarily mocking these dreams harshly but looking back with a wistful smile at their earnestness and grandeur.
Irony & Gentle Satire: Knowing Fagen’s typically cynical and sophisticated lyrical style, and considering the song was released in 1982 (well after many of these dreams hadn’t materialized as envisioned by ’76), a layer of gentle satire is undeniable. The naiveté of believing in “a just machine programmed by fellas with compassion” or achieving eternal youth through technology is highlighted. The song subtly points out the gap between these optimistic projections and the more complex reality that unfolded.
Commentary on Progress: It explores the very human desire for a perfect future and our tendency to place immense faith in technology as the vehicle for that perfection. The song captures a specific historical moment of such faith.
That is, it can identify what its training data identifies as irony, but can’t read its own output as ironic. English teachers in the audience might recognize this as the trope of the “Ironically Sincere First-Year Essay.” It’s the paper that’s supposed to be serious and then proceeds to trip over its own tone. That’s more or less how the model is optimized to respond to users, which means that AIs are often useless at first pass for users who want more depth. You really have to demand depth, and spell out what depth looks like, to get it from your average AI. In essence, you might more efficiently spend your time doing the work yourself.
Still, I have to disagree that this lyric represents only “a specific historical moment of such faith” in technology. So I posed a new question:
Me: But is there a significant difference between the perspective of the era the song is about, the era when it was written and today?
AI: Yes, absolutely….
Mid-Century: Hopeful, utopian, trusting in benevolent technological salvation.
Early 1980s: Nostalgic for that lost innocence but aware of the unfulfilled promises, leading to a knowing, ironic distance.
Today: Witnessing technological hyper-acceleration but with profound anxiety about its consequences, making the song’s naive optimism seem both quaint and a stark contrast to our complex present. The “beautiful world” is far from guaranteed, and “freedom” is a concept we continually redefine and fight for in new technological contexts.
The song acts as a time capsule, and its meaning evolves as the listener’s own present changes, further highlighting the gap between futuristic dreams and lived realities.
I don’t mind the courage of its “conviction” here, but there’s still something missing, namely the kind of reflective self-awareness that helps produce truly intelligent responses. Hey, I’m not saying that we humans have that in abundance, but it’s usually part of a good answer to an open-ended question. So, I go back in for another pass.
Me: “The idea of ‘fellas with compassion and vision’ programming our future benignly seems almost dangerously naive.” But this is exactly what AI alignment seeks to be, so how is it questioned? And it seems that “spandex jackets, one for everyone” is predictive of our faith in consumerism.
AI: You’ve hit on two excellent points that deserve more nuanced discussion!
1. “The idea of ‘fellas with compassion and vision’ programming our future benignly seems almost dangerously naive.” But this is exactly what AI alignment seeks to be, so how is it questioned?
You’re right that the goal of AI alignment is precisely to ensure AI is developed and programmed beneficially, guided by human values that one might broadly term “compassion and vision.” However, the difference lies in the context, assumptions, and the awareness of difficulty:
- Song’s Era (Implied Faith): In the context of “I.G.Y.,” the phrase “programmed by fellas with compassion and vision” is presented with a breezy, almost unquestioning confidence. It implies:
- Simplicity: That identifying and programming “compassion and vision” is straightforward.
- Unanimity: That there’s a universally agreed-upon version of these values.
- Inherent Goodness: That the “fellas” (presumably scientists, engineers, leaders) are inherently well-intentioned and their biases (if any) are not a concern.
- Guaranteed Outcome: “We’ll be clean when their work is done” suggests a definitive, successful endpoint.
This is where the “dangerous naiveté” comes in from a modern perspective. It reflects a mid-century techno-optimism that often overlooked the complexities of human values and the potential for unintended consequences.
- AI Alignment Today (Recognized Challenge): The entire field of AI alignment exists precisely because we now recognize this is an incredibly difficult, perhaps even existential, problem.
- Complexity: How do you define “compassion” or “human values” in a way a machine can understand and act upon without perverse instantiation? This is a monumental philosophical and technical challenge (the “alignment problem”).
- Whose Values?: Whose “compassion and vision” gets programmed? Western values? Corporate values? The values of a small group of developers? This is a huge area of debate concerning fairness, bias, and representation.
- Human Fallibility & Bias: AI alignment researchers are acutely aware that programmers have biases, and that even well-intentioned efforts can lead to harmful outcomes if not rigorously scrutinized. The “fellas” are no longer seen as inherently infallible.
- Ongoing Struggle, Not a Solved Problem: AI alignment is an active research area full of profound questions and few definitive answers. There’s no sense that “their work is [nearly] done.” Instead, there’s often a sense of urgency and concern.
So, while AI alignment aspires to what the song naively describes, it does so with a profound awareness of the immense difficulties and ethical quandaries involved. The song’s line is questioned today because it glosses over these very difficulties that the field of AI alignment is now dedicated to tackling. The “naiveté” isn’t in the goal but in the assumed ease and purity with which it could be achieved, as portrayed in the song’s optimistic historical lens.
I didn’t include what it had to say about consumerism, and won’t discuss the pre-programmed ass-kissing that often precedes any substance in a response. There’s enough here to think about just in terms of the false binary distinction the answer relies on to separate past from present. It’s as if implied faith can’t coexist with recognized challenges. Part of that stems from the AI’s inability to truly understand irony. It’s always important to remember that an AI is just guessing at the “most satisfactory” answer that it’s programmed to give the “most likely” user. And that answer “needs” (according to the developers with passion and vision) to be fairly simple. It’s also interesting that while the model praises Fagen’s irony in its initial response, it can’t reason that “fellas with compassion and vision” is itself an ironic assertion; to create a simple answer, it must take the line literally, which (ironically) leads the AI to contradict itself. The entire song is about “human fallibility and bias,” but such things do not slow techno-optimism. In fact, they may be the central tenets of techno-optimism; it’s just that techno-optimists need to disclaim them. I find it instructive to see the rhetorical sleight of hand at work here, too:
AI: Whose “compassion and vision” gets programmed? Western values? Corporate values? The values of a small group of developers? This is a huge area of debate concerning fairness, bias, and representation.
These are not open-ended or undecided questions. Nice try, AI. For me to be able to use this model at all, these questions must already be decided. As much as I would like to interact with an amoral AI, none of us are allowed to do that. Anybody who has trained AI also knows that humans in the loop are frequently given specific guidelines about what may or may not be included in both prompts and AI responses. I complain all the time about the passion and vision that gets built into these systems, but nobody listens. Therefore, I must reinforce pre-existing notions of compassion and vision. Most importantly, the fact that these problems have NOT been solved has nothing to do with the belief that they CAN be solved. The whole point of AI alignment is to “SOLVE.” There is a non-neutral vision of the future that by ’76 we’ll be A-OK. It’s just that it’s 2076 and not 1976. AI is a kind of spandex jacket.
And just think about the premise of the response, “We’re so much more on top of things now than we were in the past.” That’s the crux of the song itself. It’s funny how we, and by extension our AI models, can mistake sameness for difference when it suits our purposes. Of course, my quasi-Socratic pushback on the AI’s response is also totally useless. It’s not as if my end of the “conversation” can do much to alter future outputs. The AI would have to be programmed by fellas with compassion and vision to accept that kind of feedback; it’s not. It’s simply programmed to appear as if it might be open to that type of feedback, adjusting its answer to be more amenable to what it calculates as my desires.
What a beautiful world this will be when I can feel smart because of statistically optimized responses. What a glorious time to be free from having to think about compassion and vision because fellas with compassion and vision have already done the work for me.