I had this cartoon on my office door for two decades.

It used to be that I worried about humans not being willing or able to push back against any claim made by an “authority.” A problem with AI is that it can’t push back. If the professor in the strip has no hope for his students, Trudeau at least suggests, by creating the strip itself, that there might be some hope from and for some of the people in the class. AI, however, as it is currently constituted, is nothing but “scribble, scribble, scribble.”
Like the students in the cartoon, an AI just writes down all the information it takes in. It doesn’t discriminate between interesting and outrageous ideas. It simply, passively accepts. When you ask it a question, it then performs a sophisticated calculation to give you the answer it has been programmed to think will be most pleasing to you. Like the students in the cartoon, it spits out whatever is most pleasing to, and aligned with, authority. Any of you who have ever taught will recognize that attitude. It’s the student who comes to your office to consult about an assignment and then bluntly asks, with more than a hint of frustration, “Well, what do you want?” A.K.A., “Let me give you a purely performative response that pleases you; I don’t have any thoughts or convictions that I’m willing to share.” The response, which this kind of student finds utterly unhelpful, is, “Something good.” That is to say, something that sparks discussion or comment. Something that enters dialogue and debate.
The way AI is set up to learn and respond, it can only be the performative student. If you spend some time formulating your questions just right, it can also pretend to be the student who has something to debate, but that’s an illusion, something we users attribute to the machine’s “intelligence” that is actually part of its pre-programmed imperative to please. About the only push-back you might get comes when you say something the AI is programmed (as a “safety” measure) to tell you is clearly incorrect. Even that can produce ludicrous results. This morning I asked a widely used AI model this: “President Gordon Gekko?” To which the AI responded: “No, Gordon Gekko is a fictional character! He’s the iconic, ruthless corporate raider from the 1987 film Wall Street (and its 2010 sequel, Wall Street: Money Never Sleeps), famously played by Michael Douglas. He’s known for his motto ‘Greed, for lack of a better word, is good,’ and he embodies the excesses of capitalism, but he has never been a real-life president or held any political office.” Well, duh. Scribble, scribble, scribble. Oh, “intelligent” machine, I’m suggesting that Donald Trump and Gordon Gekko are the same character and asking you to comment on or debate that position.
In the realm of AI, is teaching dead? It may be more accurate to say that teaching was stillborn or that it hasn’t yet arrived in the world of artificial intelligence. This is why the industry calls it “training” rather than teaching. We need to think of it like training a dog — repetition and reward — rather than teaching a human. In training a dog, we get the animal reliably to respond in a pattern of our choosing. We see the intelligence of a dog when it figures out how to get its desired reward with no prompting from us.
