Last week, I had my mind bent by Zuckerberg’s vapid pronouncements on “superintelligence” (including whether he would be able to recognize it if he saw it). Today I’m blown away by this vapid article that tries to summarize an equally vapid podcast (while they both try to appear profound): “Ex-Google exec’s shocking warning: AI will create 15 years of ‘hell’ — starting sooner than we think.” Because I value engaging with original sources, I started to watch the podcast that the article takes as its gospel. However, that podcast is 2.5 hours long. I have to balance my critical instinct and training against the emotional and intellectual costs of sifting through that many of Gawdat’s untested and largely unexamined claims. So what I’m going to do here is simply pick a few sections of the article and one section from the podcast and let loose on them. I’m doing this because my critique is aimed both at the manner in which these statements are uncritically amplified and go unquestioned and at their content. Taken together, they form a spectacle designed to resist real scrutiny. The combination of the endless podcast and the clickbait summary that reproduces it already tells us a lot about how tech narratives become so problematically dominant.
Gawdat predicts that AI is already leading us into a dystopian near future. In the podcast, he casts that dystopia as “inevitable.” It is, he suggests, beyond his control, even though he is one of the people who has developed that future and continues to develop it. Human futures never simply happen; they are human-made.
The article cites this as one of his primary insights: “Gawdat, 58, pointed to his own startup, Emma.love, which builds emotional and relationship-focused artificial intelligence. It is run by three people. ‘That startup would have been 350 developers in the past,’ he told Bartlett in the interview, first reported by Business Insider.” It seems scary, but is anybody going to ask about the fundamental assumptions and actions that back this up? I’ll give you a list of the ones that spring instantly to mind:
- How many people run your company is a choice, not a necessity. How and why did you make that choice?
- Running a company is not the same as working (would that it were). You may think it is, but your example shows that’s not the case. Do you know what solipsism means?
- Let’s stop trying to be seers and prophets and ask the real questions that are, perhaps, more dystopian than you would like: Of the 347 people whom you have “replaced,” what kinds of jobs did they hold, and what does that tell us about AI?
- What is “emotional” AI? Are you saying that you and two other people have reached a point at which machines have emotion? Or are you saying that you have developed algorithms that seem to “solve” human emotions?
We can simply start with those, and we might fill up those 2.5 hours in a more productive way.
We can also think about how the article more or less accurately reports this sentiment: “Instead of being ‘focused on consumerism and greed,’ humanity could instead be guided by ‘love, community, and spiritual development,’ according to Gawdat.” It’s still a “choice” we can make. True. Like your choice to get rid of 347 potential employees. Your choice to turn AI into a commercial entity, because I presume you’re not giving away emma.love for free. I guess that after everybody’s out of work, we can have a kumbaya moment, convened by someone like you who hasn’t lost anything. Does this seem grim? Sure. But I’m just following the logic of Mo’s vision. I’m not imposing my own.
Mo lets us know that his AI child is going to bring us pain, that there’s not anything we can really do about it at this point, but that he does have good news. In the podcast, he presents an acronym for things “we” (I’m not sure he means you and me, but I’m damn sure he means his tech bros) can redefine during and after the dystopia. He calls that acronym FACE RIP. Charming, right? Here’s what it means:
Freedom: Concentration of power in AI platforms could limit individual freedoms.
Accountability: AI decision-making may lack transparent accountability mechanisms.
Connection: Human relationships are changing as digital interactions increase, potentially leading to social isolation and widening economic disparities.
Economics: Traditional models of employment and value creation are being disrupted as AI automates tasks across industries, leading to job displacement and potential economic inequality.
Reality: AI-generated content and media could blur the lines between reality and artificiality, making it difficult to discern truth.
Innovation: AI surpasses human cognitive abilities in various domains, rendering human innovation moot (and maybe just wrong).
Power: Shifts in power dynamics where those controlling AI platforms hold immense influence.
Each one of those points merits a monograph in its own right. But let me reassure you that they were devised by a guy who literally wrote a book about how happiness can be achieved by following a formula. Kind of gives you pause about what emma.love is going to be, doesn’t it? But in the podcast, he leaves us with this utterly comforting bit of wisdom: “I started to take an active role in building amazing AIs. So. AIs that will not only make our world better but that will understand us, understand what humanity is through that process.” I couldn’t be more reassured.
Now STFU.
P.S. Here’s the podcast: Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell Before We Get To Heaven! – Mo Gawdat
