I
We’ll start in a cubicle. It’s ten paces from a window overlooking a parking lot that always seems to be at quarter capacity. The lot is bordered to the east by a highway, and on the other side of the highway, there are strip malls and assorted mainstays of suburbia: Advance Auto Parts, Chinese food buffets, medical clinics that used to be something else. My daily journey to the QuickChek for iced black coffee and Camel filters starts in the cubicle, hits peak excitement at the threshold of Route 9W, and ends where it began. I time it to exactly twelve minutes.
The cubicle is on the ground floor of a corporate compound the locals used to call tech city. IBM commissioned it in the 1950s, and for a while, seven thousand workers clocked in daily to lay the groundwork for the digital revolution. The building and surrounding acreage were sold off in 1998, and in 2012, tech city houses a regional branch of Firstsource, a Mumbai-based company which serves the growing market for outsourceable data processing gigs.
The late crew arrives at three and leaves at eleven. We get forty-five minutes for dinner plus two fifteen-minute breaks. When we’re not on break, our eyes are glued to health insurance claim forms. We are data entry operators, human OCR machines, cognitive muscle hired to give digital workflows the false sheen of full automation. The job’s only qualifications are grade-school facility with the Latin alphabet and the ability to sit still for eight hours. Our work is monitored through keystroke-logging software that runs on the desktop PCs, which means we can’t spend too many seconds not typing, trips to the bathroom included. At the start of every shift, a manager sends out fun riddles to our screens.
Imagine, if you will, me at twenty-two, ruthlessly critical of everything existing, self-consciously clever and even vivacious among a small group of friends, but for the most part lonely and terrified about my career prospects. I had a humanities degree and a dietary program for the severely depressed: unregulated diet pills, coffee at all hours, nutrition bars selected for their protein-to-calorie ratio. Cross-legged on a sinking mattress, I drank vodka, straight, no chaser. The late schedule suited me.
In tech city I’m vaguely conscious of the walls between my thoughts and anything that feels like a world. My skull, the computer screen, my laminate-paneled triptych, the permanently sealed windows. Once or twice, I break down sobbing in front of all that HIPAA-protected information. Music drifts from my iPod, prog rock from 2003, overwrought and deeply uncool but it moves me…
She carries me through days of apathy
She washes over me
She saved my life in a manner of speaking
When she gave me back the power to believe
I keep looking for an exit, keep feeling around for the unassimilable threads in my life. On weekends I meet queers and musicians. I play piano and read philosophy.
But data knows nothing it can't assimilate, no ritual for invoking novel states of consciousness or futures that look different from the present. Two months into the job, I start falling asleep only to be plunged into a phenomenologically flat universe: medical claims on an infinite scroll. These are meaningless sequences of numbers and letters punctuated by single-word descriptions for qualifying conditions (“bereavement”) and names of towns I’ve never heard of. All of life’s dimensions collapse into two axes.
That’s my workweek for the better part of eight months. Wake, coffee, drive, music, biometric ID scan, data, ephedra, coffee, music, data, aspartame-sweetened meal replacement, data, Stoli, sleep, data. One night I’m pulled over on suspicion of drunk driving: low blood sugar and poor sleep have made me a menace to society. The whites of my eyes turn gray.
II
The next scene is in 2025. I’m looking for a job that’s better than being unemployed, meaning anything less awful than the one I just resigned from. I apply to an organization devoted to mitigating AI x-risk, that is, the apocalyptic danger that’s said to inhere in superintelligent systems. Most accounts of “superintelligence” refer to AIs that can do everything humans can, but faster and smarter by several orders of magnitude. The logical conclusion of the “superintelligence” framing is that AI cannot be understood or managed the same way we understand and manage other technologies. It also implies that AI will make human intelligence superfluous.
I think x-risk discourse is disingenuous, not much more than cynical marketing for the AI industry, since tech that’s powerful enough to eliminate the need for human cognition tends to attract a lot of investment. I am considering this as I fill out the application. I don’t want my name on the kind of content I’d be producing for this company. It’d be embarrassing, a blight on my professional image. But I am unemployed in a collapsing economy, and the possibility of having a remote full-time job whose salary range matches my last one sounds okay. Eventually I’d quit and deal with the reputational damage.
I get far enough in the multi-phase hiring process to qualify for a poorly compensated “work trial.” If the trial is successful, I’ll be offered a full-time job. I knew it would be stressful, but I didn’t bank on my dreams getting colonized again. As a teenager I had recurring nightmares so grotesque I’d feel bad describing them here. Experientially, these are even worse.
Here’s what happens this time around: I fall asleep only to see Google docs filled with differently worded summaries of the same arXiv preprint, each one stacked with notes from an editor who wants me to distill their meaning without compromising technical detail. Then there are side-by-side comparisons of nearly identical algorithms; speculations on which benchmarks are most useful for evaluating reasoning models; and anxious comments to the editor re: which forms of jargon I need to disambiguate and which I can expect our readers to know. Right before waking, I delete those comments for fear of seeming incompetent. I decide to do that in real life, too.
III
I’d love to identify the organization, but I signed a confidentiality agreement. That’s why this section is light on specifics.
The trial was originally set to last for two weeks. They didn’t tell me much about my assignments ahead of time. I knew I’d be writing about AI at a fairly high level of expertise. I also figured I’d be subjected to philosophical scrutiny. The organization is mission-driven but “not ideological,” as they put it, which is a red flag to people with my political sensibilities. I wondered if I could use my fluency with theory-informed tech criticism to defang weaker ideological arguments against x-risk discourse.
In other words, I was considering selling out. Not only am I not “not ideological,” I’ve written for overtly ideological publications. Maybe they hadn’t found those articles when my trial started, but I figured they would eventually. To play it safe, then, I’d have to do the shameful thing. I’d have to AI-pill myself. It wouldn’t be enough to understand their point of view in a detached sense; I had to put my heart in it. I was worried about coming across as too ambivalent otherwise. Or as a crypto-Marxist. I couldn’t take that chance.
I read white paper after white paper, wrote interview questions, and conducted the interviews with a smile. I annotated the interview transcripts and spent countless hours in Google docs. All the while, I kept asking myself why I’d been so confident in my Frankfurt School and French theory-inspired critiques of big tech. Maybe because it’s hard to break with the views you learn in grad school. That was a good thought to go with, since I have a contrarian streak. I could move past my past! I’d just turned my back on academia anyway!
In an age of “degraded democratic publics” — Henry Farrell’s excellent conceptualization of how we misunderstand other people’s worldviews — any change in perspective can be experienced as a major awakening. In truth, my AI-pill trip was not that extreme, even though it felt like it. There was a moment when I started to seriously consider that large language models might understand language better than human beings, in part because an engineer told me so. In order to accept that claim, I had to suspend philosophical judgment. Normally I’d think pretty hard about what “understand” means in the context of “LLMs understand language better than humans.” It was a big deal for me not to do that. Ultimately, I can’t accept that statement as coherent; there’s no meaningful sense in which any form of AI “understands” anything. But it was interesting to try to think otherwise.
Another viewpoint shift that felt like a big deal at the time: I’ve been curious about the behavior of reasoning models, or generative applications (like ChatGPT) that reveal the intermediate steps they take to arrive at their output. My initial take was critical: I thought these steps, AKA “chains of thought,” were post-hoc fabrications. Like lying to your math teacher about following a specific formula when really you copied the answer off a friend’s exam. But during the trial, I kept running into research on reasoning models that challenged that view. At this point, I’m less sure that chains of thought bear only an incidental likeness to the processes they’re meant to represent. I want to learn more about this.
I wish I could say who I interviewed and what I learned. I’ll mention that most of my sources seemed very sincere. They really believed that AI is a net positive for society. All of them were thoughtful and eloquent, and most were willing to consider objections to the premises of whatever work they were doing. Not that I pushed them very hard, but I’ll give credit where credit’s due.
Here’s something I feel comfortable mentioning: a lot of researchers working on projects that actually could lead to superintelligence think it’s silly to speculate on whether such capacities are possible or not. That makes me feel a tiny bit better about AI, although it doesn’t change the fact that the industry is run by guys drunk on power who seem constitutionally incapable of distinguishing simulations from reality. One of the problems with the superintelligence framing is that the most advanced AI models can’t be easily tested in open-ended environments, which means their hypothetical capabilities are poor indicators of what they can do in the real world. But I digress.
More than anything else, the trial made me realize how difficult it is to do justice to AI as a scholar and writer, especially when your audience is diverse. I’ve been trying to do this for years, and while I always knew I was falling short, I didn’t appreciate how big my knowledge gaps really were. I suspect a lot of scholars are in this boat, especially if we’re adjacent to rather than neatly within the “computer” disciplines (science, engineering, etc.). Innovation happens quickly, and when you’re taking the time to add context to it, you’re going to fall behind.
For that reason and many others, a lot of courses and publications about AI are so factually incomplete as to be misleading. None of them are perfect. Everybody who’s doing detailed research, teaching, and reporting on this topic misses stuff from time to time. Anybody doing it out of a real commitment to providing accurate information — which in this context is entirely different from “non-ideological” information — deserves a ton of respect.
The trial ended up lasting closer to three weeks. After it concluded, they delayed the hiring decision for a long time. Eventually I was told I didn’t get it. I don’t know if their rationale was bullshit, but I suspect it was, in part for reasons I can’t disclose. (Sorry to keep being cryptic; I just don’t want to get sued.)
They never told me if I could use AI writing assistance or not. I didn’t, since I think of it as cheating unless I’m explicitly informed otherwise. Maybe that’s old-fashioned of me.
While I never bought into x-risk or superintelligence, my outlook changed a bit. I’m less convinced that Silicon Valley’s most flamboyant visions for AI are total drivel. I still think it’s much more pragmatic and ethically defensible to focus on AI’s actually-existing problems as opposed to its speculative harms. In all likelihood, its evils will remain banal well into the future.
IV
One takeaway from all of this is that we need to promote AI literacy. Promoting AI literacy is not the same thing as uncritically boosting AI. At least not any more than we can construe nutrition literacy as advertising for food. The comparison might seem ill-conceived, since food is a vital necessity and AI isn’t. But we’ll be better prepared to deal with AI if we treat it as an inevitability. I know that a lot of people think the AI bubble is bursting right now, but innovation will continue apace. It’ll probably outlive us all. Headlines about stocks, companies, and geopolitical affairs are not good indicators for where the tech is going in the long run.
I’ll also say a few things about how people learn and think about AI. As I mentioned earlier, a lot of publicly available information about it is poor quality. So much AI news is sponsored by the industry itself. It’s not a good sign when reputable outlets like The Guardian strike deals with OpenAI, but considering the economy and the Trump regime’s AI-friendliness, I suspect we’ll see more partnerships like that over time. (I hope literary authors don’t continue to sing the praises of AI-generated fiction, but that’s a personal gripe).
Another problem is that a lot of well-intentioned AI criticism is painfully out of touch with the realities of the technology. Recently I came across an essay that framed the stylistic features of generative AI art as symptomatic of fascist ideology. This was in an established venue, one that’s widely read and vetted by academics. While it’s true that the characteristic distortions of free-to-use apps like DALL-E aren’t going anywhere, at least for a while, it’s silly to speak of AI art as having a particular style, at least for the purposes of serious analysis. The algorithms are way too flexible for that. (This has nothing to do with whether America’s present political circumstances should be described as fascist or not — if you’re still unsure about that, go read anything John Ganz has published since 2023).
That piece was also quick to draw conclusions from the DeepSeek fiasco. I got pretty into this story right after it broke, in part because I thought I might write about it. I soon realized it was too early to speak decisively about what DeepSeek meant for the future of AI, and I’m glad I held off. We do not need more poorly informed takes here. Not from Substackers, and especially not from publications with a budget and an audience of any significant size.
Last but not least, I’m concerned about high-level academic work that’s doing essentially the same job as x-risk content. I’m talking about theoretical interventions that make machine learning algorithms seem more powerful and interesting than they really are. Not from analytic philosophers — they’ve always been chummy with AI — but from critical and media-theoretical scholars. Last summer, I posted an essay about transformer architectures and Timothy Bewes’s Free Indirect: The Novel in a Postfictional Age, a work of literary theory that combines György Lukács’s pre-Marxist writing with contemporary critical philosophy. The essay read Free Indirect alongside recent scholarship that examines peripheral techniques in transformer algorithms through poststructuralist lenses. This felt like weighty scholarship to me, the kind of philosophizing that expands the scope of the thinkable. Nothing else seemed nearly as interesting.
And it might have been nonsense. I don’t think that I understand transformer architectures better than Aurelie Herbelot, a computational linguist whose work has been derailed by them. She sees transformers as complete hot-air technology, at least when it comes to modeling language, which is the use case transformers were originally developed for and what drove my interest in them. For context, I’ll defer to Herbelot’s Mastodon thread re: how transformers hijacked her research career. I don’t think any of the scholars I was reading last summer understand this technology better than she does.
Even highly credentialed researchers need to tread lightly here. Perhaps especially so. When it comes to understanding AI, some of us will be victims of our proclivity for awe or our eagerness to keep an open mind. Others won’t be able to see beyond their own self-styled shrewdness. Those who think they can always spot the difference between corporate swill and useful knowledge might end up ignoring important details as they go off about AI to whatever end (anti-, pro-, merely intrigued…). A little knowledge is a dangerous thing.
If we’re going to live good lives in the algocene — “the contemporary digitally interconnected world of ubiquitous computing, drones, data mining, smart cities, social media, automated trading and other data-driven technologies that are heavily reliant upon algorithmic processes and deep learning networks” — we have to be intentional about the assumptions and worldviews we bring to it. At its core, superintelligence is a worldview. I’m afraid that it seems more relevant to the present day than worldviews too old to explicitly account for AI. Instead of readily dismissing superintelligence, we should try to be curious about its appeal. At least a little bit. If we refuse to take sincere interest in how lots of people are already thinking, we won’t be able to articulate alternative worldviews with lasting power.
If that sounds reasonable, I recommend Arvind Narayanan and Sayash Kapoor’s “AI as Normal Technology.” It’s the best recent work on AI I’ve seen. I truly wish everybody would read it. If you don’t have the time for a 15,000-word essay right now, I put some of its best quotes in a Google doc. Also, it’s going to be the basis of their next book (their first is AI Snake Oil, and it’s a great place to start learning about AI).
Wherever my career takes me next — and I am doing my best to put some cool things in motion — I’ll keep it as human as possible. While I’ve tried to be fair-minded in this post, I’m still haunted by the algocene’s mundane terrors. X-risk scares me a lot less than, e.g., the low-skilled data processing work that’s been fueling automation for years. This labor tends to be brutal, far more psychologically challenging than what I described at the beginning of this post. Any truly useful AI education will address such realities with moral clarity and grounded awareness that this technology is not going away. We don’t need “not ideology,” but ideological purity isn’t useful, either.
That’s a good note to end on. I hope your dreams are full of life rather than screens.