My first encounter with Vauhini Vara’s work was her groundbreaking 2021 viral essay “Ghosts.” Searches revisits “Ghosts” and expands on the project with a series of formal experiments engaging digital technologies; Vara contextualizes these with interrogations of the limitations, benefits, and risks of such technologies as conduits of self-knowledge. Framing all of this is an ongoing meta-dialogue about the book with ChatGPT, as she feeds it two chapters at a time and solicits its feedback. What results is an innovative work of nonfiction that prompts us to think more critically about how we engage with these tools—and more importantly, to what purposes, and at what costs. I spoke with Vara recently over Zoom about her influences and the motivations behind her craft choices. Our conversation has been edited for concision and clarity.
Jules Fitz Gerald (JFG): Your choice to frame Searches with an ongoing meta conversation between you, the writer, and ChatGPT about the book is a fascinating move. In an interview with Sarah Viren at Lit Hub, you described seeing that dialogue as “an intellectual power struggle.” Could you expand on that thought?
Vauhini Vara (VV): I conceived of the back-and-forth with ChatGPT as being about power and manipulation. Throughout the book, I’m engaging with these various technology products as a way to enact the promises these technology companies are making to us, the extent to which they fulfill those promises, and then, importantly, what they get from us in the transaction. Often what they’re getting from us is a perversion of what they say they’re giving us. Google, for example, talks about giving people all the world’s information, but in fact, their business model is based on extracting information from their users. The same, I would argue, is true with ChatGPT. OpenAI describes ChatGPT as a product that is meant to be really good at being obedient, at being manipulated by our desires. So it sets up this potential for a dialectic about manipulation, especially if we understand that OpenAI can be expected to eventually monetize people’s conversations with ChatGPT—if companies like Google and Meta offer any indication—through a business model built on manipulating its users. I was interested in playing with the extent to which I could successfully manipulate ChatGPT while it was attempting to manipulate me. My goal was to show all the ways in which OpenAI, through ChatGPT, uses rhetoric bound up in manipulation.
One could argue that I was successful in that ChatGPT demonstrated those techniques. At the same time, in a meta way, it interests me that after the publication of my book a lot of headlines referenced my use of ChatGPT using terms having to do with collaboration or help, like, “To tell her own story, this acclaimed novelist turned to ChatGPT.” So there’s a level on which I accomplished my goals, but to the extent that more people will read headlines about my book than will actually read the book, one could argue that OpenAI won this rhetorical battle.
JFG: Something you engage with a lot in the book is this idea of complicity, that tension between realizing that a lot of us use these products even as we might see the problems with them. What ethical parameters guide your use of these tools in your creative work?
VV: I’m interested in using these tools for my creative work only to the extent that they are bound up in a critique of the big companies behind the tools. When I started using the predecessor to ChatGPT called GPT-3, I didn’t really know what it was, and at that point, I was fascinated by it and just kind of playing around. I did all these different creative experiments, but the only one I published was “Ghosts,” an essay that is a chapter in the book, which I read as being in large part a critique of the notion sold by AI companies that their products can help us communicate better. I think the medium remains the message, and if we are using products built by big technology companies that are trying to amass more wealth and more power and change our political and social systems, for me it’s hard to produce a work using that product that isn’t partly about that fact. I don’t know that I would offer a position on what other artists should do, but that’s what I find interesting.
JFG: I’ve heard you speak in an interview with Brad Listi about your tendency to hold back from evaluation, based on your journalism training and background. Could you talk about any specific moments in the book where you consciously edited back or omitted analysis and why, or where you decided to explicitly insert analysis?
VV: I think of the book as being pretty heavy on analysis, in that I describe all the ways in which Google, Amazon, Meta, Apple, and OpenAI are exploiting us and our planet, and then I raise a question about how we might create technologies in the future that do empower us and allow us to get all that we want from technology while not being bound up in technological capitalism. Where I don’t go further than that—and this is a deliberate political decision—is to say, here’s the system that I propose. I felt that doing that would be emulating what Sam Altman and Mark Zuckerberg and the other CEOs of those big technology companies are doing, when they describe a future that serves their interests as if that is the only possible future. Instead, I end the book by saying, let’s open this up to the chorus, let’s ask the question rather than answer it.
I write in the second-to-last chapter about Audre Lorde’s conception of guilt: “Guilt is not a response to anger, it is a response to one’s own actions or lack of action. If it leads to change, then it can be useful, since it is then no longer guilt but the beginning of knowledge.” She continues, “I have no creative use for guilt, yours or my own. Guilt is only another way of avoiding informed action.” And that was the position I wanted the book to sit in. I wanted guilt to be one of the subjects of the book, but in conversation with Audre Lorde’s conception that it’s useful only if it can be transformed into collective action.
JFG: With the ChatGPT dialogue, it also feels like it’s a long time before you start to show your cards about how you’re thinking about those exchanges.
VV: Yeah, I didn’t want to point out all the things ChatGPT was doing rhetorically because I really wanted readers to critically engage with that themselves, and of course ask why the author wasn’t pushing back, and then hopefully feel a sense of resolution to an extent when that does happen about two-thirds or three-quarters of the way through the conversation. In order to achieve that, I had to put myself in sort of a Fool position rhetorically, where I’m withholding my interpretation so that the reader can feel that anger or indignation or recognition—does that make sense?
JFG: Yes, I really felt that rub as I was reading! I’m also curious about the choice to call the essays chapters, at least the ones that are not part of that meta conversation. Do you like the term “essay collection,” or would you use a different term?
VV: I actually don’t think of the book as an essay collection. People keep calling it that, and it confuses me. I don’t think of those chapters as essays. There are chapters that initially were stand-alone essays, but when I think about an essay collection, I think of a collection of pieces that stand alone and individually have merit. I think of each chapter of this book as requiring the context outside the chapter in order to be properly read. What I noticed after publishing “Ghosts,” my essay about my sister where I used this predecessor to ChatGPT to write it, was that I wanted people to be able to interpret it however they wanted, but when they did, I realized that in fact I did want my finger on the scale a little, that there was a certain interpretation I was interested in. And it felt like I had to move outside the bounds of that essay to give that explanation in some of the surrounding chapters. For me it all has to be read as one entity or the meaning that I’m interested in the book having isn’t going to come through.
JFG: Is there a term you would use instead, like do you think of the book as a giant collage essay with many different components?
VV: Maybe I would say a book-length essay if I were pressed. I tend not to care that much about those kinds of distinctions—I think they’re more often marketing terms. If you were to ask me to get really technical, one could even argue that this book isn’t entirely nonfiction, because there are things that are fictional, like that chapter in which I’m making that obviously fictional investor presentation. What’s important to me is transparency, that it’s clear to the reader what I’m doing at any given moment of the book. There are some places where it might not be obvious, so I have that “Notes on Process” section at the end, but there are places where, if the reader is reading relatively closely, they understand that what I’m putting on the page is literally untrue.
JFG: I’ve read your published fiction as well and have heard you speak in the past about having written a story in which you made use of AI but didn’t feel comfortable publishing it. Where are you in your thinking about that now? Is it that in nonfiction it’s easier to be transparent about your process?
VV: No, it’s not about transparency, it’s about the rhetorical purpose of the piece. Writing a short story in which I used an AI product to generate some lines of the story—even if I’m transparent about that and have the AI-generated text in a different font or whatever—I don’t really know what the rhetorical meta message of that is beyond like, isn’t it cool that you can write a short story with the help of AI?, which is not a rhetorical message that interests me. This goes back to what we were talking about earlier, where for me, publishing any piece of work that uses AI—or any big technology company’s product—has to involve an awareness of what the rhetorical meta message of that is, and it felt murky to me with that short story.
JFG: That makes sense. I notice how much you use collage in the experimental chapters—especially “Searches,” “Elon Musk, Empire,” and “What Is It Like to Be Alive?”—where you’re in some ways curating these data sets and turning them into art. Were there any particular essayists, poets, or artists in other disciplines that informed those compositions?
VV: What I was thinking about the most with those pieces was actually my experience doing oral history projects as a journalist. In 2019, I did this project for Businessweek where I published oral histories of ten workers around the world with jobs that didn’t exist a generation ago. When you conduct an oral history interview, you’ll often end up with a transcript that’s like 10,000 words long, and for that project, my goal was to condense each one to about 1,000 words.
In anthropological oral history practice, where you consider the subject of the interview the author, they’re really doing the work of determining how to get from 10,000 words to 1,000 words. But as a journalist doing this for Businessweek, my goal was to show how globalization was affecting members of marginalized communities, and I especially wanted to have them talking about things like financial insecurity and having to move to find work, which, if I had asked them, they might have said, oh, this isn’t the most interesting thing to me. That was a conundrum that I was really interested in and troubled by.
I had to do the same with many of those collage-like essays. I used a ten-year period of my Google history to create that essay about my Google searches. I chose to use searches starting with the words who, what, when, where, why, and how in order to narrow it down somewhat, but I was still left with pages and pages of material. Since what I was interested in conveying was the intimacy with which I share information about myself with Google, that oriented my editorial choices. Similarly, chronology is something I always have in mind when I’m editing oral histories, and here too chronology ended up being an organizing principle.
With the last chapter, “What Is It Like to Be Alive?”, where I sent out that survey, I got more than a hundred responses, which could have easily filled a whole book, so again I was curating them to try to construct a collective narrative. The choice of answers that I include and the order in which I put them construct a narrative, and that was deliberate. I was interested in showing both tension and cohesion in the way collective narratives are told, so you see people with very different ways of viewing the world and very different experiences in conversation with one another while also finding some shared meaning.
JFG: I was also fascinated by the chapters with AI-generated visual elements, especially because ChatGPT can’t see them. That was just a delicious layer for me, that it’s responding to these images that it cannot see.
VV: It says, “I can imagine the powerful and evocative photos that would accompany such a rich narrative,” which again is like a use of very specific rhetoric to plant the notion that it has a quote-unquote human mind that does things like quote-unquote imagining.
JFG: What was it like to work with images as a narrative component?
VV: Every time I edited the text, I re-did the query, even if I had only added a punctuation mark, because I wanted there to be integrity in the experiment itself—so that’s a technical answer. I think you’re talking specifically about “Resurrections,” the chapter with AI-generated images, though there are those two chapters before it that give context: one uses mostly non-synthetic images to talk about human visual culture and how it’s evolved, and the way in which visual culture, like textual culture, has been used to re-inscribe hegemonic values and so on, and then the other talks about how those values continue not only to be reinscribed but to be amplified in AI-generated imagery. In “Resurrections,” I was interested in using those images to demonstrate some of the things I had talked about in the previous chapters. Like, if these products are asked to create an image of a tech founder or tech investors, they’re going to generate white men. When asked to generate an image that involves “catering to the deepest desires of girls and women,” the product generates an image of a scantily clad woman in a casino.
For me, what’s especially relevant to that chapter is the fact that I’m a woman, I’m a woman of color, I come from an immigrant family. It’s very deliberate that many of the stories in that chapter are set in a village in India where cameras weren’t readily available at the time, because I wanted to show the impossibility of a big technology company’s products representing those things that they claim to be able to represent. This is in conversation with a discussion in the previous chapter about this organization funded by Google and others that has been trying to generate images as a way to create for people the visual memories that they never captured in their past. I was intending to call into question whether that’s an achievable goal and whether those images, once created, would actually represent something about the consciousness of that individual who doesn’t have a visual aid to attach to that memory, versus the goals of the technology enterprises and the hegemonic culture that built those technology enterprises.
All of which is to say, because of that, I generated lots of different images using the same prompts, and I chose the ones that felt particularly apt in showing that disconnect. I was also interested in showing a growing disconnect over the course of the chapter, because I wanted one of the sources of tension—similar to the ChatGPT dialogues—to be a reader noticing the disconnect, maybe wondering if the author, the narrative voice, was aware of the disconnect, and then seeing as the chapter progresses a widening gap between the text and the image. One of my favorites is the one toward the end where I ask it to generate an image of a vessel carrying women’s objects, and you know from the context of the essay that we’re talking about a vessel like a spaceship—and what the product generates instead is, like, a dish you would find on an old woman’s vanity, a glass bowl with a comb and lipstick and what the product interprets as women’s objects.
JFG: You open Searches with two meaty epigraphs by Audre Lorde and Ngũgĩ wa Thiong’o that speak to the importance of our perceptions about our relationship to language. How are you hoping the book might impact how readers think about language?
VV: I think of the whole book as being about rhetoric, about written and oral and visual language and the power represented by that, the power of individuals and communities but also the power of institutions, and individuals with a lot of power and wealth. Especially in an age in which people are regularly using AI to generate text and images, it is incumbent upon us to recognize the difference between machine-generated and human-written communication and to understand how communication is functioning when it comes from human beings and is addressed toward other human beings, how communication functions when it comes from human beings and is addressed toward corporations, and then how communication functions—and whether it can even be called communication—when it originates in a tech product from a big technology corporation that generates text and images where the end goal is to profit from our use of the product. I hope readers finish the book and have a deeper awareness of how rhetoric is functioning on all those levels.