Reflecting on Human-AI Relations Through Art
An Interview with Lauren Lee McCarthy
Lauren Lee McCarthy’s artistic performances and installations frequently explore technologies such as social media, smart home devices, and AI. In this interview, she reflects on recent advancements in AI, the role of artists in shaping technology, and her hopes for the future of AI.
DAILOGUES: Your work centers around the project Lauren. What is it about?
McCarthy: Lauren is a work that began in 2017, but it has continued to evolve since then. At the time that it began, systems like Amazon’s Alexa were starting to roll out. I was thinking a lot about the increasing presence of AI in our lives. AI was no longer just something out in the world. It was now coming into our homes and getting involved in our personal, day-to-day lives. I was questioning that. What does that mean for us? How do we balance the convenience of these systems or the possible functionality they provide with the way that they encroach on our privacy, our agency, and our ability to control our own lives and homes? So I created the project, Lauren, as a way to explore those questions. In the project, I proposed to replace Amazon’s Alexa with myself. I would come to homes and install a series of cameras and devices, microphones, and appliances. And then I would leave and remotely control them for a period of time. People could talk to me as with an AI assistant: "Lauren, turn on the lights," or "Lauren, play some music." But I was a human on the other end, so I'd be watching and remotely controlling the home for them.
DAILOGUES: In neutral terms, one could say that Lauren depicts automated decision-making and information gathering. How would you characterize the project in less neutral terms? Does it suggest forms of surveillance and control, or rather forms of support and empowerment?
McCarthy: I think it suggests both. What was interesting about the project was that every interaction, every performance of it, was different. It became this relationship between me and the people in their homes that I was controlling. Different aspects came to the fore, depending on who was experiencing the performance. Some people were focused on the feeling of being surveilled all the time, and the cameras became the most important part of it. Others were more interested in the relationship we could have. For them, the exploration of what an AI personality could be in the future became more central. I maintain a critical perspective on AI. I'm not saying that we shouldn't use AI at all, but that we should be aware of what the tradeoffs are. The point of my performances is not to impose a specific point of view, but to open a space for people to make up their own minds.
DAILOGUES: On your project’s website, get-lauren.com, the promise is to be better than an AI by understanding the other as a person. Can this promise still be upheld in light of the newest AI systems, such as ChatGPT or Gemini?
McCarthy: If the promise is to be understood as a person, I believe that's something humans still hold over AI. A lot of AI systems can understand the patterns of humans and maybe even optimize parts of our experience better than any human could. However, something I was trying to offer in the performance was the feeling that there is another human on the other end, that you're feeling the reciprocity of being seen and understood and cared for by another human presence. One question embedded in the performance is how that feels different from being seen, understood, and cared for by a machine.
DAILOGUES: Speaking of current AI systems, in your work "Voice In My Head" you're using a large language model to replicate inner monologues. It is used to intervene in people's activities through earbuds in real time. How has the participants’ experience with this experiment been?
McCarthy: "Voice In My Head" is focused on our inner monologue. It is about the voice in our head that's always running, always thinking or commenting. Apparently, some people claim to have no inner monologue. I'm always curious about that. But the point of the piece is to ask what happens if we replace our inner monologue with AI. We're increasingly shifting to the point where AI constantly primes and suggests things for us. It's almost like our thoughts are being controlled by AI. What happens if we take that to its "natural conclusion", take it all the way? That is the idea the piece is about. We would ask people what they wanted their inner monologue to be like. For a lot of people, for example, the voice in their head is not always the most helpful. Sometimes it's self-doubting, or anxious, or angry. The participants' answers to our question would then be used as the seed prompt that would drive the AI replacement for the voice in their head. I'd say that this set-up plays with a tension. On the one hand, there is the dystopic scenario where our thoughts, emotions, the inner monologue, the thing that's most personal to us, could be replaced by AI or controlled by it. On the other hand, there is the utility of such devices. What if they are helpful? What if they improve our general state of being because the voice is more supportive, more productive, or healthier? I think the experience with the voice varied for each person who used it. The other thing about this piece is that it would clone the sound of your own voice. As it was speaking to you, it would sound like you. A lot of people were struck by this aspect.
DAILOGUES: If people can adjust the voice to their needs, it seems to allow a positive projection of themselves into such systems. In other words, AI can be used for the personal reinforcement that we might want to have.
McCarthy: Yes, potentially. It depends on how comfortable we are with letting go of that control of saying, okay, I'm going to have the system generate my thoughts for me because I think that would be more positive than whatever comes out of my head, which of course is influenced by everything around us anyway. I'm not saying that it's right or wrong but asking that we take a moment to reflect on what that means for us or how we feel about it.
DAILOGUES: The theorist Donna Haraway uses the narrative of a cyborg as a utopian vision for blurring the often-problematic divisions between men and women, humans and machines, and humans and nature. Put differently, through the cyborg, Haraway seeks to merge humans with technology to transcend such divisions. In a way, bringing an AI voice into our heads makes us more like these cyborg-like creatures. Do you think that is a desirable path forward, or should we rather preserve what is genuinely human?
McCarthy: Let me answer this with another story. I also work as a professor, and there are a lot of instances where AI is increasingly being used by students. I was talking to a friend of mine who works at a different educational institution. She told me about her experience of teaching a class online. She taught over Zoom, everyone's video was off, and then she'd ask questions. Many students, rather than speaking, put their answers in the chat to participate. However, she could tell that the answers the students were giving to her discussion questions were generated by ChatGPT. She felt that this was not utopian but rather dystopic, that she was performing a course for people who dissociate themselves by using AI to respond. The meaning of the classroom interaction was gone. In a similar fashion, many have started to use AI to draft their responses to emails or text messages. I think there's a risk of us going into this autopilot mode.
DAILOGUES: Haraway's vision of an emancipatory cyborg might not be Microsoft's copilot.
McCarthy: Haraway had a rather idealized vision of what such a being would be. However, I find questions like how technological systems are built and designed, as well as who is making the tools for whom, essential to addressing whether any AI can be utopian.
DAILOGUES: Do you think technology, taken by itself, is neutral?
McCarthy: No, I don't. Every tool, every technology, every model is something that's created by humans with a particular point of view. Whenever something is created, certain assumptions are made about how the world is or how it works. Those assumptions are incorporated into technologies. Think about walking on the street. There's a street curb with the assumption that there should be the road and there should be the sidewalk. Then think about a ramp, which allows strollers, wheelchairs, and people who can't easily get from one level to another to move. If you compare the curb with the ramp, you notice that different value systems are embedded in them. The value system of the ramp is based on access for all sorts of people and vehicles. The curb mostly favors the movement of cars. I'm giving this mundane example because one could believe that a sidewalk is just a simple construction; obviously, it must be neutral. But even with something as simple as sidewalks, we can observe that they are imbued with values. When we think about an AI system, the number of assumptions made is much larger because the system is more complex. What data is it being trained on? How does it work? How does it get applied? It is the developers of such a system who decide these questions. Often, their decisions are implicit because our worldviews usually seem neutral to us.
DAILOGUES: Where and how can artists help us with shaping our technological future?
McCarthy: I believe that in the design of new technology, a wide range of voices needs to be represented – not just the people who are experts in AI, computing, or hardware – because technologies affect a great variety of people. This is one motivation of my work. I want to give people the feeling that everyone can have a point of view. However, the narrative coming from tech is often that most technologies are so advanced and opaque that there's no way the everyday person could even understand what they are. I'm pushing back on that and saying, no, I think we can understand all technologies because we're experiencing them in our everyday lives all the time. That's a first step toward the public being more engaged and feeling more agency in terms of where we're headed as a community or as a society. I think artists can help facilitate this engagement. Artists can also imagine things that don't already seem possible. When you're working outside of the arts, the constraints and parameters, the realities of what is currently happening, take high priority because you're trying to make things that fit into the existing world. The logic that artists work with often allows them to go beyond what seems feasible, practical, logical, or even possible. If you really want something different to happen, you have to start by imagining it. Therefore, you need those people who can imagine an alternative future, guiding the way toward seeing it realized.
DAILOGUES: Let’s come back to engaging the broader public. The arts have an ongoing struggle with exclusivity and elitism. Similarly, AI, as a highly specialized field, is also exclusive and often elitist. How can AI, like art, be more democratized? What can AI experts learn from artists?
McCarthy: I think in both spaces, there's often a feeling that if you're not on the inside, if you're not an expert, then you cannot possibly understand what's happening at the cutting edge of it. You wouldn't have anything meaningful to contribute. In both cases, that's a fallacy. For example, the most interesting art for me is the art that really engages everyday people. If there were more art like that, the public would not be so uninterested in it. It could really be a tool for having dialogue as a community. With technology it's the same thing. If we had the view that the everyday person or user of technology has something meaningful to contribute to its development, then we would have different conversations. I don't think that's the position most tech companies are taking right now. They often see users as just that: users, consumers of technology, or potentially even as data, the product that they're selling. Both fields need to open up; there needs to be a shift in consciousness about what the role of the public is and can be.
DAILOGUES: What is your greatest hope regarding our technological future, including AI?
McCarthy: My utopic vision is that we end up on a different trajectory with technology than we are on right now. Currently, we're developing systems that increasingly prioritize a small group of people and reinforce structures of inequality, inequity, power dynamics, and hierarchies. It feels like this makes life better and easier for some people at the top and continues to make the conditions of living harder for a lot of other people around the world. My hope is that this shifts, that technology can become more democratized, can be more shaped by the people who use it, and that it can serve them. This requires a whole rethinking of how we approach the development and rollout of technology, including the ways it's funded and distributed. So that's my hopeful view. Again, you have to believe that some of these things are possible if they're going to have any chance. I also believe in the power of the arts to open minds and offer new narratives and new possibilities. And maybe we'll get there.
We thank Lauren Lee McCarthy for the DAILOGUE.
About the Author
Lauren Lee McCarthy
Artist and Professor at UCLA’s Department of Design Media Arts