What We Lose by Letting AI Speak for Us

How do you cheat at a conversation? This could have been a Zen koan, in different times. In the unfortunate times we live in, it is the question driving an artificial-intelligence tool called Cluely.

The company’s manifesto says, “We want to cheat on everything.” Right now Cluely is best suited for cheating at computer-related tasks: The program appears as a translucent overlay on top of your screen and can read and answer questions about any text that you’re currently looking at. You could use Cluely to cheat on your homework, or on a test, if that test were taking place on a laptop. But the main thing Cluely seems designed to “cheat” at is live conversation. It can listen in on video or audio calls, provide a real-time summary of what’s being said, offer related information from the web, and, when prompted, suggest follow-up questions or other things to say. If you zone out for a while, you can ask it to summarize the past few minutes for you.

The assumption behind Cluely is that letting an AI pull a Cyrano yields better interactions than relying on your own brain. Curious to test this claim, I tried Cluely out—in casual chats and formal interviews—to see if I could successfully cheat at conversation and to explore how using AI changes the experience of communicating. I came away certain that any understanding or sense of connection that resulted from my Cluely-assisted conversations was despite the AI, not because of it.

First, I used Cluely with my editor as we sat across from each other at a conference-room table, making chitchat. She told me about an issue her kid was having at summer camp; she was open to any advice, man-made or otherwise. Cluely suggested that I say to her: “Approaches include supportive conversations, tracking for ongoing stressors, and involving counselors if needed.”

The advice was fine, if a little generic, but it did not sound like me. It did not sound like any human being. A key element of conversation, according to Deborah Tannen, a Georgetown University linguistics professor, is “conversational style”—the personality of speech, the unique way people say things. AI can’t replicate this. My conversational style is an accumulation of all the people I’ve ever talked to, and all the experiences I’ve ever had. Even if an AI scraped every article I’ve ever written and listened in on all my Zoom calls, it still wouldn’t know the whole of my life. The most efficient way to come up with a statement that sounds like something I would say is for me to just say something.

[Read: The question all colleges should ask themselves about AI]

Of course, you don’t have to read Cluely’s suggestions verbatim. One could argue that the best way to use it is to take its ideas and translate them into your own words. I gave this a good-faith effort in interviews with Tannen and with N. J. Enfield, a linguistic anthropologist at the University of Sydney. (Tannen didn’t know I was using Cluely until I filled her in at the end of the interview. I told Enfield up front because he was in Europe, where privacy and recording laws are stricter.) I integrated Cluely into our conversations as seamlessly as I could. I summoned all of my high-school-theater acting skills and did my best to phrase Cluely’s questions in my own voice. During the interviews, I mixed Cluely’s suggestions with my natural responses and questions I had preplanned. The people I spoke with said they couldn’t tell which questions were from me and which were from AI. When they guessed, they often guessed incorrectly.

So if cheating at conversation means using AI without anyone being able to tell, then for me, in these conversations, the tool worked. But Cluely’s co-founder and CEO, Roy Lee, told me that what he means by cheating is “to be so leveraged”—that is, to have such power on your side—“that you can achieve something that other people would consider unfair.” The company’s manifesto compares Cluely to using a calculator or spell-check; Lee told me it’s like driving a car instead of a horse-drawn carriage. Basically, the company uses the word cheat because it’s spicy, but what it means is doing something more easily and efficiently, and producing a better result than you could on your own.

If I accept this dubious premise that cheating is merely a matter of gaining leverage, then Cluely did not help me cheat at conversations. It made them more difficult, less efficient, and worse.

To evaluate the success of a conversation, you need to ask: What is conversation for? What is the point of talking to anyone about anything?

According to Lee, a good conversation “is one in which both parties extract value” from it, “whether it is an emotional value or it’s a more material capitalist value.” What AI can do, he said, is give you information to help you extract that value more efficiently.

According to Enfield, a conversation has two purposes. One is simple understanding—picking up what the other person is putting down. The other is a social purpose—forming or solidifying a relationship.

The odd thing about inventing a machine to help humans make conversation is that, Enfield told me, humans already are conversation machines—finely tuned ones, at that. Conversation is “a very high-performance kind of activity,” Enfield said. It requires “processing on multiple levels.” You’re not only responding to what the other person says but also anticipating how they might react to you, and intuiting what they already know. “Your performance depends very much on the feedback that you’re getting from the other person,” Enfield said, “and that feedback is mutual.”

In its efforts to augment social interactions, Cluely can actually gum up the works of the human conversation machine. For one thing, it creates delays. Enfield told me that people have “exquisite sensitivity” to disruptions in timing. If the delay has a clear reason—for example, one person takes an enormous bite of food—then the conversation can proceed without much trouble. If not, people tend to come up with a social reason for the timing issue: that the two people in the conversation aren’t clicking, or one person is socially awkward or maybe even doesn’t like the other person. So the time it takes to look at an AI’s suggestion might harm a relationship, even if the suggestion is awesome.

[Read: The costs of instant translation]

Cluely is also extremely distracting. As I Zoomed with Tannen and Enfield, it was always there, in the corner of my eye, tick-tick-ticking away with a live scroll of suggestions for me to click on. Distraction is another thing that humans are sensitive to in conversation, Enfield told me. If one person senses that the other is not paying attention, “then I become disfluent,” he said. “I become less well able to tell my story because, like, Are you following me here?” My conversation with Tannen had a moment like this. I asked a follow-up question Cluely had suggested, and she seemed to freeze for a moment and struggle to answer. Later, Tannen told me that she thought she had already answered the question earlier in the conversation. Looking at the transcript, I saw that she was correct. That’s probably why the AI asked the question to begin with—it was iterating on what she’d just been saying. And I was too distracted to notice. When I told Tannen I’d been using Cluely, she seemed more amused than upset, but did say that using it “violates the ideal of conversation.”

The only time I’ve felt a similarly intense split focus was when I was doing interviews for a podcast I hosted. During those conversations, I would try to stay present with the interviewee on Zoom while looking at my notes in a Google Doc, where my producer would move questions around and make real-time suggestions. Attempting to focus on so many things at once and channel them smoothly into the discussion created a pressure in my head, as though my brain were overheating like a laptop. It was exhausting; on days when I did podcast interviews, I would fall asleep at 9 p.m. Using Cluely was kind of like that, except the suggestions were not helpful.

The ad released when Cluely launched this year shows Lee on a date with a young woman. A giant screen visible only to Lee floats on the table between them, coaching him to lie about his age and pulling up images of the woman’s artwork so that he can soothe her with compliments when she finds out he’s lying and gets upset. The imagery of a giant barrier between them as they’re trying to connect is a pretty perfect metaphor for my experience trying to converse using AI.

When we spoke, Lee said, “The ad is a little bit misleading,” in the sense that “an AI conversation assistant is likely the least helpful when you’re speaking with someone romantically and then the pure objective is just emotional.” Why use that as the example in the ad, then? I asked. “It will get more impressions than a sales call or a customer-support call,” Lee said. (The company seems to have realized that work is Cluely’s most obvious use case for now, and recently updated its website to promote it as the “#1 AI assistant for meetings.”)

This is the thing: Being intentionally provocative is a big part of Lee’s and Cluely’s brands. Cluely began as an app for cheating on job interviews, which got Lee and his co-founder, Neel Shanmugam, in trouble at Columbia University while they were students. They have since dropped out and doubled down on the whole cheating thing. And the provocations seem to be working. The company secured $5.3 million in its initial fundraising round earlier this year, followed by $15 million from the venture-capital firm Andreessen Horowitz. Lee has said that the app has about 100,000 users. His social-media post announcing Cluely, which said, “Today is the start of a world where you never have to think again,” has been viewed 3.7 million times as of this writing. Does he really believe that a world without thought could and should be achieved? “We’re just stirring the pot. It’s Twitter,” Lee said. Points for honesty, I guess.

[Read: The age of de-skilling]

Yet, as with many things done for LOLs, Lee and his colleagues at least halfway mean it. Although Lee claimed that the date scenario was not representative, he also said he has used Cluely on a couple of “e-dates.” Lee contradicted himself a few times when we spoke; he seemed unsure of whether there were limits to the kinds of conversations Cluely is meant to be used for—or at least unsure of what he thought I wanted to hear. “I imagine you would use this for any sort of conversation” in order to get to your goal more quickly, he told me.

“Destination over journey?” I said, and he responded, “Exactly,” before quickly noting that sometimes the journey is the destination, and that, for example, a father who wants to “spend some time blabbing with his infant” probably wouldn’t need AI to do so. Yet Lee also said he believes that the end goal for artificial intelligence is a brain chip, which, if achieved, would mean the possibility of using AI in any conversation. (When I spoke with Enfield, his take was that widespread adoption of tech like Cluely would mean that “everyone who we meet could be a kind of deepfake.”)

Lee’s argument in favor of using Cluely boiled down to: More knowledge is better for relationships. For instance, he asked, if you’re on a date, wouldn’t you like to know if this person is a sex offender? Wouldn’t you like to know if you’re being lied to? The only problem with the date ad, he said, was that the girl wasn’t also using Cluely. If she had been, then he couldn’t have deceived her so effectively. But, I asked, does he consider parroting words that a robot wrote to be a kind of deception? Turns out he does. “But you can’t close Pandora’s box on AI; it’s already out there,” he said. So the solution, as he sees it, is for everyone to use it, for everyone to deceive one another. Cheat on everything. Lee wants to “get people used to a world where everybody’s just using AI maximally.”

This is so very Big Tech, the idea that problems created by technology can be solved by more technology. It’s like trying to solve the distraction caused by smartphones by selling another tech gadget that blocks access to distracting apps. The solution Lee proposed is also, conveniently, the one that would make him more money.

What makes using AI to “cheat” on conversation more troubling than doing math with a calculator, Enfield told me, is that in relationships, investment “is where authenticity comes from. That’s what makes you a genuine person.” Think of a man who asks his secretary to buy flowers for his spouse, and to write a card to go with them. Those flowers mean less than nothing. Because the point of the gesture is not the flowers themselves or the words on the card. It’s the attention and effort that went into them. As Tannen put it to me: “I mean, what is being in love? It means this person has all your attention.”

Even when conversations are transactional, or strictly business, attention matters. My business is journalism, and Enfield told me that knowing in advance that I was using Cluely made him “slightly less interested in your questions.” He said that he would expect a question from me to be a result of thought and care, and a desire to know the answer. But “the AI doesn’t really care about the answer,” he said. “It polluted the conversation.” And that’s bad for business.

Moments after I hung up with Lee, he sent me an email thanking me for the conversation. I knew it had been written by Cluely; it was marked as such, and the app had automatically drafted a similar one for me to send to him. Reading the note didn’t make me feel connected to Lee, or appreciated. It didn’t make me feel much of anything, except maybe a little cheated.