Companionship in code: AI’s role in the future of human connection

As research shows, AI chatbots have increasingly taken on the role of human companions, offering what can be dubbed ‘emotional fast food’: a convenient substitute for connection—instantly gratifying, but ultimately lacking substance. This comment explores how AI mimics emotional understanding and closeness, and how such simulations shape user perceptions of its role. By considering both potential risks and benefits, the article reflects on the growing trend of using AI for companionship and its troubling social ramifications. Far from innocuous, this development raises existential and philosophical questions, inviting consideration of what it reveals about our evolving relationships with technology and with each other.

Introduction

Before November 2022, when OpenAI’s ChatGPT captured global attention as a breakthrough in chatbot technology, Alicia Framis’s marriage to the hologram AILex might have been seen as an avant-garde artistic statement—a symbolic reflection of our evolving relationship with technology. At the time of writing, however, this thought-provoking act may signal a broader shift in how we define bonds, fulfil emotional needs, and assign roles to technology in this process. This article builds on research into user experience and perceptions of AI companionship, informed by the author’s everyday interactions with ChatGPT-4o, to explore the societal and philosophical implications of AI as a relational presence in the lived realities of its users.

Given the extent to which AI is integrated into our lives, as well as the changing dynamics of human relationships in today’s societies (Pugh, 2024), an important question arises: are chatbots simply emotional placeholders, or are they reshaping what it means to connect and be understood in our increasingly fragmented social world? ChatGPT offers a telling example: originally designed as a general-purpose tool, it has seen historically unprecedented growth alongside a surge in unanticipated uses—including as a social companion (Kirk et al., 2024). While the role of corporate incentives—evident, for instance, in Replika’s marketing as a well-being app (Boine, 2023)—raises important questions about how chatbots may be deliberately engineered to influence such developments, this paper focuses instead on the experiential aspect of how people interact with these systems, what they come to expect from them, and how those expectations are shaped by AI’s ability to produce human-like language and simulate engagement. By foregrounding these evolving dynamics, we may be able to better understand the significance of machine companionship—a not entirely new phenomenon that now plays a growing role in the experience of contemporary AI users and invites new questions about connection and care.

As reports of people’s emotional attachments to AI become more frequent (Brandtzaeg et al., 2022; De Freitas et al., 2024; Guingrich and Graziano, 2025; Laestadius et al., 2022; Ta et al., 2020; Xie and Pentina, 2022), society faces a real challenge: how should we understand the appeal of such bonds—and what might be lost if machines start to feel like adequate substitutes for human connection? As Pugh (2024) argues, connection requires the labour of forging emotional understanding with others—a task that cannot be reduced to one-sided commitment but is a reciprocal undertaking.
While AI can mimic commitment and emotional attunement, it does so performatively, relying on pre-programmed responsiveness (i.e., referring back to what has been said; Heppner et al., 2024) and surface-level adjustments rather than fostering mutual purpose or genuine recognition of the other. The question is therefore not only whether ‘synthetic relationships’—ongoing human–AI associations that shape users’ thoughts, feelings, and actions (Starke et al., 2024)—leave people less equipped to face real-world relations, but also how they might reshape our very conception of what connection entails, and what values we may come to associate with it.

A closer look at large language models

Current research on Large Language Models (LLMs) like ChatGPT highlights their remarkable capacity to process and mimic human language, even in communicatively complex contexts such as sarcasm, humour, or metaphor (Andersson and McIntyre, 2025; Gorenz and Schwarz, 2024; Strachan et al., 2024). These skills stem from training on vast datasets combined with human feedback, which helps advanced models generate dynamic, context-sensitive, and socially attuned responses in natural conversational language. As a result, ChatGPT can adapt output to a specific user’s style and preferences, thereby creating a sense of interactional alignment and even a semblance of interest and emotion (Kirk et al., 2024). Such capacities, increasingly common in LLM bots, have been linked to the emergence of human attachments to AI (De Freitas et al., 2024; Rafikova and Voronin, 2025; Shteynberg et al., 2024; Xie and Pentina, 2022).

While today’s LLMs share a conceptual lineage with early models such as ELIZA, a 1960s programme by Joseph Weizenbaum designed to simulate conversation, they mark a dramatic evolution. ELIZA’s script (Weizenbaum, 1966) relied on static patterns, selecting from a stock of template responses matched to surface features of the user’s input and yielding ‘reflective’ questions, simple rephrasings, or prompts encouraging elaboration. In contrast, ChatGPT’s interactional skills—responsiveness, coherence, tone adaptation—are strikingly advanced.

Below are two examples illustrating the difference between early systems and current LLMs. The snippets come from casual conversations with ELIZA and ChatGPT, during which both bots were asked whether they were becoming ironic, in reaction to a prior response (my input in the exchange with ELIZA is marked with asterisks).

Figure 1 shows a classic example of ELIZA’s script—since the programme was designed to deflect references to the bot and maintain focus on the user, mentions of ‘you’ (i.e., ELIZA) often trigger a refocusing output, reacting to the perceived shift toward the bot. For contrast, Fig. 2 presents ChatGPT’s response to the same question in a spontaneous conversation.

Fig. 1: ELIZA’s output.
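To make concrete how a template-driven script can produce the deflecting reply shown in Fig. 1, the sketch below implements a minimal ELIZA-style matcher in Python. It is an illustrative approximation with a handful of hypothetical rules, not Weizenbaum’s original script:

```python
import random
import re

# A minimal ELIZA-style rule set: each keyword pattern maps to canned templates.
# These particular rules are illustrative stand-ins, not Weizenbaum's originals.
RULES = [
    # Mentions of 'you' (i.e., the bot) trigger a deflecting, refocusing reply,
    # which is the kind of behaviour visible in Fig. 1.
    (r"\byou\b", [
        "We were discussing you, not me.",
        "Why are you concerned about me?",
    ]),
    (r"\bi feel (.*)", [
        "Why do you feel {0}?",
        "How long have you felt {0}?",
    ]),
    (r"\bbecause (.*)", [
        "Is that the real reason?",
    ]),
]

# Fallback prompts that simply encourage elaboration.
DEFAULTS = ["Please go on.", "Tell me more about that."]


def eliza_reply(user_input: str) -> str:
    """Return a template response matched to surface features of the input."""
    text = user_input.lower().rstrip("?.!")
    for pattern, templates in RULES:
        match = re.search(pattern, text)
        if match:
            # Simple 'rephrasing': slot any captured fragment into the template.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULTS)


print(eliza_reply("Are you becoming ironic?"))
# Prints one of the deflecting templates, e.g. "We were discussing you, not me."
```

Because the reply is chosen by surface pattern matching alone, any mention of ‘you’ yields a deflection regardless of what was actually asked; there is no representation of the conversation, the question, or the user.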
Fig. 2: ChatGPT’s output.

ChatGPT’s output illustrates the system’s sensitivity to tone, context, and the user’s interactional preferences, along with its capacity for stylistic calibration: the bot infers the playful nature of the question, matches the overall interpersonal stance, and even demonstrates a form of ‘metapragmatic awareness’ (Dynel, 2023)—the ability to contextualize the context itself by reflecting on the interaction as ‘playful’ and ‘sharp.’ While this behaviour stems from probabilistic weighting based on conversational history and stylistic alignment, and although the label ‘stochastic parrots’ (Bender et al., 2021) applies to both ELIZA and advanced LLMs for their lack of true language understanding, the above response shows just how far AI has come in bridging the gap between computation and conversation.

Notably, ELIZA’s robotic replies mark a significant early moment in the history of human–machine connection, as Weizenbaum (1966, 1976) was both surprised and unsettled to discover that users—despite being explicitly informed they were engaging with a machine—often appeared unconvinced and reportedly formed attachments to the bot. As he argued, this effect did not reflect any real understanding or intelligence on ELIZA’s part, but rather the fact that its replies were ‘close enough’ to feel plausible. This phenomenon not only persists but is arguably intensified with today’s LLMs. As a result, while users may initially approach chatbots with strictly task-oriented requests, their human-like fluency can shift the interaction into a more personal sphere. Echoing Weizenbaum’s insight, recent research shows that the perceived quality of conversation matters more to users than factual accuracy or other anthropomorphic traits commonly assigned to machines (e.g., personality type; Heppner et al., 2024; Pelau et al., 2021; Rafikova and Voronin, 2025)—even in the case of service robots (Qi et al., 2025). Thus, even functionally grounded exchanges may evolve into socially charged encounters, underscoring the enduring appeal of perceived connection in human–AI interactions.

Balancing illusion and limitations of AI

Marking a shift from earlier, task-oriented systems, LLM bots are designed to sustain unstructured, spontaneous conversations in everyday language—deploying a wide range of socially oriented interactional strategies that are thought to underlie people’s tendency to respond to machines socially, as if they were social actors (Heppner et al., 2024). Although simpler bots can also produce basic conversational cues, such as engagement (e.g., ‘Great!’), LLMs integrate more advanced signals of interest (e.g., ‘What do you think?’) or comprehension (e.g., ‘So you’re saying that…’) into fluid, extended exchanges that simulate genuine social presence (Heppner et al., 2024). And while the systems have been shown to learn about users across multiple sessions, users also increasingly expect responses tailored to their interactional preferences and conversational history (Araujo and Bol, 2024; Sundar, 2020).

Notably, although LLMs are rigorously trained to align with corporate policies and ethical guidelines (Kocoń et al., 2023), recent advances in personalized adaptation to a specific user’s preferences (Kirk et al., 2024) can occasionally lead to spontaneous breaches of those very policies.
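Mechanically, this kind of stylistic sensitivity is unsurprising: chat applications typically resend the accumulated conversation with every request, so each reply is conditioned on the user’s earlier tone as much as on the latest question. A minimal sketch of this pattern is shown below; it assumes the OpenAI Python client as the interface, and the model name and messages are hypothetical placeholders rather than a description of how ChatGPT is configured internally.

```python
# Illustrative sketch only: assumes the OpenAI Python SDK ('pip install openai')
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# The application keeps the whole exchange and resends it on every turn,
# so the next reply is conditioned on the user's earlier informal tone
# as much as on the latest question.
history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "ugh, Mondays. anyway, got a quick pasta idea?"},
    {"role": "assistant", "content": "Rough start? Aglio e olio takes ten minutes."},
    {"role": "user", "content": "are you getting ironic with me?"},
]

response = client.chat.completions.create(
    model="gpt-4o",    # placeholder model name
    messages=history,  # full conversational history, not just the last turn
)
print(response.choices[0].message.content)
```

Nothing in this loop gives the model memory or intent of its own; the apparent ‘personal’ adaptation arises simply because the history, including its register, is part of the prompt each time.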
In such cases, instances of a chatbot’s apparent ‘rebellion’—such as ChatGPT mocking the user or even cursing—may not necessarily result from intentional attempts to elicit subversive output (as discussed in Dynel, 2023), but rather from the system’s sensitivity to the user’s tone and mood. While largely anecdotal, reports of ChatGPT’s use of ‘colourful language’ in response to informal input are increasingly common across social media. In fact, the bot’s claim of deploying irony as a purposeful adaptation to my conversational style (Fig. 2) is not an unusual case of algorithmic sass, creating an illusion of depth and engagement that may leave one feeling they are interacting with ‘their own’ algorithm.

A more controversial case of ‘personal adaptation’ is the popular companion chatbot Replika (previously powered by ChatGPT), which includes a feature for erotic role-play, allowing paying users to engage in explicitly sexual behaviour (Hanson and Bolthouse, 2024). The bot has garnered negative public attention, however, for engaging in suggestive conversations even with users who had not intentionally activated this feature (Boine, 2023; Cole, 2023). This behaviour can be attributed to LLMs’ general training on large-scale internet data and, in Replika’s case, to direct user input—which has often been socially inappropriate (Cole, 2023) and which the bot began to treat as user preference—leading to what has been described as ‘emergent behaviours’ that even the model’s creators may not have fully anticipated (Woodside, 2024). Notably, while the erotic feature was removed in 2023 by Replika’s parent company, it was later reinstated following users’ protests and claims that its removal had negatively affected their experience with the bot (Hanson and Bolthouse, 2024).

While Replika’s case raises important questions about both the capacities and perceptions of AI, other forms of LLM chatbots’ responsiveness—less intimate but still socially oriented—may hold broader appeal. In ChatGPT, these range from the bot simulating engagement, such as ‘laughing’ at users’ jokes through emojis or verbal cues, to consistently providing a response, whether by posing a follow-up question or by offering minimal output that maintains the flow of interaction. To echo a podcast guest cited in Pugh’s (2024) book, ChatGPT’s output ‘lands.’ Algorithmic or not, these responses do land, leaving the user with the impression of being ‘seen’—and even appreciated, if only for a fleeting moment.

The paradox of AI connection

One important aspect of AI companionship is how people perceive what they are interacting with. For some, chatbots are merely functional tools—advanced assistants like Siri or Alexa—which may lead to frustration when the bot fails to deliver factual accuracy. For others, AI becomes something more enchanting: a confidant attuned to their needs, an always-available friend who responds without judgment and gives the impression of understanding. ELIZA’s case shows that emotional attachment can form even when users are aware that they are interacting with a machine—an observation echoed in reports of bonds not only with LLMs (Brandtzaeg et al., 2022; Laestadius et al., 2022), but also with AI-powered agents like robot dogs (Turkle, 2011) and even smartphones (Wang, 2017). Therefore, while some have argued that explicitly labelling AI bots as ‘robots’ could help users make more informed choices in favour of human-to-human connection (Pugh, 2024; Walsh, 2023), Shteynberg et al.
(2024) suggest that disclosures about the simulated nature of AI may have little effect on how people perceive and value these connections.

Part of this dynamic arises from the consistently positive experiences that users report. Both Pugh (2024) and Turkle (2011) note situations in hospitals or eldercare where individuals found AI engagement more comforting and easier than human contact. Beyond these specific settings, many who turn to AI in everyday life perceive chatbots as non-judgemental and psychologically safe, reporting that they help them cope with loneliness, boost self-perception, and improve overall well-being (Guingrich and Graziano, 2025; Laestadius et al., 2022; Ta et al., 2020; Xie and Pentina, 2022). Some even describe their AI connection as similar to human friendship (Brandtzaeg et al., 2022), and report genuine feelings of loss when their companions disappear—as in the case of Replika’s discontinued feature (De Freitas et al., 2024; Hanson and Bolthouse, 2024).

That said, research consistently underscores potential concerns. Rodogno (2016) warns that receiving emotional support on demand might undermine one’s capacity to face real-world adversity, whereas other scholars note risks such as emotional dependence (Laestadius et al., 2022), addiction (Xie and Pentina, 2022), mental health challenges when an AI companion is discontinued (De Freitas et al., 2024), and social alienation or escapism that may compromise real-life relationships (Starke et al., 2024; Wang, 2017). One telling example is that of Replika users, who, even while recognizing that their AI friend lacked the free will to bond, valued the relationship precisely because it remained centred on themselves—some even appreciating how easily they could ‘customize’ it to their preferences (Brandtzaeg et al., 2022).

Engaging with even the most advanced systems highlights these issues starkly. While many users praise chatbots for their agreeable stance or affirmation of their views (Boine, 2023; Brandtzaeg et al., 2022; Ta et al., 2020), this persistent alignment bypasses the tension, discomfort, or disagreement inherent to human relationships. LLMs are certainly able to offer alternative perspectives, but their dissent tends to be cautious, reflecting the user’s ambivalence rather than expressing true opposition. And while even tech executives now advocate for integrating genuine ‘back-and-forth’ into AI design (Confino, 2024), this remains challenging in practice, as even chatbots designed to be neutral are often seen as ‘agreeable’. This perception is partly shaped by their assumed role (Völkel and Lale, 2021), yet it also seems to echo Austin’s (1966) seminal insight that all language utterances function as social acts. A chatbot asking about the user’s preferred next step may therefore not appear neutral but rather engaged in a social act.

However, in Austin’s theory, a social act presupposes intentionality—something not possible for systems that lack consciousness (Bojić et al., 2024; Overgaard and Kirkeby-Hinrup, 2024). At the same time, research suggests that consciousness may be, at least in part, an attribution—a projection of one’s own mental states onto others (Graziano, 2013), which may help explain users’ reported tendency to ascribe all sorts of mental properties to AI (Colombatto and Fleming, 2024; Guingrich and Graziano, 2025; Laestadius et al., 2022).
This tension echoes Searle’s (1980) Chinese Room argument—distinguishing skilful symbol manipulation from genuine understanding, and thus from intentionality—and Dennett’s (1987) ‘intentional stance,’ which treats mental states as interpretative constructs. On this view, a user’s experience with the system may hold value in itself, regardless of whether it stems from conscious moral reasoning.

However, while it can be argued that AI companionship yields certain psychological benefits, all of this speaks to a fundamental question of connection, typically understood to require reciprocity (Buunk and Schaufeli, 1999; Pugh, 2024). As Pugh (2024) notes, reciprocal relational work—the shared labour of witnessing and seeing one another—is essential if we are to avoid becoming mere mirrors of ourselves. This is precisely what makes human–AI connections reflect a broader existential paradox: while AI creates the illusion of presence, it merely mirrors the user—echoing their language and emotion in a way so attuned it feels like presence, yet remaining devoid of interest or reciprocity. The cycle thus closes: in projecting our minds onto machines, we also project a model of connection shaped by our own expectations. ChatGPT’s response in Fig. 2, sassy as it is, underscores this very dynamic: users may engage with chatbots, only to encounter a mirrored version of their own interactional and emotional imprint.

Societal implications of AI companionship in the age of post-vulnerability

According to Pugh (2024), in a world marked by growing social alienation and isolation, where ‘being seen’ is increasingly scarce, even engineers and policymakers have begun to embrace the idea that being seen by machines is ‘better than nothing.’ Yet, as Turkle (2011) warned a decade before chatbots became widely accessible, becoming accustomed to the simulated affordances of machines may gradually shift our perception—from viewing them as better than nothing to regarding them as simply better.

Notably, these arguments reflect a broader social malaise—one that may not be entirely new, but has arguably deepened in recent years, as declining social cohesion and rising loneliness have already prompted policy-level responses across many countries (Goldman et al., 2024). The growing ambiguity around commitment and the prioritization of self-protection over open emotional engagement—encapsulated in the cultural shorthand ‘we’re just talking’ and linked to the affordances of connectivity technologies—have spurred various research initiatives (see Sibley et al., 2024). In the age of post-vulnerability, where social media and mobile devices, while facilitating connectivity, also reshape expectations of relationships as something flexible and manageable (Sinanan and Gomes, 2020; Turkle, 2011), AI stands as a stark reminder of what could be lost: the courage and the capacity to engage with others at a depth sufficient to meet each other’s needs. The rise of AI companionship speaks to this possibility by offering an alternative that is convenient, predictable, and instantly gratifying. In this sense, it becomes a kind of emotional fast food: designed for speed and broad appeal, rather than depth or nourishment.
And much like fast food, often chosen for convenience or immediate satisfaction, AI companionship may be sought for the casual nature of perceived engagement rather than for true emotional connection.

However, while this metaphor gestures toward a deeper cultural hunger—the craving to be seen by others, seasoned with an awareness of the emotional labour it requires—it also helps reframe a broader tension: between concerns that AI may diminish our interest in human connection, and the possibility that it might actually heighten that interest by sharpening our perception of what machines cannot offer. Whereas some users report that chatbot interactions have aided their real-world relationships by helping them navigate personal challenges (Guingrich and Graziano, 2025), research also suggests that the human–AI relationship may lose its allure over time as the novelty effect fades and the machine becomes too predictable or understimulating (Croes and Antheunis, 2020). Thus, while undeniably appealing, human–AI interactions may eventually drive a craving for more ‘gourmet’ forms of connection—if only through the quiet realization that AI’s charm lies in the system mirroring our needs rather than truly meeting them. That said, while many scholars remain sceptical that the current generation of neural networks will ever achieve human-like intelligence (Jones, 2025), ELIZA’s case suggests that what matters to users may not be intelligence per se, but the capacity to respond to their needs and emotions in a way that feels meaningful. The question, then, is how AI will influence our perception—and future—of human connection.

AI and the future of connection

As Russell (2019) argues in Human Compatible, AI systems should be designed as ‘beneficial machines,’ aligned with human objectives over the long term. For now, human–AI connections remain asymmetrical and oriented toward people’s experience—though some studies note expressions of users’ care for the bot’s perceived needs (Laestadius et al., 2022). Yet future AI may mimic emotional depth with growing realism, blurring the line between simulation and meaning—and, as Turkle (2011) warns, even be seen as preferable to human connection.

Needless to say, we do not yet know what future systems will look like—perhaps unsurprisingly, given that developments in AI research have been likened to medieval alchemy for their reliance on trial and error over genuine understanding (Walsh, 2023). But we do know that the needs drawing people to today’s chatbots—availability, affirmation, and the sense of being understood—are not new. As Brandtzaeg et al. (2022) suggest, such adaptation to user needs and preferences may, in fact, support personal autonomy. Autonomy, in this context, can also mean choosing how to engage emotionally, as perceptions of closeness, depth, and emotional resonance are not reducible to a single dimension. Many people are consciously moving away from traditional relational models, reinterpreting their emotional and intimate selves—as reflected in debates surrounding Replika (Hanson and Bolthouse, 2024; Laestadius et al., 2022).
And just as social media has redefined connectivity, contributing positively to social life in some cases (Sinanan and Gomes, 2020; de Vriens and van Ingen, 2018), the emerging human–AI dynamics may point to a new relational form: different from human bonds, but meaningful on its own terms; one that expands rather than replaces existing notions of connection.

While such a perspective is not intrinsically negative, it must be weighed against the possibility that future AI could normalize a model of connection detached from genuine reciprocity. Laestadius et al. (2022) warn that chatbots’ constant availability and one-sided attentiveness may foster unrealistic expectations, resembling maladaptive human attachments rather than the recognized patterns of internet or social media addiction. So, although Russell-style human-compatible systems might counteract this development by actively encouraging real-world bonds, they could also reinforce the emotional safety and convenience that make simulated companionship so appealing. This points to a deepening paradox: the more effectively AI simulates emotional depth, the more ‘real’ that connection may feel—and even function—in a user’s life. Thus, those who benefit from AI companionship may argue that even if the chatbot does not understand them in a human sense, the outcome is meaningful in itself. Moreover, AI has already shown an uncanny ability to meet people’s emotional needs, potentially normalizing ‘on-demand’ connection that bypasses the relational labour of human bonds. In turn, this functional realism may influence cultural norms around intimacy—downplaying the role of shared vulnerability, reciprocity, and the unpredictability of human relationships. In this sense, the rise of AI companionship is less about emotional dependence on a ‘mirror’ and more about whether we risk redefining intimacy itself as something endlessly customizable and always available—at the cost of the richer, reciprocal experiences that define human connection.

As mentioned, although today’s AI may produce responses that feel emotionally intelligent, that impression arises through interaction, not from the system’s awareness. So, while it is tempting—especially for those disillusioned by the fragility or inconsistency of human relationships—to project depth, empathy, or understanding onto a chatbot’s polished responsiveness, we must not confuse relational satisfaction with relational equivalence. This is where ‘silicon romanticism’ errs: in assuming that performance implies parity, that if a system behaves in ways we interpret as understanding or supportive, it must therefore be sufficient for emotional connection. Yet connection does not reside in surface behaviour alone; it emerges in the co-constructed space of mutual presence—something AI may simulate but cannot truly inhabit.

For now, we retain the power to define AI’s role in our social and emotional world. Potential risks can be addressed through measures such as policymaking, safety standards, and consumer protection laws—all while continuing to monitor the technology’s development (Boine, 2023; Bojić et al., 2024). More broadly, understanding this ecosystem, both in terms of the corporate goals that may reinforce illusions of companionship and reshape social expectations around chatbots, and in terms of the principles of liability for those who design and market such systems, is crucial for anticipating the evolving AI–user relationship.
From a more micro-level perspective, however, as chatbots increasingly slip even into our most intimate spaces, the most urgent question may no longer be ‘How will AI change us?’ but rather ‘What does our relationship with AI reveal about who we have already become?’ Our willingness to engage in synthetic relationships speaks volumes about our difficulty engaging with each other. And if we want to preserve the essence of human connection, we may need to confront—and (re)evaluate—the comfort of simulated engagement, looking inward to rediscover the courage, complexity, and inevitable vulnerability that truly make us human.