Introduction

ChatGPT, a chatbot built on a large language model and developed by OpenAI, was introduced to the world in November 2022. This state-of-the-art language model took the world by storm when it was first released, and many questions about its capabilities remain open. It is designed to provide human-like responses to users and has therefore received significant attention (King, 2022). ChatGPT users can ask questions and make requests, and the system responds in seconds (McMurtrie, 2022). In 2022, many news outlets, such as The New York Times, the Washington Post, and The Atlantic, published articles about ChatGPT. This highly anticipated new technology garnered more than a million users in the first week of its release (Jin and Kruppa, 2023; McMurtrie, 2022). So far, ChatGPT is considered one of the most advanced chatbots introduced to the world and has attracted much attention (Elyoseph et al., 2023). People in education, research, and policymaking discuss its pros and cons and make their own predictions. Moreover, the release of any new technology provokes strong opinions and emotions, and ChatGPT is no exception (Imran and Almusharraf, 2023). News and social media platforms carried many controversial responses making predictions about its effects on the future of the world and humanity. Users and adopters of the new AI technology have shared their experiences, opinions, and questions on social media, and their responses on X ranged from doomsday to utopia predictions and from mind-blowing to terrifying. The advent of ChatGPT quickly prompted researchers from different fields to publish articles on its effectiveness and potential concerns. Huh (2023) noted that in academia, some proposed that traditional teaching methods would be replaced by ChatGPT. On the other hand, Qadir (2022) argued that essays generated by ChatGPT can be identified when carefully read, as they contain fake references and inaccurate statements.
Moreover, Pavlik (2023) discussed the strengths and weaknesses of ChatGPT and what it can offer to media and journalism education. Other researchers compared ChatGPT's exam-taking ability to that of students, and their findings indicated that it is not yet comparable to students' ability and knowledge (Aslam and Nisar, 2023; Bommarito and Katz, 2022; Huh, 2023; King, 2022). In addition, others concluded that ChatGPT can be utilized in teaching students how to write and in generating ideas for research purposes (Aydın and Karaarslan, 2022; Dowling and Lucey, 2023). Due to the novelty of the topic, no articles were found exploring writers' stance and attitude towards ChatGPT on X or any other platform. Also, no studies have investigated how writers of English and Arabic engaged with their readers when discussing ChatGPT on X. However, Haque et al. (2022) explored the sentiments of early adopters of ChatGPT on X. They noted that understanding users' sentiments is essential and provides valuable information about ChatGPT's strengths, weaknesses, and future potential. In their study, they used topic modeling to identify the main topics. The findings indicated that the majority of early adopters of ChatGPT expressed positive sentiments, while a few expressed concerns about misuse in education and privacy matters. A further study relevant to the present one was conducted by Bian et al. (2016), who explored Twitter to learn more about public sentiment towards the Internet of Things (IoT). Similar to Haque et al. (2022), they conducted both topic modeling and sentiment analysis. Their findings indicated that users are more interested in business and technology than in the other domains of IoT. Additionally, Al-Khalifa, Alhumaidhi, Alotaibi, and Al-Khalifa (2023) explored how early Arab users of ChatGPT discussed it on X, focusing on the topics, sentiments, and sarcasm in tweets posted in Arabic.
Their findings indicated strong positive perceptions, in addition to some concerns regarding the ethical risks of ChatGPT among Arabic speakers. Their findings also showed that Arabic speakers were more objective than emotional when discussing ChatGPT on X. Studies comparing Arabic and English discourse have suggested that attitudinal resources are more explicit in English than in Arabic. Alotaibi (2021) noted that English academic writing tends to be objective and impersonal, with writers using hedging strategies to present claims cautiously, whereas Arabic academic writing is more personal, with writers employing rhetorical devices to engage with readers and establish credibility. In Arabic, writers convey attitudes and stance through verbs, stylistic devices, and contextual cues; in English, writers employ explicit linguistic markers such as adverbs, modal verbs, and lexical choices to convey attitude and epistemic stance. Other studies have also used X to collect essential information about hybrid work models and to predict the public's views on the political elections in Pakistan (Nawaz et al., 2022). The authors reported that Twitter data showed 98% efficiency and accuracy compared to alternative data. Hu, Talamadupula, and Kambhampati (2013) added that Twitter as a medium of communication is widely perceived as a means of expressing opinions, feelings, and beliefs to a broad audience. They added that Twitter reflects aspects of oral discourse similar to other forms of online media such as chatting and SMS. In addition, Ross and Bhatia (2019) noted that Twitter is like other forms of digital communication in which writers engage with their readers. Writers on Twitter use a wide range of linguistic and multimodal resources to reflect their identities and beliefs. They also noted that writers on Twitter can regulate the temperature of what they express using specific linguistic devices such as boosters and hedges.
Luzón (2023) found that academics use Twitter as a tool for sharing information, networking, and self-promotion, strategically employing various types of semiotic resources in their tweets to achieve their goals. Furthermore, Breeze (2019) and Zappavigna (2015) found that tweets posted on timelines have different styles conveyed through various linguistic resources, including emotional lexis. Tardy (2023) described tweets as an intriguing, complex digital means of communication due to the ambiguity of the audience and of writers' intentions. As a result, exploring Twitter discourse can potentially deepen our understanding of digital communication. The literature indicates that Twitter is a rich data source that can reveal how writers construct meaning and engage with the putative reader (Breeze, 2019; Tardy, 2023). Because of the current study's focus on interpersonal meaning, appraisal theory (Martin and White, 2005), situated within Halliday's (1978) model of Systemic Functional Linguistics, was selected as the framework for analysis. Martin and White (2005) explained that in developing appraisal theory, they attended to and extended what was traditionally known as (1) affect, (2) modality, and (3) intensification and vague language. They extend traditional accounts of 'affect', which refer to how writers/speakers overtly evaluate states of affairs and entities positively or negatively, to the more indirect means writers use to indicate evaluative stance and attitude. Additionally, appraisal theory goes beyond revealing the writer's/speaker's values and attitudes to highlighting the relations of rapport and authority between the producer of the text and its receivers. In terms of 'modality', appraisal theory focuses not only on the writer's/speaker's certainty but also on the positioning of the textual voice in relation to other voices in the text.
In their appraisal account, Martin and White (2005) focus on meanings that provide writers and speakers with the means to criticize, value, reject, accept, and challenge other positions. They also attend to how writers and speakers increase or decrease the force of their propositions. Martin and White (2005) noted that appraisal theory includes three subsystems: (1) attitude, (2) graduation, and (3) engagement.

The first subsystem is 'attitude', which is concerned with the writer's feelings, evaluation of things, and judgement of behavior. Attitude refers to the resources writers use to evaluate phenomena either positively or negatively. It is divided into three categories: (1) affect, (2) appreciation, and (3) judgement. Affect is when writers express their feelings (e.g., ChatGPT frightened me). Appreciation is the assessment of objects rather than people's behavior (e.g., AI is such a fraud), and judgement is concerned with moral judgements of human behavior (e.g., Authors using ChatGPT are cheaters). Second, 'graduation' deals with the values by which writers decrease or increase the interpersonal impact of their propositions (e.g., ChatGPT seems pretty good). Third, 'engagement' attends to the various voices and opinions in discourse and to how the author's voice is positioned in relation to the propositions conveyed by the text (e.g., we need to remain vigilant and resist the AI hype). Appraisal theory integrates different aspects of evaluation, such as how attitude is expressed and its source, how writers position themselves and others, and what values authors hold. The appraisal system takes into consideration many kinds of authorial voices and various types of stances. It considers the vast array of resources used by writers and speakers when evaluating phenomena, interacting with their readers and listeners, and describing their audience's position and point of view.
Applying appraisal theory to the study of evaluative stance allows researchers to answer questions about the ways and means by which evaluative stance is encoded. Therefore, the three subsystems of appraisal are explored in the current study of evaluative stance to answer the research questions. As ChatGPT continues to evolve, several studies have explored the perceptions and sentiments of English-speaking users on X (Haque et al., 2022; Huh, 2023). Other studies have examined the topics and sentiments of Arabic-speaking users of X towards ChatGPT (Al-Khalifa et al., 2023). Most studies focused on exploring either English or Arabic written discourse. While the findings are valuable, there is a lack of research on the similarities and differences in how writers of the two languages construct interpersonal meaning, express authorial stance, and build relationships with their putative readers on X. This reveals an important research gap that needs to be addressed. It is important to provide detailed explanations of (1) how writers express their attitudes through features of affect, judgement, and appreciation on X about ChatGPT in Arabic and English, (2) how Arabic and English writers construct their voices through the use of evaluative language, and (3) the assumptions that writers in English and Arabic make about the values and beliefs of their readers. The findings will provide valuable insights into the language resources writers in English and Arabic employ to express their views of ChatGPT on X. Additionally, the findings will deepen our understanding of how writers interact with their readers and the assumptions they make about them. Such findings may help improve academic writing courses for both languages and shed light on some recent ChatGPT-related concerns relevant to both communities.
Therefore, the study aims to answer the following questions:

1. What features of attitude (affect, judgement, and appreciation) are disseminated in the discussion about ChatGPT on X by writers in English and Arabic?
2. How is the authorial voice constructed through the use of evaluative language on X by writers of English and Arabic when discussing ChatGPT?
3. What assumptions do the writers make about the values and beliefs of their putative audience on X?

Methodology

According to Guba (1990), the constructivist/interpretive paradigm is based on the assumption that knowledge is socially constructed. Therefore, a constructivist/interpretive paradigm was chosen for the present study. The study involves a detailed qualitative analysis of two sets of tweets about ChatGPT, the first written in English and the second in Arabic; the two sets were also explored quantitatively through statistical comparison of frequencies to objectify the qualitative findings. The writers of the English tweets were not the same as the writers of the Arabic ones. The selection criterion was all available tweets about ChatGPT posted in Arabic and English from 1 December 2022 to 30 October 2023. The sizes of the two datasets were unequal, so all frequencies were normalized: raw counts were converted to percentages within each dataset (i.e., occurrences were counted separately and then frequencies were adjusted based on the total number of tweets in each set). The two sets of data (234 tweets in English and 223 tweets in Arabic) were cleaned, and duplicates were removed. The resulting data (192 tweets in English and 161 tweets in Arabic) were then imported into the UAM Corpus Tool for processing. The UAM Corpus Tool is a free, open-source, downloadable program for annotating each text in a corpus at multiple levels (O'Donnell, 2011).
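Because the two datasets were of unequal size, raw counts were normalized as within-dataset percentages before comparison. A minimal Python sketch of such a conversion is given below; this is illustrative only, not the authors' actual procedure, and the category counts shown are hypothetical.

```python
def normalize(counts):
    """Convert raw category counts to percentages of the dataset total."""
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

# Hypothetical raw counts of attitudinal instances per subsystem;
# the real counts come from the UAM Corpus Tool annotation, not from here.
english_counts = {"affect": 19, "judgement": 12, "appreciation": 69}
arabic_counts = {"affect": 33, "judgement": 14, "appreciation": 63}

print(normalize(english_counts))
print(normalize(arabic_counts))
```

Normalizing within each dataset in this way makes the English (192 tweets) and Arabic (161 tweets) samples directly comparable despite their different sizes.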
The UAM Corpus Tool has a number of functions that facilitate automatic and manual annotation. It also provides descriptive and contrastive statistics of the data, as well as essential information such as word count and lexical density. The data were exported from X into the UAM Corpus Tool for qualitative and quantitative analysis. For coding reliability purposes, the author recoded 20% of the data one month after the first coding to ensure consistency and resolve discrepancies.

Results

The first research objective was to explore how writers express their attitude on X about ChatGPT in Arabic and English. To achieve this objective, the dissemination of affect, judgement, and appreciation on X about ChatGPT was explored. The findings indicated that attitudinal instances were more frequent in the Arabic data (42%) than in the English data (34%), as indicated in Fig. 1.

Fig. 1 Appraisal resources in the data.

In addition, the subsystem of 'appreciation' was the most dominant attitudinal type in both sets of data on ChatGPT posted on X, as indicated in Table 1.

Table 1 Attitudinal resources in tweets about ChatGPT.

Appreciation is concerned with evaluating objects, phenomena, processes, and states of affairs, but not human behavior. Martin and White (2005) noted that appreciation institutionalizes feelings as propositions and presents the evaluation as a quality of a phenomenon or object rather than of behavior. They added that appreciation helps writers objectify their involvement in the text. As a result, the dominance of appreciation in both datasets gave them a sense of objectivity. In addition, Martin and White (2005) pointed out that appreciation involves less subjective involvement of the writer than affect.
Therefore, when writers encoded values of appreciation in their texts, they were depersonalizing and objectifying their evaluations, as presented in the following examples:

Martin and White (2005) noted that 'affect' is also a subsystem of attitude, used to express negative or positive emotions (e.g., happy vs. sad) or emotions expressed physically (e.g., laugh vs. cry). In the present study, writers in Arabic employed more affectual values in their tweets (30%) than writers in English (19%), as indicated in Table 1. The authorial voice expressed affect in the English and Arabic tweets through words such as 'scary' and 'استحقر - despise', as indicated in the examples below:

According to Martin and White (2005), 'judgement', the third type of attitude, explores the extent to which writers judge behaviors, that is, what people believed, said, or did, according to certain values. Under judgement, behavior can be assessed as normal or odd, moral or immoral, truthful or untruthful, and so on. As shown in Table 1 above, judgement values were the least frequent attitudinal values encoded in both sets of tweets on the topic of ChatGPT. Expressions of attitude can be positively or negatively encoded depending on the authorial voice. In the present data, the attitudinal resources the writers employed in both languages were clearly more positive than negative on the topic of ChatGPT, as illustrated in Table 2.

Table 2 Positive and negative attitude posted on Twitter about ChatGPT.

Moreover, 'graduation' is the second subsystem of appraisal, concerned with grading feelings, judgements, and assessments. It is the toning-down or toning-up of meanings; graduation mechanisms are those writers use to tone down or tone up the force or focus of their utterances (Martin and White, 2005).
In the present data, graduation locutions accounted for 28% of the appraisal features in the Arabic data and 24% of the English data on the topic of ChatGPT on Twitter, as indicated in Fig. 1. The findings showed that both groups of writers tended to employ lexicogrammatical resources to up-scale attitudinal meanings more than to down-scale them when writing about ChatGPT on Twitter, as presented in Table 3.

Table 3 Up-scaling and down-scaling attitudinal expressions in tweets.

When writers on Twitter discussed ChatGPT, they mostly up-scaled and sharpened their expressions. Resources from the category of sharpen (e.g., real threat) were used to indicate the author's investment in the value position and to align the readers with the meaning being conveyed. For example:

The second research objective was to explore how different authorial voices were constructed through the use of evaluative language on X by English and Arabic writers discussing the topic of ChatGPT. This was explored through the framework of engagement. Engagement refers to the linguistic resources the language makes available for writers to take a stance towards the positions and values in the text, with consideration of their readers. The framework of engagement provides a systematic account of how writers linguistically voice their values and how they position their readers against the backdrop of other voices and points of view. As noted by Martin and White (2005), all verbal communication (i.e., engagement) is considered 'dialogic', since writers encode their point of view and voice their values towards what they write whenever they write. Martin and White (2005) distinguished between contractive propositions and expansive ones: when an utterance allows for dialogical alternative voices and positions, it is dialogically expansive; when it restricts or challenges alternative positions, it is dialogically contractive.
In the current study, writers in Arabic were more dialogically expansive than writers in English, whereas tweets posted in English were slightly more contractive than expansive, as presented in Table 4.

Table 4 Frequency of contractive vs. expansive resources on Twitter.

Writers who posted in English about ChatGPT were more contractive (51%) than expansive (49%). This indicates that they preferred to restrict the scope of alternative positions and contract the dialogical space in their tweets about ChatGPT rather than open it up. However, the writers posting on Twitter in Arabic were more expansive (79%), indicating that they preferred to expand the dialogical space and open it up for discussion when engaging with their readers. Furthermore, the resources of dialogic contraction are divided into two categories. The first subcategory, 'disclaim', refers to meanings that reject certain dialogic alternatives or present them as not applying, while 'proclaim' involves explicit authorial interventions that act to challenge and confront dialogic alternatives in the ongoing discussion. The frequency of the resources of dialogic contraction employed by writers in English and in Arabic is illustrated in Table 5.

Table 5 Engagement resources in the data about ChatGPT.

As indicated in Table 5, both groups of writers employed more resources of disclaiming than of proclaiming. Disclaim includes the two subtypes of deny and counter, which are both dialogistic in the sense that they invoke a specific proposition and then present it as not holding. However, counter differs from deny in that counter also replaces a proposition with an alternative one. The English data had more instances of countering (28%) than of denying (17%), while the Arabic data contained no instances of countering. In addition to disclaiming, the resources of dialogic contraction include proclaiming.
Proclaim includes formulations that represent propositions as valid and warrantable, with the authorial voice setting itself against, and ruling out, alternative positions. The difference between disclaim and proclaim is that the latter does not directly reject or overrule propositions but instead limits their scope and suppresses them. In the present data, proclaim accounted for 6% of the tweets posted in English and 4% of those posted in Arabic. This means that both groups of writers preferred not to make authorial interventions that limit the scope of alternative propositions when tweeting about ChatGPT. Consider the following examples of how engagement resources were employed in Arabic and English tweets about ChatGPT:

Regarding dialogic expansion, the Arabic data was more expansive than the English data. Expansive resources are divided into two major categories: entertain and attribute. The first type of dialogistic expansion is 'entertain', which refers to meanings where the authorial voice represents a proposition as one possible position among others; hence, it entertains other possible dialogic alternatives and makes space for them. Formulations of entertain were the most common type of dialogistic expansion in both datasets, accounting for 36% of the English data and 48% of the Arabic data. The second type is attribution, which falls into two subtypes: acknowledge and distance. The first subtype, 'acknowledge', refers to meanings where a proposition is attributed to an external voice and disassociated from the authorial voice. Formulations of acknowledge accounted for 13% of the English data and 31% of the Arabic data. However, there were no instances of distance in either set of data.
The findings show that the authorial voice in the Arabic tweets tended to expand and open up the dialogical space for discussion, whereas the authorial voice in the English tweets preferred to restrict the scope of different positions and contract the dialogical space. The third research objective was to explore the assumptions that the writers made about the values and beliefs of their putative audience on Twitter when discussing the topic of ChatGPT. According to the dialogic perspective of appraisal, writers not only announce and express their attitudes and positions but also use signals that indicate whether the authorial voice is in alignment with the addressee or putative reader. Writers come into the text with imagined or envisaged readers who are considered to be either in agreement or disagreement, or in need of being convinced of the writer's point of view. Therefore, engagement formulations signal both the position of the authorial voice towards the propositions being discussed in the colloquy and, at the same time, how the putative reader is positioned in the text. In the current study, the relations of alignment, misalignment, and solidarity between the writer and the putative reader in the tweets posted about ChatGPT were explored, and the findings from both sets of data are presented in Table 6.

Table 6 The authorial voice – putative reader relationship in tweets.

As demonstrated in Table 6, the authorial voice in the English tweets viewed 36% of its putative readers as divided in their positions towards the topic under discussion. Similarly, the authorial self in the Arabic tweets viewed 48% of its readers as divided. This indicates that the authorial voice construed a heteroglossic backdrop for the ongoing discussion: propositions were being advanced that were potentially in disagreement with other dialogistic alternatives.
Therefore, the textual voice assumed that the putative readers might be divided over the matter under discussion. As a result, the authorial voice maintained solidarity with both types of readers, those holding similar positions and those holding alternative propositions, in both sets of tweets through formulations of entertain. Additionally, the authorial voice in the English and Arabic tweets viewed 22% and 19%, respectively, of its reader audience as disaligned with its propositions. The authorial voice acted against the beliefs of the putative addressee and assumed greater expertise in the proposition under discussion by employing resources of deny. Hence, the authorial voice acted in a corrective manner to address a misconception on the part of the reader. However, pronounce formulations accounted for only 5% and 2% of the English and Arabic tweets, respectively, which indicates that the textual voice avoided acting in a confrontational manner and did not make explicit interventions, as such formulations of pronounce might pose a clear threat to the putative reader's solidarity with the authorial voice. Also, writers of English tweets assumed a relation of alignment with 29% of their putative readers by using resources of concur, endorse, and counter. By contrast, the Arabic writers assumed that only 2% of their readers were aligned and would share the same point of view. In the Arabic tweets, the writers assumed that 31% of their readers held a neutral position, while the writers of English tweets viewed 13% of their audience as neutral. This finding shows that the writers viewed these readers as neither aligned nor disaligned and remained aloof from any expectations of solidarity with their audience.
This was accomplished through acknowledgement resources, which allowed external propositions to enter the text without establishing any specific relationship with the putative reader.

Discussion

The study aimed to understand how writers of English and Arabic on X construct interpersonal meaning, take stance, and maintain a relationship of solidarity with their readers when discussing ChatGPT. Thus, appraisal theory (Martin and White, 2005), situated within Halliday's (1978) model of Systemic Functional Linguistics, was selected as the framework for analysis. In the data, the Arabic writers employed more attitudinal instances when discussing ChatGPT on X; they were more concerned with evaluating and showing emotions towards ChatGPT than the writers in English. Writers in both Arabic and English similarly avoided making judgements of people using ChatGPT or of AI developers. Additionally, they focused mainly on evaluating objects, phenomena, processes, and states of affairs, but not human behavior. This was achieved through the use of 'appreciation' resources, which allowed them to objectify their involvement in the text and minimize subjectivity (Martin and White, 2005). This finding supports results reported by Al-Khalifa et al. (2023) on Arabic speakers' discussions on X of topics relevant to ChatGPT; they reported that the Arabic speakers were more objective than emotional in their discussions. In both sets of data, the writers expressed positive attitudes when discussing ChatGPT on X. The findings of the present study are similar to those reported by Haque et al. (2022) and Al-Khalifa et al. (2023), who found that the sentiments of early adopters of ChatGPT on Twitter were mostly positive. In the present study, writers in English on X used expressions such as 'a masterpiece, amazing, we love, excited, thrilling, game changer, mind-blowing, astonishing, and incredibly impressed'.
In the Arabic tweets, writers used expressions such as 'أعجبني - I liked it, أبهر الناس - amazed people, أقوى أداة - the strongest tool, مشروعهم المذهل - their amazing project, and قدرات استثنائية - exceptional capabilities'. There were also some negative expressions, such as 'doomsday, crisis, chaos, untrustworthy, fake, incredibly terrified, lights out for all, and freaked out' in the English tweets. Similarly, there were negative expressions in the Arabic tweets, such as 'المخيفة - scary' and 'صرخات قُرْب نهاية العالم - cries of the approaching end of the world'. Interestingly, writers in both sets of data tended to increase the intensity of their expressions when discussing ChatGPT on X. They employed various lexicogrammatical resources to grade and tone up their feelings, judgements, and evaluations. The writers sharpened their expressions to show their investment in the discussion and alignment with their reading audience. A further aim of the present study was to explore how writers linguistically voice their values and points of view and take stance on X when discussing the topic of ChatGPT. As noted by Martin and White (2005), all verbal communication (i.e., engagement) is considered 'dialogic', since writers encode their point of view and voice their values towards what they write whenever they write. On X, writers in Arabic were more expansive than writers in English: they allowed for dialogical alternative voices, whereas writers in English were more dialogically contractive, indicating that they preferred to restrict the scope of alternative positions and contract the dialogical space in their tweets about ChatGPT rather than open it up. In both sets of data, the writers preferred linguistic resources that allowed them to reject alternative points of view and present them as not applying, but they avoided explicit authorial interventions that challenge and confront dialogic alternatives in the ongoing discussion.
They preferred not to directly reject or overrule propositions but instead tended to limit their scope and suppress them. The third research objective was to explore the assumptions that the writers made about the values and beliefs of their putative readers on X when discussing the topic of ChatGPT. From the dialogic perspective of appraisal, writers not only announce and express their attitudes and positions but also use signals that indicate whether the authorial voice aligns or disaligns with the addressee or reader. It is assumed that writers come into the text with imagined or envisaged readers who are considered to be either in agreement or disagreement, or in need of being convinced of the writer's point of view. Therefore, engagement formulations are used to signal not only the position of the authorial voice towards the propositions being discussed in the colloquy but also how the putative reader is positioned in the text. In the present study, the authorial voice in both the English and Arabic tweets viewed the majority of its putative readers as divided over the matter under discussion (i.e., ChatGPT). As a result, the authorial voice maintained solidarity with both types of readers, those holding similar positions to the writers and those holding alternative propositions, in both sets of tweets through formulations of 'entertain'. Additionally, the authorial voice in both sets of tweets viewed some of its reader audience as disaligned with its propositions. The authorial voice acted against the beliefs of the addressee and assumed greater expertise in the proposition under discussion by employing resources of 'deny'. Hence, the authorial voice acted in a corrective manner to address a misconception on the part of the reader.
However, the textual voice avoided acting in a confronting manner and did not explicitly place interventions, as such formulations of “pronounce” might pose a clear threat to the putative reader’s solidarity with the authorial voice.

Interestingly, the writers of English tweets assumed a relation of alignment with their putative readers, while the writers in Arabic did not assume alignment with their readers on the topic of ChatGPT. Instead, writers in Arabic assumed that their readers held a neutral position. This indicates that they viewed their readers as neither aligned nor disaligned and remained aloof from any expectations of solidarity with their audience. The writers in Arabic entered the discussion with no specific relationship with the putative reader. In sum, writers of both English and Arabic tweets assumed that the majority of their readers were divided, while others were disaligned. Additionally, the writers in English assumed alignment with their readers, while the writers in Arabic did not; instead, they viewed their readers as holding a neutral position and accordingly assumed no relationship with them. These findings are consistent with those of Alghazo et al. (2024), Al-Khalifa et al. (2023), and Haque et al. (2022), who reported that writers in Arabic argued more on emotional grounds and employed more attitudinal instances than writers in English when discussing the topic of ChatGPT. These differences may be due to the linguistic nature of each language and the differences between the writers, as indicated by Alotaibi (2021).

Conclusion

The present study aimed to contribute to understanding how interpersonal meaning is constructed cross-culturally in intertextual digital spaces such as X. Tweets posted in English and Arabic on the topic of ChatGPT were explored from the functional perspective of appraisal theory (Martin and White, 2005).
The findings indicated that writers in Arabic argued more on emotional grounds and employed more attitudinal instances than writers in English when discussing the topic of ChatGPT. In both datasets alike, the writers were more objective and avoided judgements of behavior; they mainly focused on evaluating objects, phenomena, processes, and states of affairs through the use of “appreciation” resources. This finding supports the results reported by Al-Khalifa et al. (2023) on Arabic speakers’ discussions on X, which found that they were more objective than emotional. Similar to the findings of Haque et al. (2022) and Al-Khalifa et al. (2023), in the present study both sets of writers expressed positive attitudes when discussing ChatGPT on X. They also increased and sharpened the intensity of their expressions by employing various lexicogrammatical resources of graduation.

The results also showed that writers in English were slightly more contractive (51%) than expansive (49%), indicating a preference to restrict the scope of alternative positions and contract the dialogical space in their tweets about ChatGPT. Their Arabic counterparts posting on X, however, preferred to expand the dialogical space, allowing for alternative voices. Writers in both languages preferred not to show authorial interventions that limit the scope of alternative propositions when discussing ChatGPT.

In terms of the writer-reader relationship, in both sets of tweets the authorial voice maintained a relationship of solidarity with both types of readers, those holding similar positions and those holding alternative propositions. Writers of both English and Arabic tweets assumed that the majority of their readers were divided, while others were disaligned. English writers positioned their readers as in alignment with them, while Arabic writers did not and instead positioned their readers as holding a neutral view.

The analysis reveals several findings with implications for cross-cultural communication and writing instruction. First, writers should be made aware of the linguistic resources available to construct different kinds of meaning: meanings that allow them to evaluate information objectively, establish a difference while allowing space for their own propositions, and at the same time maintain solidarity with their readers. By doing so, writers can gain insights into the strategic and rhetorical nature of argumentation, whether in academic writing or in written communication on vastly networked digital platforms, such as X and blogs. Second, cross-cultural nuances in academic writing discourse should be acknowledged, and writers should be equipped with the linguistic skills that enable them to adapt their writing to culturally diverse contexts. Third, equally important is equipping writers with the linguistic resources that allow them to write constructive arguments and objectively navigate and engage with opposing propositions. Fourth, in academic writing courses, learners should be guided on how to avoid relying on excessive intensifiers, how to appropriately express conviction, and how to strengthen their arguments. Fifth, writers should be prepared to anticipate objections and potential disagreements with their readers and to tailor their arguments accordingly. Finally, since using social media in ESP courses can be fun and motivating for learners (Tardy, 2021), exposing writers in ESP courses to the lexicogrammatical options employed on social media platforms and the semantic differences they pose can increase their awareness of the available rhetorical patterns and of the linguistic means of evaluation, self-presentation, argumentation and persuasion.

The present study is, however, limited in scope and time.
Exploring more tweets over a longer period is recommended to expand our understanding of cross-cultural interpersonal meaning in digital spaces when discussing new trends and phenomena. Additionally, the study did not consider user demographics such as gender, location or profession, which could influence engagement and how stance is constructed. Also, the analysis focused on the linguistic content and did not examine multimodal elements, such as accompanying images, which play a role in shaping meaning on digital platforms.