Jimmy Wales describes himself as a “pathological optimist.” And yet, when the co-founder of Wikipedia spoke with TIME in October, he still seemed somewhat surprised that his online encyclopedia actually worked. “Wikipedia is very trusting, in a way that always seemed a bit crazy,” Wales says. If you think about the chaos of social media, Wikipedia’s model of allowing anyone to edit any entry seems “completely insane,” he says.

We’re speaking because Wales has just penned his first book, The Seven Rules of Trust, which tries to distill what Wikipedia and a few other bright corners of the internet—Wales cites Airbnb, Uber, and eBay—can teach us about rebuilding trust in a world awash in skepticism. Since Wikipedia’s launch in 2001, trust in politicians, mainstream media, and “to some extent each other” has plummeted, Wales says—with consequences extending beyond political deadlocks. Wales, 59, was friends with Jo Cox, the British Labour Member of Parliament who was murdered in 2016 by a far-right extremist days before the Brexit referendum. He believes the rise of politically motivated violence is “a natural result of this feeling of a complete breakdown of societal norms and of the idea of trust—of being able to say, ‘Look, I disagree with you, but I trust that we can have a dialogue and we’ll find a compromise and we can move forward,’” he says. And yet, “Wikipedia has gone from being kind of a joke to one of the few things people trust.”

Lately, though, that breakdown of trust has started nipping at Wikipedia’s heels.
Billionaire Elon Musk, once a big fan of Wikipedia, has turned on the encyclopedia, as have White House AI and crypto czar David Sacks, conservative commentator Tucker Carlson, and even Wales’ estranged co-founder Larry Sanger—all of whom have claimed Wikipedia is biased.

In October, the day before Wales published his book, Musk released a Wikipedia rival called Grokipedia, which he said used his AI chatbot Grok to generate entries. The AI-driven encyclopedia currently has more than 885,000 articles, many of which appear very similar to their Wikipedia counterparts. While Grokipedia is dwarfed by Wikipedia’s more than 7 million English-language articles, Musk said in a post on his social media platform X that Grokipedia will exceed Wikipedia by several orders of magnitude in breadth, depth, and accuracy. Musk has been critical of Wikipedia for some time, calling it “Wokipedia” and, in 2023, offering to give the platform, which is overseen by the nonprofit Wikimedia Foundation, $1 billion if he could rename it “Dickipedia.” Wales told Bloomberg in October that Musk’s accusations of bias are “not true,” adding, “A better message is to say, if you feel like Wikipedia has got some bias, encourage people to come and participate—people who agree with you. Don’t paint us as … crazy left-wing activists or something. We aren’t.”

Early responses to Grokipedia have split along familiar lines. Musk fans have lauded Grokipedia for having “no human bias and no errors” and for its “nuance and detail” in entries on topics like George Floyd’s death. (Grokipedia’s article foregrounds Floyd’s criminal record in its opening lines, mentioning his murder by a police officer only later.) Critics, meanwhile, note that articles about Musk and his companies are longer than their Wikipedia counterparts yet omit unflattering details. Unlike Wikipedia, Grokipedia can’t be directly edited by users.
They can inspect the sources and submit correction suggestions, but these aren’t debated on public talk pages or decided by human moderators the way Wikipedia’s are. They are instead processed by Grok, a version of the same AI chatbot that made antisemitic statements after an update in July, forcing xAI to apologize and deactivate the update.

Wales’ response to all this? “I don’t think we’re about to see fragmentation in online encyclopedias. Wikipedia will continue to strive to be high quality and neutral,” he says. “If Elon makes an encyclopedia skewed to his world view, I’m sure it will have some traffic but it won’t be anything like Wikipedia.”

Wales seems keenly aware of Wikipedia’s shortcomings. His book revisits infamous episodes like the time an online troll used the site to falsely implicate journalist John Seigenthaler in the Kennedy assassinations. Wales writes that governments, activists, and ideologues have sought to use the platform’s editing tools to push their worldviews. But the site’s continued growth suggests these interests haven’t won out over the voluntary army of “Wikipedians,” he says. “The fact that Wikipedia is still massive, more popular than any newspaper, is partly because we try really hard—not perfect for sure—to stick to the facts and to give transparency,” Wales says. “You can see where the information came from. You can click on it and check.”

Wales himself waded into an editing conflict over the site’s entry titled “Gaza genocide” on Nov. 2, writing on a page for discussing edits that the article “fails to meet our high standards” for stating in Wikipedia’s voice that Israel is committing genocide in Gaza. He called it “a particularly egregious example” of the site’s broader neutrality issues. Wales’ comments prompted pushback from some editors. “Why should the opinions of the largely impartial U.N. and human rights scholars be weighed equally to the obviously partisan opinions of commentators and governments?” one commenter asked.
“Because that’s what neutrality demands,” Wales responded. “Our job, as Wikipedians, is not to take sides in that debate but to carefully and neutrally document it.” (The Wikimedia Foundation said in a statement that even as co-founder, Wales is just “one of hundreds of thousands of editors, all striving to present information, including on contentious topics, in line with Wikipedia’s policies.”)

Grokipedia isn’t the only AI-driven threat to Wikipedia. Some 65% of the nonprofit’s most server-straining traffic now comes from bots, some of which scrape the site to feed chatbots for training. Instead of clicking through to Wikipedia, search-engine users can now often find their answers in AI-generated summaries—which are sometimes wrong. That’s if they don’t go straight to ChatGPT or Claude. Wales says all of this means islands of human-generated content like Wikipedia “become more important than ever.” He says his principles of trust are just as relevant to AI developers, “because every time you get an AI answer and find out that the AI hallucinated and just made that up, it reduces your trust.”

That’s where the “real world” comes in. Part of Wales’ pitch is that most of us already practice trust in “very routine ways,” such as getting into a rideshare or sharing an elevator with strangers. He points to Braver Angels, a U.S. group that hosts in-person conversations between people with opposing politics. Participants often emerge “a little more understanding … a little more ready to think about compromises,” Wales says. The challenge is designing institutions and online spaces that tap into those impulses. Wikipedia’s collaborative culture, at its best, is a web version of that: slow, structured, and imperfect.

And for internet interactions, Wales’ best advice is disarmingly simple. Direct your attention toward activities that build trust. Audit your feeds.
“If you find yourself spending too much time using social media and being fed information that you don’t trust, then stop doing that,” he says. He offers one specific nudge: delete X from your phone.