Wikipedia's volunteer editors have rejected founder Jimmy Wales' proposal to use ChatGPT for article review guidance after the AI tool produced error-filled feedback when Wales tested it on a draft submission. The ChatGPT response misidentified Wikipedia policies, suggested citing non-existent sources, and recommended using press releases despite explicit policy prohibitions. Editors argued that automated systems producing incorrect advice would undermine Wikipedia's human-centered model. The conflict follows earlier tensions over the Wikimedia Foundation's AI experiments, including a paused AI summary feature and new policies targeting AI-generated content.