CNET has now been labeled an untrusted source by Wikipedia after both it and, especially, its parent company were caught publishing AI-written articles.
AI-generated content and other unfavorable practices have landed longtime staple CNET on Wikipedia's list of blacklisted sources
Average readers aren't the only ones tired of sorting through AI "work"; Wikipedia editors have had it, too.
www.tomshardware.com
Before the age of AI, Wikipedia editors already had to deal with unwanted auto-generated content from spambots and other malicious actors. In that light, editors' treatment of AI-generated content is remarkably consistent with their existing policy: it's just spam, isn't it?
That's particularly true when lawsuits like The New York Times v. OpenAI and Microsoft remind us that these so-called generative AIs pretty much have to steal other people's work to function at all. At least when a regular thief steals an object, the object still works. With generative AI, you can't even guarantee the result will be accurate, especially if you lack the expertise to tell the difference.