GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation | HKS Misinformation Review
Academic journals, archives, and repositories are seeing an increasing number of questionable research papers clearly produced using generative AI. They are often created with widely available, general-purpose AI applications, most likely ChatGPT, and mimic scientific writing. Google Scholar...
misinforeview.hks.harvard.edu
Academic search engines, especially Google Scholar given the way it's set up, are becoming choked with papers either partially or wholly generated by GPTs. While the authors acknowledge legitimate uses of such tools, like non-English speakers improving their writing, this appears to go well beyond that, and such papers are especially concentrated in fields that affect public policy. That poses a huge risk, not just of the scientific community's communications getting overwhelmed, but to the public and policymakers who rely on them.

Not to mention the possibility, raised in an earlier post of mine, that as a greater percentage of AI-generated material gets included in future training sets, it poisons that data. Seemingly scientific papers written by AI, especially those that actually manage to get published, whether in fly-by-night journals or even reputable ones, will likely pass human curators, given that we've seen Onion articles and the like serve as the basis for training LLMs.