The AI thread

CNET is now labeled an untrusted source by Wikipedia, as both it and especially its parent company have been caught publishing AI-written articles.


Before the age of AI, Wikipedia editors already had to deal with unwanted auto-generated content in the form of spambots and malicious actors. In this way, editors' treatment of AI-generated content is remarkably consistent with their past policy: it is just spam, isn't it?

Particularly when we also consider that lawsuits like The New York Times v. OpenAI and Microsoft remind us that these so-called generative AIs are pretty much required to steal other people's work to function at all. At least when a regular thief steals an object, it still works. With generative AI, you can't even guarantee that the result will be accurate, especially if you already lack the expertise to tell the difference.
 

Look at how useless we've become as human beings — we can't even formulate an article on our own any longer, and this is just the beginning.
 
An interesting corollary to all of this is that I’ve increasingly seen accusations of articles being “AI-written” simply by virtue of the accusers not liking the content - especially movie reviews and even more especially game reviews. Many of those accounts are themselves created that same day and the posts could easily have been generated by AI.

Whether they were or not, I “look forward” to different AIs accusing each other of being AIs in the comments section of an article written by AIs. The Future!

As the article notes we’ve been dealing with this from human-manned spam and troll farms for years. In some ways it’s no different but the volume is likely to go up waaaaay higher.
 
The more AI trains on AI-generated data, the greater the “group think” and the less variable the outputs become.
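As a toy illustration (not a claim about any real training pipeline), you can see a version of this collapse by repeatedly fitting a distribution to samples drawn from the previous generation's fit — the spread tends to decay across generations:

```python
import random
import statistics

def collapse_demo(generations=200, sample_size=20, seed=42):
    """Toy 'model trained on its own outputs' loop.

    Each generation fits a normal distribution (mean, std) to samples
    drawn from the previous generation's fit. The fitted std is a
    biased-low estimator, so the spread tends to shrink over
    generations -- a crude analogue of outputs becoming less variable.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # "generation zero": human-like diversity
    stds = [sigma]
    for _ in range(generations):
        # Train on the previous generation's outputs only.
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)
        stds.append(sigma)
    return stds

history = collapse_demo()
```

The numbers (200 generations, samples of 20) are arbitrary; the point is only the direction of drift, not its speed.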

There was some discussion of this topic on Ars around Reddit and Tumblr negotiating deals for access to user data for AI model training. The goal of these deals, apparently, is access to data sets where you can tell whether content is "pre-LLM" or "post-LLM" and, at least in the short term, simply ignore any data recent enough to be known to carry LLM contamination. Backdated articles and the like may be enough of a concern that they want the raw data, where certain timestamps are unlikely to have been altered.
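The filtering idea itself is simple: pick a cutoff date (ChatGPT's public release is a common proxy — an assumption here, not something from those deals) and keep only records whose timestamps predate it. A minimal sketch:

```python
from datetime import datetime, timezone

# Assumed cutoff: roughly ChatGPT's public release. Records at or
# after this date are treated as potentially LLM-contaminated.
LLM_CUTOFF = datetime(2022, 11, 30, tzinfo=timezone.utc)

def split_pre_llm(records):
    """Partition records into (pre_llm, possibly_contaminated).

    Each record is a dict with a 'created_at' datetime. Note the
    sketch trusts the timestamps -- the thread's point is that
    backdated content makes exactly that assumption fragile, which
    is why raw platform data with hard-to-alter timestamps matters.
    """
    pre, post = [], []
    for rec in records:
        (pre if rec["created_at"] < LLM_CUTOFF else post).append(rec)
    return pre, post

posts = [
    {"id": 1, "created_at": datetime(2019, 5, 1, tzinfo=timezone.utc)},
    {"id": 2, "created_at": datetime(2023, 2, 14, tzinfo=timezone.utc)},
]
pre, post = split_pre_llm(posts)
```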
 
Good ole Capitalism at work. 🤔🔥
It’s true though. Part of it is ourselves — my own personal behavior included! — wanting “content” but not wanting (or, for some, not being able) to pay what it actually costs to generate that content, especially quality content. To be fair, the other part is less under consumer control: ad revenue being reduced and, perhaps even more importantly, centralized in all the wrong places. That’s why places like Patreon, or a merchandise store if you’re really big, are so much more valuable than the actual website or especially the YouTube/Twitch channel.
 

Merchandising isn't exactly a new thing though. This is just the next level of it.
 
Aye I was referring more to content creators and how small the revenue stream is from ad-support compared to merchandising, but yes major studios learned that lesson a long time ago - even compared to ticket sales, merchandising is king.

Yeah, I guess I'm just not that surprised that for smaller creatives, it's even more pronounced though. Places like YouTube are built on top of a system that undermines any attempt to collectively bargain for better cuts of the revenue generated.

It's a bit like a casino. The algorithm determines whether you win or lose, but YouTube picks up enough from everyone that they always win.
 
So not only are lawyers replacing themselves with Chatbots, Nvidia is now advertising that you can replace chip designers with them too!


How long before @Cmaier begins to take this personally? 🙃

On a completely unrelated note: scientists have been caught publishing papers written by Chatbots (Elsevier as a publishing house has been particularly bad, with sentences like "I don't have information on that, I am an AI language model" appearing in papers actually published in its various journals), and now people suspect that scientists are writing "peer" reviews with Chatbots too. Which may explain why sentences like "I am an AI language model" pass ... the AI doing the "peer" review goes "Really? Me too!" I suppose if it's being written by an AI and read/reviewed by an AI, it still is sorta peer review?
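Those papers were reportedly caught by the telltale boilerplate itself, and the crudest possible screen is just a phrase scan. The phrase list here is illustrative — it's not taken from any real screening tool, and such a scan only catches the laziest cases:

```python
# Illustrative LLM-boilerplate phrases (assumptions, not a vetted
# list). A scan like this can also false-positive on papers that
# quote such phrases deliberately, e.g. articles about LLMs.
TELLTALE_PHRASES = [
    "as an ai language model",
    "i am an ai language model",
    "i don't have access to real-time",
    "regenerate response",
    "as of my last knowledge update",
]

def flag_llm_boilerplate(text):
    """Return the telltale phrases found in `text`, case-insensitively."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

abstract = ("The results are significant. I don't have information on "
            "that, I am an AI language model, but the trend is clear.")
hits = flag_llm_boilerplate(abstract)
```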


I guess lazy PIs* are doing this instead of foisting the unwanted reviews off on postdocs and grad students? That's what we did in my day, and WE LIKED IT. Actually, it's not the worst practice if done intentionally, with mentorship from the professor, to help grad students and postdocs learn to review papers properly. In fact, that's one of the best ways to learn — but there the PI is still in the loop and credits the grad student/postdoc. Obviously the people stooping to this, going to a chatbot, weren't the mentoring kind before chatbots either; they were more likely the kind to just use their postdocs/grad students as cheap labor.

*PI - principal investigator, head of the lab, professor, etc ...
 
Eh, so far we have a couple guys sanctioned because AI did such a bad job lawyering, and this AI does part of the chip design humans weren’t doing anyway (making subtle modifications to the mask to improve yield). That’s called “DFM,” and we’ve always relied on software for that sort of thing.
 
I believe that's cuLitho? I think Jensen was saying they now also have a Chatbot involved in the chip design process itself, but as far as I can tell from the Anandtech keynote summary, Jensen didn't really go into that in any depth, and the one example put up was just a simple reference question, "what is a CTL?", and "write an example CTL test". So no clue what its actual (useful) capabilities are.

NVIDIA ChipNeMo, based on Llama 2 70B

 


I'm so excited. You, too, can have a computer design garbage as bad as Nvidia engineers :) I just tried ChatGPT:

[attached screenshot: ChatGPT's first attempt]
 
After giving it a bunch more direction, got something useful out of chatgpt:

[attached screenshot: ChatGPT's revised output]
 

So I don’t know the details of this deal (and apparently neither do the voice actors, which is not a good look for their union). But I can see AI voice acting being okay under specific circumstances. The main one I can think of, which I think has been mentioned in this thread by others: imagine a massive, sprawling RPG like Baldur’s Gate, where human writers and an actor have been paid for a huge number of lines to create a compelling character — one with an arc, or even multiple arcs, based on your choices in game. Then imagine being able to interact with said character by simply typing (or speaking!) your lines. Maybe you still have prompts to trigger their preset responses, but even so, the AI is trained to reply (and act) as the character, consistent with where they are in the story so far. Human creativity for both the voice acting and the lines (animation/mocap too) is still being paid for — a necessary element for creating a strong character, since so far AI has its limits — but the freedom of expression for the player goes up tremendously. There may be other examples where AI can extend human creativity in this space rather than be a shallow imitation of it (and the business/ethics of the former compared to the latter for writers, actors, and animators), but I think that’s the main one.

“Baldur’s Gate dev talks appeal of AI, but not as a replacement for human developers”
 
With everyone using AI voices (either fully generated or AI-enhanced), combined with outrage-face thumbnails on YouTube, it's become flat-out cringeworthy. It feels like we're losing our humanity when it comes to interacting with each other on any sort of digital platform.
 