The AI thread

The mother of a 12-year-old who remains in hospital after the Feb. 10 shooting alleges the tech company OpenAI failed to alert authorities to the shooter's violence-related chat prompts.

The claim was filed in B.C. Supreme Court on Monday on behalf of Gebala by her mother.

It alleges that the company designed its chat tool, ChatGPT, in such a way that there were risks users "would become psychologically and socially dependent" upon it.

The lawsuit states that the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

This is a first, at least here, but I really hope they toast Sam & co. as much as they can.
 

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

Some excerpts from the article:

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc.) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AIs grading AI-generated homework – “a conversation between two robots”.

Professors said they resorted to oral interrogations, handwritten notebooks, and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI. 😅

 
One of the early posts in this thread was about the working conditions of the people labeling scraped data for AIs. Things are starting to come to a head:

 
“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

I'm seeing this to some extent at work already. I've had to argue with and correct engineers who used Claude Code to analyze situations, produced faulty findings, and then tried to implement those faulty findings. Even when there is something valuable there, they are less willing or able to take the next step and say "well, if X is true, and we want Y long-term, we can do Z to solve this and push us towards that long-term goal." You know, the bread and butter of engineering things that don't fall over in a stiff breeze.

It feels like Idiocracy is both right and wrong. It's not about "smart people don't breed enough", it's "people are willing to sacrifice themselves on the altar of convenience".

Report: Creating a 5-second AI video is like running a microwave for an hour

I'll admit, I've basically had to ramp up on Claude Code in my own time.

I gave it the task of doing the tedious part: taking the output of a dead code analysis tool and seeing how much could actually be stripped. Hit the 5-hour-window usage cap in 30 minutes. Took what I learned, built up a "skill", and tried again later. Hit the usage cap in 30 minutes again. This is on the $20/month tier. I feel like I'd get more value subscribing to Lightroom again.
 

Unfortunately this is always where this was going … the newest version of DLSS is not just filling in to help render scenes at higher resolutions or making additional frames to smooth out frame rates, but now completely changing artistic intent to make textures that were never there for “realism”.
 

Unfortunately this is always where this was going … the newest version of DLSS is not just filling in to help render scenes at higher resolutions or making additional frames to smooth out frame rates, but now completely changing artistic intent to make textures that were never there for “realism”.
Eh, in games artistic intent usually also includes motion blur, film grain, and chromatic aberration, all features folks usually turn off, so this probably won't be any different.
 
Eh, in games artistic intent usually also includes motion blur, film grain, and chromatic aberration, all features folks usually turn off, so this probably won't be any different.

Eoof, could not disagree more. You can see the pictures and the descriptions: Nvidia is basically creating a completely different image off of the barest hints in the original. I mean, you say you turn off those things, but you'd want this? Weird. This is way more intrusive than anything you just mentioned.
 
Eoof, could not disagree more. You can see the pictures and the descriptions: Nvidia is basically creating a completely different image. I mean, you say you turn off those things, but you'd want this? Weird. This is way more intrusive than anything you just mentioned.
I watched the videos. Nvidia claims it is just changing the way the lighting works/is rendered. I guess we have to wait to see if that is a lie. Depending on the level of backlash, this could end up like Reflex 2 or the 4080 12GB.

NOTE: I'm not saying I want this; I was just pointing out, poorly, that artistic intention usually includes features that almost everyone turns off. Quite frankly, I'm unsure how else Nvidia could improve lighting (RT/PT) performance without relying on these tricks, short of drawing 1200W.
 
I watched the videos. Nvidia claims it is just changing the way the lighting works/is rendered. I guess we have to wait to see if that is a lie. Depending on the level of backlash, this could end up like Reflex 2 or the 4080 12GB.

NOTE: I'm not saying I want this; I was just pointing out, poorly, that artistic intention usually includes features that almost everyone turns off. Quite frankly, I'm unsure how else Nvidia could improve lighting (RT/PT) performance without relying on these tricks, short of drawing 1200W.
I understand what you're saying, but even just changing how the lighting works and is rendered basically changes the entire tone and even art style. It changes how colors are rendered, it adds features to faces that may or may not make sense, and, as said in the article, it homogenizes these games to all look and feel the same way. Part of the problem is that this is also how they're selling game advancements. Nvidia's big claim is that they'll be doing ray/path rendering a million times faster than Pascal (which didn't even have ray tracing) by using this neural rendering technique, i.e. actual raster performance is likely hitting a standstill.

Note, I'm not saying all DLSS is bad, but I gotta agree with the opinion article, this is definitely AI slop territory now.
 
I understand what you're saying, but even just changing how the lighting works and is rendered basically changes the entire tone and even art style. It changes how colors are rendered, it adds features to faces that may or may not make sense, and, as said in the article, it homogenizes these games to all look and feel the same way. Part of the problem is that this is also how they're selling game advancements. Nvidia's big claim is that they'll be doing ray/path rendering a million times faster than Pascal (which didn't even have ray tracing) by using this neural rendering technique, i.e. actual raster performance is likely hitting a standstill.

Note, I'm not saying all DLSS is bad, but I gotta agree with the opinion article, this is definitely AI slop territory now.
Twitter (cause I'm not on Bluesky) and YT are very upset with DLSS 5 right now. It feels similar to the initial frame-generation backlash.

As an aside, it seems like it looks better if you don't have characters in the scene (AC: Shadows looks better with DLSS 5).
 
Eh, in games artistic intent usually also includes motion blur, film grain, and chromatic aberration, all features folks usually turn off, so this probably won't be any different.

Except those are choices made by the developer, with toggles made available to you by the developer. DLSS 5 here is taking the developer's rendered output as an input, and doing an additional pass on it with a generative AI model, to effectively replace it with something that isn't quite the same, but similar.

As others have pointed out in the online arguments, it's somewhat akin to the "make the character hot" takes floating around on social media, only more subtle and insidious as a result. Ultimately, a lot of the edits made by a filter like this come from similar training data, which isn't going to be great given the issues we already have around conventional standards of beauty being unattainable, airbrushed fantasies as it is. Their example of Grace is particularly concerning because it makes a number of small edits to the character: plumps up the lips, adds light makeup, changes the nose, draws in the skin a bit to make her look slightly underweight...

So even if a developer tries to avoid certain stereotypes, or is trying to show a character in a particular light for narrative purposes, this sort of generative AI can come in and undermine it. That's the real concern here... that fundamentally, this takes artistic control away through this opinionated auto-enhance filter layered on top.
 