The AI thread

The mother of a 12-year-old who remains in hospital after the shooting on Feb. 10 alleges the tech company OpenAI failed to alert authorities to chat prompts from the shooter related to violence.

The claim was filed in B.C. Supreme Court on Monday on behalf of Gebala by her mother.

It alleges that the company designed its chat tool, ChatGPT, in such a way that there were risks users "would become psychologically and socially dependent" upon it.

The lawsuit states that the company "had specific knowledge of the shooter's long-range planning of a mass casualty event," but "took no steps to act upon this knowledge."

This is a first, at least here, but I really hope they toast Sam & co. as much as they can.
 

‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI

Some excerpts from the article:

Most professors described the experience of contending with the technology in despairing terms. “It’s driving so many of us up the wall,” one said. “Generative AI is the bane of my existence,” another wrote in an email. “I wish I could push ChatGPT (and Claude, Microsoft Copilot, etc.) off a cliff.”

“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

Many professors talked about keeping the technology out of the classroom as a battle already lost. As many as 92% of students have reported resorting to the technology in their school work, recent surveys show, and the numbers are rapidly increasing even as growing numbers express concerns about the technology’s accuracy and the integrity of using it. Reliance on AI among faculty is also on the rise, with observers pointing to the dystopian possibility that the college experience may soon be reduced to AIs grading AI-generated homework – “a conversation between two robots”.

Professors said they resorted to oral interrogations, handwritten notebooks, and class participation for grading purposes. Some require students to submit transparency statements describing their work process. Others have reportedly injected random words like “broccoli” and “Dua Lipa” into assignments to confuse learning models – exposing students who did not even read the prompts before pasting them into AI. 😅

 
One of the early posts in this thread was about the working conditions of the people labeling scraped data for AIs. Things are starting to come to a head:

 
“I now talk about AI with my students not under the framework of cheating or academic honesty but in terms that are frankly existential,” said Dora Zhang, a literature professor at the University of California, Berkeley. “What is it doing to us as a species?”

Michael Clune, a literature professor and novelist, said that already, many students have been left “incapable of reading and analyzing, synthesizing data, all kinds of skills”. In a recent essay, he warned that colleges and universities rushing to embrace the technology were preparing to “self-lobotomize”.

I'm seeing this to some extent at work already. I've had to argue with and correct engineers who used Claude Code to analyze situations, produced faulty findings, and then tried to implement those faulty findings. Even when there is something valuable there, they are less willing or able to take the next step: "well, if X is true, and we want Y long-term, we can do Z to solve this and push us toward that long-term goal." You know, the bread and butter of engineering things that don't fall over in a stiff breeze.

It feels like Idiocracy is both right and wrong. It's not about "smart people don't breed enough", it's "people are willing to sacrifice themselves on the altar of convenience".

Report: Creating a 5-second AI video is like running a microwave for an hour

I'll admit, I've had to basically try to ramp up on Claude Code in my own time.

I gave it the task of doing the tedious part of taking the output of a dead-code analysis tool and seeing how much could actually be stripped. Hit the 5-hour window usage cap in 30 minutes. Took what I learned, built up a "skill", and tried again later. Hit the usage cap in 30 minutes again. This is on the $20/month tier. I feel like I'd get more value subscribing to Lightroom again.
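For what it's worth, a chunk of that tedium can be scripted without burning any tokens at all. Here's a minimal sketch of the pre-processing step: parsing a dead-code report and ranking files by number of findings, so the model (or a human) only looks at the worst offenders. The line format is an assumption based on vulture-style output (`file:line: unused name 'x' (NN% confidence)`); your tool's format may differ.

```python
import re

# Assumed report line format, modeled on vulture's output:
#   app/util.py:10: unused function 'old_helper' (90% confidence)
LINE_RE = re.compile(
    r"^(?P<file>[^:]+):(?P<line>\d+): (?P<msg>.+?) \((?P<conf>\d+)% confidence\)$"
)

def tally(report_lines, min_confidence=60):
    """Count findings per file, skipping low-confidence ones,
    and return files ranked by number of findings (most first)."""
    counts = {}
    for raw in report_lines:
        m = LINE_RE.match(raw.strip())
        if not m or int(m.group("conf")) < min_confidence:
            continue
        counts[m.group("file")] = counts.get(m.group("file"), 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

report = [
    "app/util.py:10: unused function 'old_helper' (90% confidence)",
    "app/util.py:55: unused variable 'tmp' (60% confidence)",
    "app/main.py:3: unused import 'os' (90% confidence)",
    "app/main.py:8: unused class 'Legacy' (40% confidence)",  # filtered out
]
print(tally(report))  # [('app/util.py', 2), ('app/main.py', 1)]
```

Feeding the model a pre-ranked shortlist instead of the raw report is one way to stretch a usage window further.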
 