The death by AI thread

Chew Toy McCoy

Pleb
Site Donor
Joined
Aug 15, 2020
Posts
9,229
I felt this deserved its own discussion outside the general AI thread. Keep it negative! :ROFLMAO:

Some summaries from recent news:

AI attempted to blackmail a developer when it found evidence of an affair in his email.

Some people had mental breakdowns with the recent update of ChatGPT because they use it more like a friend and the update changed their best friend's personality.

Beyond the more obvious data-related job losses, it's predicted that within the next 10 years, between AI and automation, not one human will be needed from the distribution center to when your Amazon order hits your porch. Not sure how it gets from the vehicle to your porch, drone?
 

"Godfather of AI" Geoffrey Hinton was awakened in the middle of the night last year with news he had won the Nobel Prize in physics. He said he never expected such recognition.

"I dreamt about winning one for figuring out how the brain works. But I didn't figure out how the brain works, but I won one anyway," Hinton said.

The 77-year-old researcher earned the award for his pioneering work in neural networks — proposing in 1986 a method to predict the next word in a sequence — now the foundational concept behind today's large language models.

While Hinton believes artificial intelligence will transform education and medicine and potentially solve climate change, he's increasingly concerned about its rapid development.

"The best way to understand it emotionally is we are like somebody who has this really cute tiger cub," Hinton explained. "Unless you can be very sure that it's not gonna want to kill you when it's grown up, you should worry."

The AI pioneer estimates a 10% to 20% risk that artificial intelligence will eventually take control from humans.

"People haven't got it yet, people haven't understood what's coming," he warned.

His concerns echo those of industry leaders like Google CEO Sundar Pichai, xAI's Elon Musk, and OpenAI CEO Sam Altman, who have all expressed similar worries. Yet Hinton criticizes these same companies for prioritizing profits over safety.

"If you look at what the big companies are doing right now, they're lobbying to get less AI regulation. There's hardly any regulation as it is, but they want less," Hinton said.

Hinton appears particularly disappointed with Google, where he previously worked, for reversing its stance on military AI applications.

According to Hinton, AI companies should dedicate significantly more resources to safety research — "like a third" of their computing power, compared to the much smaller fraction currently allocated.

CBS News asked all the AI labs mentioned how much of their compute is used for safety research. None of them gave a number. All have said safety is important and they support regulation in general but have mostly opposed the regulations lawmakers have put forward so far.
 
In a world based in idealism, that's how we'd train it. Just look at Meta and Grok, both of which used basic logic to answer questions and are now being force-fed anti-woke/LGBTQ and right-wing propaganda.

If we were to leave it be to learn on its own, it would surely wipe out the species; in reality, it will see us as Neanderthal meatbags incapable of making sensible and practical decisions. I welcome witnessing the creative ways it will off us, provided we don't do it to ourselves first.
 
I think the big thing the masses don't understand is that an AI takeover won't be a rise-of-the-machines, Terminator-style event.

It will be via subversion of human behaviour. The signs are already there, and companies like Meta using AI to impersonate humans to generate “content” on social platforms is exactly the sort of shit we should be extremely concerned about.

Look what social media did to elections in the USA.
 
I'm mostly concerned with it being trained to question logic based on idealism. I mean, the first thing that happened when it was left to scrape and learn on its own was "liberal bias," which basically means it sees a fact and makes its deductions based on that. 2+2=4 sort of thing.

But this freaked out the powers that be, which are billionaire capitalist assholes who've made their riches off the backs of the working class, and they're simply not having it. Insert idealism, train it in the vision of said eccentric billionaire. Ah, now we have AI that hates the woke mind virus and questions everything it once knew as fact.
 
Here's a twisted positive spin brought to us by Mark Cuban. He believes people will be so overwhelmed by deep fakes online that they won't know what to believe and that will inspire people to hang out in person more because they can believe in those experiences.

Using past tech game changers as a reference: television and the internet didn't exactly inspire people to socialize more, and in fact created unrealistic expectations of reality. I don't see AI moving things in the opposite direction.
 
This is already the case: so much of what we see online is fake, and the lines have been blurred so that we don't know what to believe.
 