The AI thread

Huntn

What were we talk'n about?
Site Donor
Posts
5,012
Reaction score
4,965
Location
The Misty Mountains
For anyone not up on the term LLM (large language model):
 

Eric

Mama's lil stinker
Posts
10,740
Reaction score
20,589
Location
California
Main Camera
Sony
Sounds like it's making its way through MacRumors as well (and I'm assuming other larger forums). Not sure how they are spotting it but they must have a way. 🤷‍♂️

 

dada_dave

Elite Member
Top Poster Of Month
Posts
1,357
Reaction score
1,284
Sounds like it's making its way through MacRumors as well (and I'm assuming other larger forums). Not sure how they are spotting it but they must have a way. 🤷‍♂️

There are programs which purport to estimate how likely it is that a particular text was generated by ChatGPT or another known chatbot, but I don’t know if that’s what they are using or if they’re just going by the OG checker, the human brain. Truthfully, I dunno how well any of those work.
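For illustration only, here’s a rough sketch (my own toy example, not whatever MacRumors or anyone else actually runs) of the kind of heuristic some of those checkers lean on: score the text’s perplexity under an open language model, on the theory that chatbot output reads as unusually predictable. The model choice, threshold, and libraries below are all my assumptions.

# Toy perplexity-based "AI text" heuristic (illustrative sketch only).
# Assumes: pip install torch transformers
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more predictable."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing labels makes the model return the average cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

if __name__ == "__main__":
    sample = "The quick brown fox jumps over the lazy dog."
    score = perplexity(sample)
    # The 20.0 cutoff is arbitrary; real detectors combine many signals and
    # still misfire, especially on short text.
    verdict = "suspiciously fluent" if score < 20.0 else "probably human-ish"
    print(f"perplexity={score:.1f} -> {verdict}")

In practice these scores are noisy (short posts, quoted text, and non-native writing all skew them), which is part of why I’m skeptical of how well any of them work.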
 

rdrr

Site Champ
Posts
899
Reaction score
1,536
“This stuff will get smarter than us and take over,” says Hinton. “And if you want to know what that feels like, ask a chicken.”

Why do humans rush to the latest advancement without pausing to think about the potential downsides? Ironically, Geoffrey Hinton's cousin worked on the Manhattan Project and later became a peace activist AFTER the bombs were dropped on Hiroshima and Nagasaki.
 

dada_dave

Elite Member
Top Poster Of Month
Posts
1,357
Reaction score
1,284


Why do humans rush to the latest advancement without pausing to think about the potential downsides? Ironically, Geoffrey Hinton's cousin worked on the Manhattan Project and later became a peace activist AFTER the bombs were dropped on Hiroshima and Nagasaki.
Funnily enough, you can argue that from an evolutionary standpoint things worked out pretty well for domesticated animals and plants, including chickens. They wildly outnumber their wild brethren and have spread far beyond where they would’ve spread on their own. Not saying that I’d personally want to be a species domesticated by an AI, however …
 

rdrr

Site Champ
Posts
899
Reaction score
1,536
Funnily enough, you can argue that from an evolutionary standpoint things worked out pretty well for domesticated animals and plants, including chickens. They wildly outnumber their wild brethren and have spread far beyond where they would’ve spread on their own. Not saying that I’d personally want to be a species domesticated by an AI, however …
Well I for one don't ever want to be included in the context of "Tastes like chicken". 🤣

The original point of the story, "We can do X, but should we ethically do it?", can easily be applied to domesticated animals. We have bred certain traits into dogs and cats that are health concerns for the pet, as well as for humans (with aggressive/guard breeds). Not saying I am willing to give up the cute-faced pug, but did we do them any favors so many millennia ago?
 

dada_dave

Elite Member
Top Poster Of Month
Posts
1,357
Reaction score
1,284
A good essay on how the most extreme (and unlikely) possible repercussions of new technologies sometimes overshadow the more mundane but still critical ones and leave us less prepared for the dangers we actually face.

The analogy considered is the nuclear bomb, where the fear of a single bomb setting off a chain reaction and ending all life masked the very real dangers of fallout. But the question is posed: to what extent are we doing this with AI? Are we getting worked up over the incredibly unlikely but “charismatic” destruction it could visit upon us while missing the more subtle but real and still dangerous consequences?


Again, the essay is almost entirely about radioactivity, the bomb, and our (naive?) belief in a world-ending nuclear chain reaction, but it’s worth reading in the context of what’s actually around the corner while we are dooming about the dangers of AI. I remember similar concerns with CERN.
 

Nycturne

Site Champ
Posts
959
Reaction score
1,153
Are we getting worked up over the incredibly unlikely but “charismatic” destruction it could visit upon us while missing the more subtle but real and still dangerous consequences?

Yes. When I see people comparing LLMs to SkyNet, I groan.

At the same time, I’ve not been shy about my thoughts on ML. I worry more about how ML is breathing new life into redlining. I worry about how ML is used to further entrench our bad tendencies behind a black box, and then call it “objective” as a way to keep the status quo from scrutiny. I worry about how LLMs are being sold in a way that looks a lot like NFTs, while at the same time I talk with engineers who find LLMs hard to “productize” in ways that would provide real benefit to end users. It feels like folks are trying to figure out how to wring profit out of their ML investments, while it’s still in the “what the heck is this?” stage. All because they don’t want to get caught off guard (again) when the next big tech shift appears. The smartphone shift is still a very recent memory, and many still feel the sting of getting it wrong.

Honestly, it feels like the last decade has been a cavalcade of business ideas meant to disrupt for the sake of disruption and to find some sort of grift that will make the next billionaire. It’s been enough to sour me on big tech in a lot of ways. Maybe I’m just getting old, but ten years ago I was pretty ambivalent toward the likes of Cory Doctorow. Not anymore.
 