There are programs which purport to estimate how likely a particular text was generated by ChatGPT or another known chatbot, but I don't know if that's what they are using or if they're just going by the OG checker, the human brain. Truthfully I dunno how well any of those work.

Sounds like it's making its way through MacRumors as well (and I'm assuming other larger forums). Not sure how they are spotting it, but they must have a way.
AI posters on Macrumors
Has anyone noticed a lot of posts recently that seem like they've been written by AI? They are completely off-topic and only give a rudimentary explanation of the topics discussed in a thread, i.e. describing what Android is or what an iPod is when everyone already knows... At first I was led to...
forums.macrumors.com
“This stuff will get smarter than us and take over,” says Hinton. “And if you want to know what that feels like, ask a chicken.”
Funnily enough, you can argue that from an evolutionary standpoint things worked out pretty well for domesticated animals and plants, including chickens. They wildly outnumber their wild brethren and what they would've spread to. Not saying that personally I'd want to be a domesticated species of an AI, however...

One of the Most Influential AI Researchers Has Regrets — TIME
Over the course of February, Geoffrey Hinton, one of the most influential AI researchers of the past 50 years, had a "slow eureka moment."
apple.news
Why do humans rush to the latest advancement without pausing and thinking about the potential downsides? Ironically, Geoffrey Hinton's cousin worked on the Manhattan Project and later became a peace activist, AFTER the bombs were dropped on Hiroshima and Nagasaki.
Well I for one don't ever want to be included in the context of "Tastes like chicken".
People taste like pork, hence "long pig".
Are we getting worked up over the incredibly unlikely but “charismatic” destruction it could visit upon us while missing the more subtle but real and still dangerous consequences?
That article is troubling.

Another excellent tonic to temper your expectations for both the promise and dangers of AI:
Why transformative artificial intelligence is really, really hard to achieve
A collection of the best technical, social, and economic arguments. Humans have a good track record of innovation. The mechanization of agriculture, steam engines, electricity, modern medicine, computers, and the internet—these technologies radically changed the world. Still, the trend growth...
thegradient.pub
I think the focus is useful, as it provides context for what previous transformative technologies achieved and did not achieve, and it is generally a response to the direct claims made by AI proponents and detractors: basically, that the likelihood of it resulting in runaway growth or some sort of singularity is remarkably low. That doesn't mean it won't have a massive impact on people's lives. To their credit, they try to disentangle those two parts while acknowledging that the two are inextricably linked. True, they spend much of the focus on the hard economics of it, but in fairness the social angle of any new technology is even more difficult to predict. Their point is that the two play together, complicating the future, and that the idea that we'll all be made defunct or all be living a life of ease, depending on your optimism level with respect to transformative technology, is unlikely to be the case. And that's even beyond the severe technical hurdles that remain to making AGI, which may not be possible with our current methods or even our current understanding of intelligence.
It hammers repeatedly on the importance of AI for increasing economic growth, written from a deeply-dogmatic neoliberal perspective (even so far as quoting Hayek).
Which, in its way, does raise the question of what the ideal role of AI should be in the overall mix of stuff. Should it be purely utilitarian, focussed on accelerating the economy, or does it fit better in a research milieu, where it is not driving wealth and profitability optimization?
(The other issue, relating to whether economic growth is something we should take for granted as desirable belongs elsewhere.)
I think the thing that bothers me the most about all of this is the way people, even hardcore geeks, are looking at "ChatGPT" and failing to recognize that it is fundamentally little more than a really well developed front end. The fact that it seems to be able to carry on rational conversations leads a lot of people to ascribe to it amazing capabilities that are not really present.
Yeah a major point in the article is that AGI is not just a fancy chatbot.
Training a computer to recognize a face or drive me home safely or carry on a rational conversation is really hard, so when we get to that point, it feels like we have gotten across the goal line and the back end will just take care of itself. The AI that we have now is good, such as it is, but we have to step back and see that what underlies it is largely lacking in substance. It is like that suit of armor in the Great Hall that is all shiny and impressive-looking but in the end is just an empty shell.
Deep Thought in the book was described to be a city-sized computer with a single terminal on top of a desk. I would not be surprised if the first general AI looked a lot like this.
The second article illustrates that AI is not capable of operating outside of limited parameters. I'd say not ready for prime time. Is there a way for a third party to disable such a vehicle and push it out of the way?

AI thinks that you can "melt eggs":
Can you melt eggs? Quora's AI says "yes," and Google is sharing the result
Incorrect AI-generated answers are forming a feedback loop of misinformation online.
arstechnica.com
Several robotaxis cause a big traffic jam:
Cruise's robotaxis created a traffic jam in Austin, here's what went wrong
Cruise's fully autonomous robotaxis recently contributed to some annoying road congestion in the streets of Austin, as captured by a...
electrek.co