The AI thread

Michael Burry of “The Big Short” fame:



Not sure he’s right about the relative impact of the Hormuz crisis vs the AI bubble, but both at the same time certainly seems like piling a disaster onto a catastrophe, and it doesn’t really matter which is which. However, the former may trigger the latter to pop earlier and harder than expected.
 


Yeah, I’ve seen someone describe the effect of being a billionaire as having your head kicked by a horse every day. You live in complete unreality. Truthfully, social media isn’t far from this for some people.
 
Against the notion of LLMs as stochastic parrots:


The author argues that AIs already effectively have a world model and an understanding of reality embedded in them. I’m posting this not because I agree but because I think it’s worth reading. The part I do agree with is that stochastic parrots can be useful, impressive, and … dangerous.

But I would also contend that even state-of-the-art models still hallucinate, and do so in ways that a human with actual lived experience and understanding would not. AI being a stochastic parrot explains this cleanly in a way that AI having a true understanding of reality does not. As impressive as AI can be, it’s the failure modes that differentiate it from human thought. This likewise applies to other modes of logic beyond associative reasoning, such as symbolic logic, as discussed previously in this thread.

On top of that, beyond failure modes, the lack of creativity in AI is a major indicator of its inability to understand the world. It’s trained on the gestalt of human output and thus produces the most generic output itself, yet it needs that volume of data to work at all. Further, attempting to increase the uniqueness of its output, by say increasing the temperature during inference, increases the likelihood of hallucinations. Part of this, it has to be said, is also due to the limited context windows LLMs have to work with relative to a human. Processing power dictates that they can only keep so much context in mind while generating output (and this also causes some of the failure modes), but some of the research linked to in this thread shows that even then there are diminishing returns.
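For anyone unfamiliar with how temperature ties into hallucination risk: it rescales the model’s logits before sampling, so a higher temperature flattens the next-token distribution and gives unlikely (often wrong) tokens more probability mass. A minimal sketch with toy logit values (the numbers are made up for illustration):

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature before the softmax; higher
    # temperature flattens the distribution, giving low-probability
    # tokens a bigger share of the mass when sampling.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits for three candidate next tokens (hypothetical values).
logits = [4.0, 2.0, 0.5]

low = softmax_with_temperature(logits, 0.5)   # sharp: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flat: tail tokens gain mass
```

At temperature 0.5 the top token takes nearly all the probability; at 2.0 the least likely token’s share grows by an order of magnitude, which is exactly the trade-off between more “creative” output and more confabulation.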

If, as the author contends, AIs pass our tests and definitions for understanding, then that reflects a failure to properly define such terms (something the author also says). This is similar to early tests for animal intelligence and consciousness, where as often as not animals initially failed not because they couldn’t pass but because the testing paradigm was improperly designed. Effectively, we are attempting to construct intelligence purely from our (digital) output (available on the internet), with an often poorly defined understanding of how our own intelligence actually functions. Obviously we know a huge amount about our brains; I’m not trying to diminish the progress of neuroscience or related fields. But there is still so much we don’t quite understand, and trying to build a model of intelligence off of our own with such gaps seems like an improbable endeavor.
 
The hallucination/delusion thing interests me.

I read the excellent Gifts of the Crow by Marzluff, the top corvidologist in the country. In it, he talked about observing crow dream cycles (nothing fancy, just watching them twitch) and described dreams as a side effect of the brain doing housekeeping: sorting, filing, and crufting the memories of the day.

It was the first time I had heard dreams explained that way, but it makes sense, and he made it sound like settled science (maybe it is, maybe not). In that light, it would probably be a worthwhile endeavor to study dream-cycle infusion in these models. After all, dreams are just hallucinations – why not give these models the opportunity to hallucinate when they are not being called to task, and use the output in meaningful ways (to make model adjustments)?
 
[Post about blog complaining about "AI" rewriting headlines different from author intent]

Ironic of them, given I just read this about someone they interviewed regarding Apple's spatial OS:

The Verge has lost its way.

Several months ago, they interviewed me for 45 minutes about Apple Vision Pro. I spent 43 minutes talking about what I love, and 2 minutes on what I’d change.

They twisted parts of those 2 minutes and cut everything positive I said.

To make it worse, the author opened the interview by saying they were biased against headsets.

I miss the old Verge. The one that was fun. The one that spotlighted tech instead of throwing shade.

I don't like "AI" rewriting headlines generally, but there's extreme irony here from them, and no, I don't give a damn about their opinion.
 