The AI thread

Michael Burry of “The Big Short” fame:



Not sure he’s right about the relative impact of the Hormuz crisis vs. the AI bubble, but both at the same time certainly seems like piling a disaster onto a catastrophe, and it doesn’t really matter which is which. However, the former may trigger the latter to pop earlier and worse than expected.
 


Yeah, I’ve seen someone describe the effect of being a billionaire as having your head kicked by a horse every day. You live in a complete unreality. Truthfully, social media isn’t far from this for some people.
 
Against the notion of LLMs as stochastic parrots:


The author argued that AIs already effectively have a world model and an understanding of reality embedded in them. I’m posting this not because I agree but because I think it’s worth reading. The part I do agree with is that stochastic parrots can be useful, impressive, and … dangerous.

But I would also contend that even the state-of-the-art models still hallucinate, and do so in ways that a human with actual lived experience and understanding would not. AI being a stochastic parrot explains this cleanly in a way that AI having a true understanding of reality does not. As impressive as AI can be, it’s the failure modes that differentiate it from human thought. This likewise applies to other modes of logic beyond associative reasoning, such as symbolic logic, as discussed previously in this thread.

On top of that, beyond failure modes, the lack of creativity of AI is a major indicator of its inability to understand the world. It’s trained on the gestalt of human output and thus produces the most generic output itself, yet it needs that amount of data to work at all. Further, attempting to increase the uniqueness of its output by, say, raising the temperature during inference increases the likelihood of hallucinations. Part of this, it has to be said, is also due to the limited context windows LLMs have to work with relative to a human. Processing power dictates that they can only keep so much context in mind while generating output (and this also causes some of the failure modes), but some of the research linked to in this thread shows that even then there are diminishing returns.
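Since temperature keeps coming up, here’s a minimal sketch (plain Python/NumPy, not tied to any particular model or API, with made-up logits purely for illustration) of the mechanism: dividing the next-token logits by a higher temperature flattens the softmax, so lower-probability tokens get sampled more often. That’s the trade-off described above between more “unique” output and more off-the-rails output.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from temperature-scaled logits.

    Higher temperature flattens the distribution, making
    low-probability tokens more likely to be drawn.
    """
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Hypothetical 4-token vocabulary: token 0 is the "obvious" continuation.
logits = [4.0, 2.0, 0.5, 0.1]
for t in (0.2, 1.0, 2.0):
    draws = [sample_next_token(logits, temperature=t) for _ in range(1000)]
    rare_share = sum(d >= 2 for d in draws) / len(draws)
    print(f"temperature={t}: share of low-probability tokens = {rare_share:.2f}")
```

At low temperature the sampler almost always picks the top token; as the temperature rises, the tail tokens get picked a noticeably larger share of the time, which is the statistical face of the hallucination risk mentioned above.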

If, as the author contends, AIs pass our tests and definitions for understanding, that reflects a failure to properly define such terms (something the author also says). This is similar to early tests for animal intelligence and consciousness, though there, as often as not, animals would initially fail not because they couldn’t pass but because the testing paradigm was improperly designed. Effectively we are attempting to construct intelligence purely from our (digital) output (available on the internet), with an often poorly defined understanding of how our own intelligence actually functions. Obviously we know a huge amount about our brains; I’m not trying to diminish the progress of neuroscience or related fields. But there is still so much we don’t quite understand, and trying to build a model of intelligence off of our own with such gaps seems like an improbable endeavor.
 