The AI thread

Yeah, like Dotcom, there's a heap of money flying around and a lot of people cashing in on it without any intent of delivering a usable product.

Couple this with massive advances in small models and the realisation that giving them access to tools is far more interesting than just making them bigger, and I suspect we're in for another Dotcom-style bubble pop.
Aye, though probably bigger. Hopefully not 2008 levels, and hopefully containable. But unfortunately we don't have the most … competent government, so if things do get out of hand in the next 3 years … it could go very badly.
 
Yeah, like Dotcom, there's a heap of money flying around and a lot of people cashing in on it without any intent of delivering a usable product.

Couple this with massive advances in small models and the realisation that giving them access to tools is far more interesting than just making them bigger, and I suspect we're in for another Dotcom-style bubble pop.
It's amazing how quickly this is turning into a conversation about the bubble bursting already. All of that money and effort spent force-feeding us so many useless tools, designed to make humans dumber and hand all the brain power over to machines.
 
Also overshadowing the huge potential of focused, issue-specific small model development.

(Which may open a way to save some of that bacon going down the drain)
 
So the government is going to give $1b so Microsoft can reopen Three Mile Island, a nuclear disaster site, because AI.

They only recently got the damned thing shut down.
Well billionaires have to eat too, we can cut waste from SNAP to pay for it. Easy peasy.
 
Also overshadowing the huge potential of focused, issue-specific small model development.

(Which may open a way to save some of that bacon going down the drain)

Yup, this is where the people who laughed at the way Apple are trying to make use of AI are going to be in for a shock.

Small models, a high-performance neural engine on every device, task-specific models doing small things everywhere.

Let other companies blow the huge capex on buying overpriced hardware and electricity to chase diminishing returns. Meanwhile, Nvidia are laughing all the way to the bank.
 

But a handful said something I found quite sad: “I just wanted to write the best essay I could.” Those students, who at least tried to provide some of their own thoughts before mixing them with the generated result, had already written the best essay they could. And I guess that’s why I hate AI in the classroom as much as I do.
Students are afraid to fail, and AI presents itself as a savior. But what we learn from history is that progress requires failure. It requires reflection. Students are not just undermining their ability to learn, but to someday lead.
 

Good article. Mostly about how language is not thinking. But another interesting point to highlight:

As currently conceived, an AI system that spans multiple cognitive domains could, supposedly, predict and replicate what a generally intelligent human would do or say in response to a given prompt. These predictions will be made based on electronically aggregating and modeling whatever existing data they have been fed. They could even incorporate new paradigms into their models in a way that appears human-like. But they have no apparent reason to become dissatisfied with the data they’re being fed — and by extension, to make great scientific and creative leaps.

Instead, the most obvious outcome is nothing more than a common-sense repository. Yes, an AI system might remix and recycle our knowledge in interesting ways. But that’s all it will be able to do. It will be forever trapped in the vocabulary we’ve encoded in our data and trained it upon — a dead-metaphor machine. And actual humans — thinking and reasoning and using language to communicate our thoughts to one another — will remain at the forefront of transforming our understanding of the world.
 
Exactly.

For some reason, the folks at the head of these companies think that if you shove enough text at a neural network, you'll get a network that starts to approximate functionality like ours. In effect, they are hoping that intelligence is an emergent behavior of a complex enough network. And there is seemingly little appetite to try to understand why the human brain is broken down the way it is, why different parts are given different tasks, and how it all gets coordinated. In other words, without knowing how or why intelligence emerged in humans, they seek to replicate the process.

Yet, my understanding is that complex language was one of the latest stages of our evolutionary development. And that approach ignores so many different things that have combined to allow intelligence to emerge beyond language. Our brains are built to engage with a world that constantly poses small-scale problems to be solved. So the ability to build a mental model of that world, make guesses based on that model, and act on them is key to doing simple things like getting food (just put a food puzzle in front of a dog or cat and watch them experiment). Emotions are part of a feedback loop that has any number of benefits, including those that lead to those creative/logical leaps. And all that developed looooong before human language ever did. So it seems silly to build reasoning on top of LLMs. Maybe it will work at some point down the line with a lot of extra effort, but I would be surprised if it bears fruit before VC sobers up. I'm thinking "make fusion net energy positive" level effort.
 

HP going to replace jobs with AI, you know, for innovation.

I almost took a job at HP working on PA-RISC. They gave me an HP-48G after the interview. That company used to be so cool. Now they spend all their time figuring out how to block you from using third-party toner cartridges.

(FWIW, the Hewlett family is very very nice. At least the ones I’ve spent time with. Name dropping is fun. But that’s about all I’ve got, other than that I met Steve Jobs in an elevator once, and I interviewed with Linus Torvalds once and he was a dick).
 

HP going to replace jobs with AI, you know, for innovation.

I almost took a job at HP working on PA-RISC. They gave me an HP-48G after the interview. That company used to be so cool. Now they spend all their time figuring out how to block you from using third-party toner cartridges.

(FWIW, the Hewlett family is very very nice. At least the ones I’ve spent time with. Name dropping is fun. But that’s about all I’ve got, other than that I met Steve Jobs in an elevator once, and I interviewed with Linus Torvalds once and he was a dick).
Definitely heard that more than once. I would still like to meet him since I have Tux tattooed on my shoulder. He would probably think me a dullard though... I heard that he is a dick because he looks down on those of average intelligence.
 
Definitely heard that more than once. I would still like to meet him since I have Tux tattooed on my shoulder. He would probably think me a dullard though... I heard that he is a dick because he looks down on those of average intelligence.
I find those who act like that generally have a vastly inflated sense of their own intelligence. They may still be very smart, but they ain’t as smart as they think they are.

It also explains a lot about the kernel development environment for Linux [derogatory].
 
What about Stallman, have you met him?
Nope. Only reason I met Linus was because he was at Transmeta when I interviewed there. I don’t hang out in open source circles.
 