The AI thread

It's amazing how quickly this has turned into a conversation about the bubble already bursting. All of that money and effort is being spent force-feeding us so many useless tools, designed to make humans dumber and hand all the brainpower over to machines.
To be clear: I don’t think the bubble will pop due to unusable or useless products.

I just think capex will fall off a cliff as we make models far more efficient and the resources needed to do the same or better job collapse.

The big boys are spending trillions on compute right now, and it turns out we are seeing rapid advances in smaller models that give results as good as models 10x their size from only months prior.
 

The vast majority of people (evidenced by the number of subsidized subscriptions) are not clamoring for these "LLM Slopbots." I put that in bold because the actual costs are off the charts!

We were promised cures for cancer and solutions for global warming. While there are genuinely great use cases (particularly, for my tastes, in machine learning and cancer detection), the vast majority of users on Gemini, "Nano banana," OpenAI, etc., are producing photos of monkeys riding surfboards. Fun, I'm sure. Useful? Not so sure.

However, when most folks talk about "AI" these days, and about the massive infrastructure build-out, it is largely in support of these LLMs, so my viewpoint should be seen through that lens. I'm a huge proponent of ML and even SLMs. LLMs... not so much.

To use the latest models means using a subsidized tier. The numbers are not adding up, and I'm not convinced that silicon development and energy efficiency will improve exponentially enough, in time, to offset the complexity of the latest reasoning models, which burn through tokens internally to produce an output. Consumers are going to have to switch from a "how many tokens do I get for $20" mindset to a "what is the cost per task" model.
The challenge with this is that in order to familiarize the market with these tools, you need low-friction, flat-rate access. If users have to weigh the cost of every single prompt, it kills the experimentation needed to actually learn the technology.
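To make the "cost per task" framing concrete, here's a rough back-of-the-envelope sketch in Python; the per-token prices and token counts are placeholders I made up, not any vendor's actual rates:

Code:
# Back-of-the-envelope cost per task for a reasoning model.
# All prices and token counts are made-up placeholders, not real vendor rates.

PRICE_PER_1M_INPUT_TOKENS = 3.00    # USD, assumed
PRICE_PER_1M_OUTPUT_TOKENS = 15.00  # USD, assumed; hidden reasoning tokens bill as output

def cost_per_task(input_tokens, visible_output_tokens, hidden_reasoning_tokens):
    """Estimate the dollar cost of one request, counting the internal
    chain-of-thought tokens the user never sees."""
    billed_output = visible_output_tokens + hidden_reasoning_tokens
    return ((input_tokens / 1e6) * PRICE_PER_1M_INPUT_TOKENS
            + (billed_output / 1e6) * PRICE_PER_1M_OUTPUT_TOKENS)

# A prompt with a short visible answer but a long internal reasoning trace:
print(f"${cost_per_task(2_000, 500, 20_000):.2f} per task")

Most of that cost comes from tokens the user never sees, which is why the flat $20/month framing stops making sense as reasoning models get chattier internally.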

Finally, I'll share with y'all the following article, which is quite eye-opening (bolding my own highlights :) ):

Please note the summary conclusion:

"To put this in perspective, consider that the Hoover Dam in Nevada generates about 4 TWh per year; the Palo Verde nuclear power plant in Arizona generates 32 TWh per year, and the Three Gorges Dam in China is expected to generate 90 TWh per year. But between 2028 and 2030, given the current rate of growth, AI data center power demands will increase by 350 TWh, which is nearly three times as much energy as all of those generating facilities combined.[2]

No single change will shrink that gap. For the semiconductor industry to continue growing at the current pace, it will require changes from the grid on down, and from the chip up. And even then, it’s not clear if that will really close the gap, or whether it will simply enable AI data centers to grow even larger."

Source: https://semiengineering.com/crisis-ahead-power-consumption-in-ai-data-centers/ (author: Ed Sperling)


Just my 0.02.
 


(Although I completely agree with @tomO2013 that specialized machine learning tasks, including in the medical field, can be very useful - heck, even LLMs are occasionally impressive - the gap between what was “promised,” both in utility and in timescales, and what is being delivered is risible, especially given the insane money and resources being spent.)
 
By new small models I’m talking about stuff you can run locally on a laptop.

Not the latest models from OpenAI or wherever.
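For anyone wondering what "running locally on a laptop" looks like in practice, here's a minimal sketch using the Hugging Face transformers library; the model name is just one example of a small open-weight model, and anything of similar size works the same way:

Code:
# Minimal local-inference sketch; a ~0.5B-parameter model runs fine on a laptop CPU.
# Requires: pip install transformers torch
from transformers import pipeline

# Example small open-weight model; swap in whatever fits your RAM.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")

result = generator(
    "Explain in two sentences why small language models are getting competitive:",
    max_new_tokens=80,
)
print(result[0]["generated_text"])

No subscription and no data center; the download is roughly a gigabyte or two.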
 
Excellent post. (y)
 

While it is not prohibited to use AI as a learning aid or a development tool (i.e. code completions), extension developers should be able to justify and explain the code they submit, within reason.

One of my early posts on the subject was about exactly this. I can definitely see use of these tools for learning, advanced code completion, and (not mentioned above, but definitely useful for certain languages) compile-error explanations. So yes, I'd sign off on this as a rule, especially for major OSS projects where slop, especially an avalanche of it, could be really quite debilitating.
 
Jesus



OpenAI is facing increasing scrutiny over how it handles ChatGPT data after users die, only selectively sharing data in lawsuits over ChatGPT-linked suicides.

Last week, OpenAI was accused of hiding key ChatGPT logs from the days before a 56-year-old bodybuilder, Stein-Erik Soelberg, took his own life after “savagely” murdering his mother, 83-year-old Suzanne Adams.
 
They're right to do so. I recently saw a story on this, and the swing the algorithm produces from minute to minute (or device to device) is pretty staggering. In the end, they say it adds up to millions in additional profits for the vendors.
AI is really perfect, if you think about it. It will take your job, then increase the cost of your groceries. Gets you from both ends.
 

Samsung employees accused of taking bribes from business customers desperate to get RAM as the AI-induced shortage deepens.
 
Another article that makes you want to go live on another planet.
 

The thing I was excited about back in the 2018 era, looking at ML, was honestly its use as a pattern-matching tool that could help raise signals on nuanced data that can be hard for a human to discern on their own. I'm thinking of things like using datasets with full hindsight to try to improve early cancer detection rates from mammograms - something models should be able to help improve, even with the thorny accuracy-vs-precision problem they tend to have. In a more mundane space, being able to build something like a "linter" for typed documents, to allow an organization to produce more consistently formatted/structured internal memos/docs/etc., still seems somewhat interesting. It's boring, but it also isn't expensive, and the fact that models don't have to be perfectly rigid means organizations can tune them based on their own corpus.

But ultimately, when LLMs landed, most of the general business space went whole hog on them. I haven't even seen a non-LLM ML/AI feature in my neck of the woods in a couple of years now. (EDIT: And I suspect a big chunk of that is because LLMs are seen as a more generalized model that supersedes a lot of these specialized models, at the cost of not being as good as any specialized model at any of this stuff.)
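The document-"linter" idea a couple of paragraphs up is cheap enough to prototype without any ML at all. A toy rule-based sketch might look like the following; the section names and word limit are invented for illustration, and a model tuned on an organization's own corpus would replace the hard-coded rules:

Code:
# Toy "memo linter": checks that a typed internal doc has the expected sections
# and flags overlong paragraphs. All rules here are hypothetical examples.
import re

REQUIRED_SECTIONS = ["Summary", "Background", "Recommendation"]  # assumed convention
MAX_PARAGRAPH_WORDS = 150

def lint_memo(text: str) -> list[str]:
    issues = []
    for section in REQUIRED_SECTIONS:
        if not re.search(rf"^{section}\b", text, flags=re.MULTILINE):
            issues.append(f"Missing section heading: {section}")
    for i, para in enumerate(re.split(r"\n\s*\n", text), start=1):
        words = len(para.split())
        if words > MAX_PARAGRAPH_WORDS:
            issues.append(f"Paragraph {i} has {words} words; consider splitting it.")
    return issues

memo = "Summary\nWe should do the thing.\n\nBackground\nContext goes here.\n"
for issue in lint_memo(memo):
    print(issue)  # flags the missing "Recommendation" section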
 
I'd be way less sour on AI if it were being constrained to things it's actually great at.

The forced rollout everywhere, on ordinary people, with zero concern for where it's a good fit and without any concern at all for accuracy or negative externalities, is really off-putting.

Things like MS Copilot being jammed into LG TVs (last week's news) with no way to remove it.

No thank you.
 
The problem is that AI really should incorporate the blockchain in the metaverse. That's where the real opportunities lie.

/s
 