The AI thread

With the recent advancements in small models, I'll bet the big guys are shitting their pants.

They're going to be stuck between "good enough" and "not general AI" in short order.

Samsung recently announced a model that beats some of the massive cloud models while running at 17M (not billion, million) parameters, which will easily fit on most mobile devices.
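For scale, a quick back-of-the-envelope on what 17M parameters costs in memory (the precision options below are common defaults, not anything Samsung has confirmed):

```python
# Rough memory footprint of a 17M-parameter model at common precisions.
PARAMS = 17_000_000

BYTES_PER_PARAM = {
    "fp32": 4,  # full precision
    "fp16": 2,  # half precision, typical for on-device inference
    "int8": 1,  # 8-bit quantized
}

for fmt, nbytes in BYTES_PER_PARAM.items():
    mib = PARAMS * nbytes / (1024 ** 2)
    print(f"{fmt}: ~{mib:.0f} MiB")
```

Even at full fp32 precision that's roughly 65 MiB of weights, which is why a model this size fits comfortably on a phone.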
 


The fun part about this: even the AI bros admit the US energy bottleneck is currently the biggest hurdle for them as well, and they claim the lack of such a bottleneck is why China is “winning” the AI race.
 
 


As a former scientist this makes me so angry (not at arXiv or bioRxiv). Original research can still be submitted as a preprint, but I worry, and this is still a loss for the science community.

Never mind that bioRxiv is all in on AI as an “experiment,” replacing human reviewers with AI:



Note that this tool purports to replace peer review, emphasis on peer. Peer review is far from perfect, and academic publishing is a hell all of its own making, but JFC. When confronted, as in the thread above, they demur that it’s just an option to improve your paper, but in the original blog posts and threads the claims are sky high.

As someone else noted, the most obvious consequence will be AI paper mills running their papers through this tool to get a version that “passes,” making it even harder to tease out AI slop.
 




Look, it’s easy to anthropomorphize chatbots, which is why some people have so much trouble with them: falling in love, treating them as a therapist or friend, etc. But this is Ryan Grim, a supposed lefty journalist (really just a contrarian tankie), arguing in public with a chatbot, trying to threaten it, then backing down because he’s addicted to X and doesn’t want daddy Musk to limit his reach, all while mocking people who left X. This has layers.
 





Replacing logic with idealism is the MAGA way; now throw all the money in the world at training a model that way.
 
Essentially everything you see or hear has been ripped from someone who actually created it, by AI content-sucking machines that do nothing but scour the internet for material to repurpose in their own image.

 
Have to give props to ChatGPT for this one. I moderate a very active sub on Reddit, and we don't allow buying or selling, so I asked ChatGPT to generate the AutoModerator code to put those posts into moderation automatically (mostly just regex stuff), and it spat it out all nice and formatted for me.
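For anyone curious what "mostly just regex stuff" looks like: AutoModerator rules themselves are YAML, but the matching logic boils down to a pattern like this sketch (the keyword list here is my own illustrative guess, not the actual rule):

```python
import re

# Hypothetical buy/sell pattern of the kind an AutoModerator rule might use.
BUY_SELL = re.compile(
    r"\b(?:WTB|WTS|for sale|looking to (?:buy|sell)|selling|buying)\b",
    re.IGNORECASE,
)

def needs_moderation(title: str) -> bool:
    """Return True if a post title looks like a buy/sell post."""
    return BUY_SELL.search(title) is not None

print(needs_moderation("WTS: mint condition unit, PM me"))  # True
print(needs_moderation("How do I calibrate this thing?"))   # False
```

In the real rule, a pattern like this would sit in AutoModerator's YAML as a regex match on the title/body that filters the post for mod review.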
 
These LLMs are tools.

If you treat them like tools and verify their outputs, they're useful.


The problem is when people start treating them like intelligent virtual senior employees, and they just aren't.

I've had some amazing stuff out of Claude, for example. I've also had some utter trash. It depends on how accurate the information on the internet is or was, and sometimes the source material is trash.

It can help teach things by suggesting source material, courses, etc. But if you're relying on it as an oracle of all knowledge in itself, you're in for a shock.
 
I’m not disagreeing substantively with anything you wrote, in fact, I’d say I agree with most if not all of it.

The problem is that even if everyone treated LLMs as you describe, as the replacement for a particularly flaky intern, then under that ideal scenario (which we are very far from) we'd still end up with the issue that the people who used to be interns never become junior engineers/artists/whatever, and so never become the senior staff we all agree you can't replace.

And of course, we sadly don't live in that world: the powers that be market these things, finance these things, and potentially don't even believe in the limitations of these things themselves. To them, this is the second coming of the Industrial Revolution, the internet, or both together, … or nothing. And that's to say nothing of the ethics (copyright) or the energy/environmental impact.

I actually really do think the tech is cool, but human nature, especially greed and laziness, is a bitch.
 
Right, in the end it just executed what I asked for, I was the ultimate decision maker, and I agree about the distinction here.

It just saved me a bunch of time formatting and troubleshooting syntax errors. However, it no more makes me a coder than typing in a sentence to "create a beautiful landscape scene" makes me an artist.

In the end I'm some lazy ass who is asking a machine to do the work for me and then taking credit for it.
 

The “broligarchy” is a sadly fitting description for the times we live in.

Altman gave a car crash of an interview that you might as well watch now because it’s going to end up in a Netflix documentary at some point in the not too distant future.
 

Even AI bros are saying this is a bubble.
Yeah, like the dotcom era, there's a heap of money flying around and a lot of people cashing in on it without any intent of delivering a usable product.

Couple this with massive advances in small models and the realisation that giving them access to tools is far more interesting than just making them bigger, and I suspect we're in for another dotcom-style bubble pop.
 