The AI thread

makes it all the more critical to get the details right, something I have not seen much interest in.
Yes, because it's a lab experiment taken out of the lab, not a product.

I've seen only Apple care about this aspect, which is why they haven't built a chatbot. They're the only company that cares about guaranteeing correctly formatted output from a transformer model, which is the first step to Siri 2.0. They're also the only company that cares about energy efficiency in ML, both by working on innovative transformer architectures (like encoding weights in a format that is hardware accelerated) and by building hardware that doesn't draw megawatts.
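For what it's worth, "guaranteeing correctly formatted output" is usually done with constrained decoding: at each step you mask out every token that would violate the required format, so the output is well-formed by construction. Here's a toy sketch with a hypothetical five-token "grammar" standing in for the real logit-masking machinery:

```python
import random

# Toy vocabulary; a real model would have tens of thousands of tokens.
VOCAB = ["{", "}", '"name"', ":", '"Alice"', "hello", "!!!"]

def allowed_tokens(generated):
    """Tiny hypothetical grammar for one JSON-like object: { "name" : "Alice" }.
    Returns the set of tokens that are valid at the current position."""
    expected = ["{", '"name"', ":", '"Alice"', "}"]
    pos = len(generated)
    return {expected[pos]} if pos < len(expected) else set()

def sample(generated):
    # The "model" would propose any token; the mask keeps only valid ones,
    # so a malformed token can never be emitted.
    valid = [t for t in VOCAB if t in allowed_tokens(generated)]
    return random.choice(valid) if valid else None

out = []
while (tok := sample(out)) is not None:
    out.append(tok)

print(" ".join(out))
```

Real implementations apply the same idea to the model's logits (set disallowed entries to negative infinity before sampling), with the allowed set driven by a JSON schema or a formal grammar rather than a fixed token list.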
 
Yes, because it's a lab experiment taken out of the lab, not a product.

If they were treating it like an experiment, I'd probably be less annoyed. But they are treating it like a product, so I'm going to judge how it's being delivered like one.

But you are right that this is a good example of why Apple can waltz in "late" and wind up being a major player in some markets. People talk too much about "first mover advantage", forgetting it can hurt you if your first move erodes trust in your brand.
 
If they were treating it like an experiment, I'd probably be less annoyed. But they are treating it like a product, so I'm going to judge how it's being delivered like one.

I didn't say they were treating it as an experiment. I said that it is an experiment, which it was and is. I agree otherwise.

But you are right that this is a good example of why Apple can waltz in "late" and wind up being a major player in some markets. People talk too much about "first mover advantage", forgetting it can hurt you if your first move erodes trust in your brand.
I agree!
 
And even if the LLM *had* been trained on those million hypothetical documents, it would at best only increase the odds that it would happen upon the right answer, because an LLM has no concept of facts. It’s just autocompleting.

One of the best analogies for LLMs I've seen is a parrot: both repeat things they've "heard", but neither knows what it means.
The problem is that often enough the answer fits the context, so people believe there is intelligence behind it.

What is really disturbing is that it's not only dumb people who flock to LLMs.
My boss is one of the most intelligent people I know, and he still thinks LLMs are great...
 
LG TVs are now installing Microsoft Copilot without the ability to remove it. You'll take AI whether you like it or not; the lack of interest is causing all of these companies, who are spending billions on it, to force it on you.

 