The AI thread


Also, if AGI is not right around the corner, then spending trillions now is a waste of resources: hardware value depreciates over time, and most of today's hardware will be outdated within five years.

IBM CEO warns that the ongoing trillion-dollar AI data center buildout is unsustainable — says there is 'no way' that infrastructure costs can turn a profit


Putting aside my obvious bias against AI, you just have to wonder how they expect it to be profitable when most people only ever use the free text chat features. I think it has its uses, especially in the medical field, but for the everyday person there are no real practical applications for it, at least for now.

The effort I'm seeing pushed right now is more in the B2B space, trying to entice folks with the lure of lower labor costs. The free usage is crowd-sourced training data for that other usage.
Yes, the whole "replace your humans" movement driven by billionaires, which results in huge job losses and even worse customer service for their customers. I would seriously applaud this if it worked, but every single real-world experience I've had with it has been beyond frustrating. Instead of getting your issue resolved, you just give up.

Oh, believe me, I know.
 
I would be willing to bet that any company touting "real human customer service, no AI" would actually attract customers. Customer service is too large a part of any business to risk losing your base over.
Unfortunately, lots of people are cheap. It may work with companies, though, as they value actual support more than individuals do.
 

AI usage may be affecting word choice in spoken language: words AI uses more often are becoming more common in human speech as well. I haven't read the original paper, but one concern I have is whether they eliminated AI-generated data from the data set (e.g., some of their data is from YouTube, and there is a lot of AI-generated voiceover on YouTube, especially on Shorts).
 

Somewhat related to AI, because the first attempt was thought to be an AI video. So the CEO came up with the great idea of becoming the victim of the kick himself.

Here is my first reaction to this story... So the biggest issue was to prove that this video wasn't AI, and not that a robot named the T800 is being trained in combat!?!?

Listening to most tech CEOs expound on their favorite books and media that have any kind of nuance (or even without nuance, for some of the more obtuse CEOs) will disabuse one of the notion that the humanities in general, or literary analysis in particular, are a waste of time in school. Overly focusing on STEM has its drawbacks.
Someone from sales asked me to evaluate the requirements of a potential customer.
I didn't know one of the referenced IEEE standards and looked it up on Wikipedia, but that didn't have much more than one or two sentences. I answered accordingly.
Later my boss told me that Copilot was included in our Office 365 license and I should use it.

To test Copilot, I asked it a few questions that I already knew the answers to.
I asked for PowerMac emulators. It listed DingusPPC as the most compatible one. That is what the developers of DingusPPC want to achieve, but it definitely isn't there yet.
I asked for PC emulators on macOS. It listed all possible virtual machines (which I didn't ask for) and left out 86Box.

A few days later, a colleague and I were discussing logical node classes from the IEC 61850 standard.
I believe my colleague was using ChatGPT, and it said that DOPM stands for "Digital Output something something" (I cannot remember the other two words). It even referenced the correct standard document (IEC 61850-7-420).
I told my colleague that this was wrong, because the first letter of a logical node class is always the class grouping, and I remembered that "D" stands for something with "Distributed".
He didn't believe me, so we both looked it up:
DOPM = DER OPerating Mode (where DER = Distributed Energy Resource)
I know that the meanings of acronyms are hard (especially the pesky TLAs and ETLAs), but hallucinating something totally different while referencing the correct standard is another level.
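The group-letter rule is simple enough that you could even check it mechanically. Here is a toy sketch in Python; the group table is an illustrative subset I've filled in from memory, not the full normative list from the standard:

```python
# Toy lookup for IEC 61850 logical node class groupings.
# The first letter of a logical node class name identifies its group.
# Illustrative subset only; consult IEC 61850-7-4 / -7-420 for the real list.
LN_GROUPS = {
    "A": "Automatic control",
    "C": "Supervisory control",
    "D": "DER (Distributed Energy Resources, IEC 61850-7-420)",
    "G": "Generic function references",
    "L": "System logical nodes",
    "M": "Metering and measurement",
    "P": "Protection functions",
    "R": "Protection related functions",
    "S": "Supervision and monitoring",
    "T": "Instrument transformers",
    "X": "Switchgear",
}

def ln_group(ln_class: str) -> str:
    """Return the group a logical node class belongs to, based on its first letter."""
    return LN_GROUPS.get(ln_class[0].upper(), "unknown group")

print(ln_group("DOPM"))  # a D class is DER-related, so "Digital Output ..." cannot be right
print(ln_group("MMXU"))
```

So a one-letter check already rules out the ChatGPT answer: anything starting with "D" belongs to the DER group, not to digital I/O.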
My job before retiring was managing a team of consultants for the Microsoft stack, mostly based around Office 365, and I feel lucky to have retired when I did, just before all of the Copilot stuff came out. The firm I worked for is now pushing it really hard to their clients, so I'm sure MS is incentivizing it on the backend to boost sales as well.

I also find that, when querying Google, its AI answers are often wrong, and instead of simply saying something like "I'm unsure" it just doubles down and states the wrong answer as fact. I'm sure this will sort itself out eventually, but for now you still have to check multiple sources before you can trust an answer.
 
I know that the meanings of acronyms are hard (especially the pesky TLAs and ETLAs), but hallucinating something totally different while referencing the correct standard is another level.

This is the pernicious part. Because the LLM isn't referencing anything in reality, it can generate these convincing half-truths that make it seem like it's summarizing a source when it isn't, so people can easily walk away with misinformation. It doesn't matter whether or not the user knows how an LLM works; that's a failing of the company/service behind the LLM, IMO. Bad UX.

The fact that these chatbots have evolved well beyond ELIZA and the Markov chain generators of the past, and can pass a Turing test, makes it all the more critical to get the details right. Something I have not seen much interest in.
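For contrast, those old Markov chain generators amounted to little more than this kind of toy (a minimal word-level sketch, illustration only):

```python
import random

def build_chain(text: str) -> dict:
    """Map each word to the list of words that follow it in the training text."""
    words = text.split()
    chain = {}
    for cur, nxt in zip(words, words[1:]):
        chain.setdefault(cur, []).append(nxt)
    return chain

def generate(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Walk the chain, picking a random observed successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:
            break  # dead end: the last word never had a successor
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
chain = build_chain(corpus)
print(generate(chain, "the", 8))
```

Today's LLMs are vastly more sophisticated than this, but the basic move is the same: continue the text with something statistically plausible, with no notion of whether it is true.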
Precisely. Since the LLM wasn't trained on a million documents containing text like "DOPM (DER OPerating Mode)" or "DOPM means DER OPerating Mode," it is left to ponder "what words are often found in acronyms in technical documents that might reduce to DOPM?"

And even if the LLM *had* been trained on those million hypothetical documents, it would at best only increase the odds that it would happen upon the right answer, because an LLM has no concept of facts. It’s just autocompleting.

Fuck LLMs.
 