The Ai thread

Not sure what anyone expected.

I see people swearing "agentic-AI" is different, including people I otherwise respect, and yet ... okay I haven't tried claude bot or any of them myself and maybe I would change my mind, but the underlying tech, the neural net, is the same as the chatbots and the "thinking" versions. Yes the bots and the "thinking" versions improve on the base way, the chatbot way, of interacting with the models, but at a much higher token cost and they still only partially remediate the problems with the underlying tech. Further, that such AI agents are security nightmares is obvious - the Signal president talked about that a while ago. Further this:



doesn't feel like actual productivity gains for the most part with people addicted to their uses, burning out over it, and like people discussing which bot hallucinates more "smoothly". It's not that it can't make productivity gains ever, but ... the Tom's article mentioned Solow's paradox, how computers "at first" didn't improve measures of productivity because of what they say is workers getting overworked by the newfangled machines in line with AI, but the article Tom's actually links to on the subject has several arguments against it being a real paradox in the first place.
 
Was browsing the urban dictionary, came upon

ai;dr

no need to even look at the definition.

That was kinda fast. But I guess anyone can put something up there, and hope it catches on.

I see people swearing "agentic-AI" is different, including people I otherwise respect, and yet ... okay I haven't tried claude bot or any of them myself and maybe I would change my mind, but the underlying tech, the neural net, is the same as the chatbots and the "thinking" versions. Yes the bots and the "thinking" versions improve on the base way, the chatbot way, of interacting with the models, but at a much higher token cost and they still only partially remediate the problems with the underlying tech. Further, that such AI agents are security nightmares is obvious

And I find myself rolling my eyes even harder whenever I see MSFT talking about re-investing in security. Well, if you want agentic AI, you need a good rethink about how the LLMs work, how to better isolate control plane and data plane, limit context along security boundaries, etc, etc, etc.

But Agentic AI is the thing behind all the "stop hiring humans" advertising, and the idea that agents can replace low-code/no-code line of business stuff. So it's not surprising that's where the race has ultimately gone to. Rather than say, more muted NL processing stuff.
 
That was kinda fast. But I guess anyone can put something up there, and hope it catches on.



And I find myself rolling my eyes even harder whenever I see MSFT talking about re-investing in security. Well, if you want agentic AI, you need a good rethink about how the LLMs work, how to better isolate control plane and data plane, limit context along security boundaries, etc, etc, etc.

But Agentic AI is the thing behind all the "stop hiring humans" advertising, and the idea that agents can replace low-code/no-code line of business stuff. So it's not surprising that's where the race has ultimately gone to. Rather than say, more muted NL processing stuff.
I know it's hard to predict where everything is going and everything could change, but in your experience, which is vastly more than I have in this space, is the Agentic AI race ... justified? Like despite the current drawbacks, there is something real behind the bullshit? And, if so, how much is real vs the marketing bullshit? I know you and @tomO2013 have written about this at length previously, so if nothing has changed since then, you know feel free to point to previous answers. :)
 
I hacked ChatGPT and Google's AI – and it only took 20 minutes

…It turns out changing the answers AI tools give other people can be as easy as writing a single, well-crafted blog post almost anywhere online. The trick exploits weaknesses in the systems built into chatbots, and it's harder to pull off in some cases, depending on the subject matter. But with a little effort, you can make the hack even more effective. I reviewed dozens of examples where AI tools are being coerced into promoting businesses and spreading misinformation. Data suggests it's happening on a massive scale.
 
1771530911528.png
 


Anyone seen this? It's a youtube video of George Will decimating Trump's Canada policy.
Coming from this prominent right wing Op-Ed author, it was quite eye-opening in its analysis, specifically of the recent WH meeting between T & Carney.
What meeting you say? Riiiiight. There was no such meeting.
It's a frankly astounding (IMHO) AI production.
Check out the thousands of comments it generated as well - 99% thanking Will for "his candid assessment" of T.... o_O

P.S. Note that the "alteration" flags etc. that Youtube is now showing were NOT present until today...

P.P.S. Video still available on Facebook:
 
Last edited:
This is an interesting article from the University of Oxford, UK. https://www.cs.ox.ac.uk/news/2356-full.html

The full journal referenced is here :

It's all somewhat self-explanatory - indiscriminate (key word) use of model-generated content in training material causes irreversible defects in the resulting models.

I can tell you that many companies that I have worked with, are using LLM's to generate training data to train other LLM's - in essence a form of recursive loop. This places a huge burden of responsibility on review and human audit.

There are legitimate reasons why a company may choose to go the 'model training a model' route.
For example, data privacy concerns may dictate that fictional but realistic training data is created (e.g. for training a model around PII such as healthcare info). This approach is often viewed as a workaround by to issues of privacy, data sovereignty and governance. Most importantly, executive leadership often see it as a means to accelerate time to market and the development of newer, more refined models.

It's really just food for thought.
 
On Microsoft:


As someone else said: M&M maker declares M&Ms to replace everyone's dinner in 18 months


Microsoft CEO Satya Nadella warned that AI will lose public support unless it's used to "do something useful that changes the outcomes of people and communities and countries and industries."


"We will quickly lose even the social permission to take something like energy, which is a scarce resource, and use it to generate these tokens, if these tokens are not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness, across all sectors, small and large, right?" said Nadella. "And that, to me, is ultimately the goal."

On the supply side, Nadella says that AI companies and policy makers must build out "a ubiquitous grid of energy and tokens," which is the task currently making it impossible to buy a stick of RAM at a reasonable price. But after that, he says it's on employers and job seekers to, more or less, just start using AI.

Even the one mentioned upside:

He did at least provide one real example of what he means by all this: "When a doctor can … spend more time with the patient, because the AI is doing the transcription and entering the records in the EMR system, entering the right billing code so that the healthcare industry is better served across the payer, the provider, and the patient, ultimately—that's an outcome that I think all of us can benefit from."

I wonder if I'll really want to spend more time talking to my doctor with an AI eavesdropper listening intently for reasons to reclassify my preventative care visit as a more expensive diagnostic visit (could we just redesign the US healthcare system instead?), but at least for some doctors, AI recording and note-taking tools have already been helpful. One study said that medical professionals reported "tremendous benefits" from using AI scribes, while calling for more research.

has known downsides. You don't remember notes as well if you didn't write them. Handwritten is the best, followed by typing. Auto transcription is the worst for actual information retention. That isn't to say there aren't situations where it can be useful, just like LLMs!, but ...


Yet this only gives a vendor competitive advantages when the data gathered is strictly internal. Microsoft has never seen fit to reveal live cross-corporate usage data for Excel or Visual Studio or Teams, let alone in a gamified leaderboard format. It needs to do it for Copilot, because the actual productivity gains of Copilot are not quantifiable or even visible. Nor are they becoming so.

Microsoft is forced to present Copilot usage by synthetic cohorts through undefined processes for undefined purposes because it is desperate to find a way to make people use the stuff. In general, people use productivity tools to the extent that it makes them more productive. You don't need to push actual usage after a sale unless there is a crisis in usage after a sale.

And of course confirmation that co-pilot was being allowed to read confidential emails:

 
Last edited:


Anyone seen this? It's a youtube video of George Will decimating Trump's Canada policy.
Coming from this prominent right wing Op-Ed author, it was quite eye-opening in its analysis, specifically of the recent WH meeting between T & Carney.
What meeting you say? Riiiiight. There was no such meeting.
It's a frankly astounding (IMHO) AI production.
Check out the thousands of comments it generated as well - 99% thanking Will for "his candid assessment" of T.... o_O

P.S. Note that the "alteration" flags etc. that Youtube is now showing were NOT present until today...

While we were watching other YouTube videos we were suggested to watch this one, but because I know that it never happened we didn’t watch it. There are so many fake AI and not that is becoming more difficult to navigate.
 
Back
Top