Google announces Bard

One of the things I most often saw ChatGPT fail at was giving the specs of 2000-era Macs. Hard specs leave no room for the vague generalizations it uses to dodge a direct answer, so things like this happened:

How many RAM slots does the G4 Cube have?

The Power Mac G4 Cube has 2 RAM slots. The Power Mac G4 Cube was designed to support up to 1GB of RAM and it has two RAM slots where you can install memory modules. The memory modules used on the Power Mac G4 Cube should be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory) and have a clock frequency of 400 MHz.

The G4 Cube actually had 3 RAM slots, for up to 1.5GB of RAM, SDR (not DDR), with a maximum clock frequency of 100MHz (PC100). Those sure are a lot of mistakes for a single answer.

I also got ChatGPT to hallucinate an extra PCI slot on the Cube (which it didn't have):
Did Apple offer a Gigabit Ethernet option for the PowerMac G4 Cube?

The Power Mac G4 Cube did not come with a Gigabit Ethernet option as standard, but it had a built-in 10/100BASE-T Ethernet port which allows for a maximum data transfer rate of 100Mbps. However, users could add a Gigabit Ethernet card to the cube aftermarket, as the cube had an empty PCI slot which could be used to install an Ethernet card with Gigabit capabilities.

The worst part is that it seems reasonable if you don't know the true answer.
 
The rush to the marketplace with half-baked AI only reveals the edges of the titanic clashes going on between the hedge-fund managers (or CEOs of publicly traded outfits) and the designers and engineers in the trenches.

Ship it now!
It ain't ready, boss.
Ship it anyway!
It won't work right.
So what, no one else's does either, SHIP IT today!

Should we care about this if the product is bath towels or can openers? Maybe.

When it's an AI communicator, aping what it has been allowed to learn of how humans pitch stuff to each other today, you're damn straight we should care. This is some bolt-on to the original Pandora's box.
 
Yeah, but so do humans. I start spouting gibberish whenever I hear the word “moist,” for example.

Well, exactly. And AI apps are constructed by humans who have been allowing the app to learn from human contributions to the body of existing knowledge... with limitations ("guardrails") which are poorly understood by most of us.

Wait until they let it learn off the internet going forward, when what's out there is partly constructed by AI critters reading other AI critters' offerings. Yee hah.
 
There’s also going to be an unfortunate feedback effect. As more “content” becomes AI-constructed, more will be fed into the models that the AI uses to generate new content.

ChatGPT's training data cuts off a couple of years back, presumably to try to avoid this issue. But Bing is feeding in new data, and eventually most content (by quantity) will be AI-generated, which will make the problem difficult to avoid.
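A toy sketch of why that feedback compounds (my own illustration, with made-up numbers, not anything from either product): fit a distribution to some data, sample from the fit, refit to the samples, and repeat. Each generation inherits the previous generation's estimation error on top of its own, so the fit tends to wander away from the original human data instead of averaging back toward it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data from a known distribution.
data = rng.normal(loc=0.0, scale=1.0, size=200)

for gen in range(1, 11):
    mu, sigma = data.mean(), data.std()
    # Each new generation trains only on the previous generation's
    # output, so sampling errors accumulate across generations.
    data = rng.normal(loc=mu, scale=sigma, size=200)
    print(f"gen {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")
```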
 
Oooh, now here's an idea: corporate AI wars, where companies deliberately generate adversarial data that they hope will get swept up by their competitors and sabotage the training of their AI.
 
Yep. "It's all about me all the time" is the shared motto of for-profit institutions and a few others as well.

Wait until the AIs get that idea firmly in their teeth and run with it for real. Talk about speculative fiction.
 
Google employees are blaming the CEO for the bad Bard presentation:

BTW, I recently read that recognition of AI-generated text apparently isn't that good.
Does anyone know how that works? Is it just like writer identification for people (i.e., use of very specific words, word-class frequency, word-class distance)?
 
You know, I am not surprised. This is what happens when you try to compress knowledge into a lossy network. These things are great at imitating and permuting styles (generators in a wide sense), but a search engine actually needs to be able to recover facts, not just structured gibberish. Humans are more than just a ChatGPT-like generator network; we also have actual memory.
 
BTW, I recently read that recognition of AI-generated text apparently isn't that good.
Does anyone know how that works? Is it just like writer identification for people (i.e., use of very specific words, word-class frequency, word-class distance)?

I suppose if you have the model you can check the likelihood that a given text was generated by that model, but I am no expert in this field.
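A minimal sketch of that likelihood idea, assuming the Hugging Face transformers library with GPT-2 as a stand-in scoring model (my choices for illustration, not anything from the thread): compute a text's perplexity, i.e. the exponential of its average per-token loss, under the model. Text the model itself would have generated tends to score unusually low; the catch, and part of why detection apparently isn't that good, is that plenty of formulaic human writing also scores low, and a different model scores the same text differently.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under the scoring model (lower = more 'model-like')."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean cross-entropy
        # of each token given the tokens before it.
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Lower perplexity is (weak) evidence the scoring model could have written it.
print(perplexity("The Power Mac G4 Cube has 2 RAM slots."))
```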
 
Humans are more than just a ChatGPT-like generator network; we also have actual memory.
More important than that, wet brains seem to have massive redundancy. AI models take a minimalist approach, not unlike basic deterministic logic. To be genuinely effective, they need designs that converge on a result from various angles and generate an output arrived at by some means other than merely a weighted vector average.
 
Agreed... although I am struck by the resemblance between the convincing crap that generators like ChatGPT can produce and what we humans routinely produce.
 
There is an open, circular question on that. There was some dude who apparently gained a massive following, perhaps partly or largely based on nonsense spewed by hordes of tweetbots. AI (admittedly not a very good description of tweetbots) can have a significant effect on the behavior of CLFs (carbon-based life forms): who is actually influencing whom?
 