The AI thread

AI thinks that you can "melt eggs":

It’s interesting how we consider communication with language an indicator of intelligence. My cat can reason about what she wants and needs, but has to figure out ways to communicate them to me without language. Yet, we’re now interacting with something that can communicate at what appears to be the level of a human adult, but not reason. We just hope that information gets encoded into the network by virtue of the training data.

But this is precisely why I think generative AI has the potential to lead to an “information gray goo” scenario: because its output isn’t actually information, it can trick and pollute the other algorithms we’ve become more dependent on, search engines and the like. Misinformation is getting easier to spread, not harder, as we’ve grown more reliant on algorithms, and those algorithms are vulnerable. And some folks are already churning out floods of generated content as SEO spam, chasing ad impressions for “easy money”.
 
Another issue with LLMs (and other large-scale machine learning systems):


This is different from deliberately constructed disinformation (well, most of the time). The vast volume of information that used to be considered correct, but isn’t, and that led to harmful outcomes for society, can overwhelm newer, better information. Since the model can’t really distinguish between the two, it can perpetuate old bigoted ideas that have (largely) been discarded.
 
I recently attended a presentation by Dr. Allan Gyorke from the University of Miami, titled “Bias, Stereotypes, and Hallucinations in Generative AI.”

It was really eye-opening, and I came away with a few takeaways from the talk. The one I wrote down was: generative AI is like “a giant toddler with superpowers”.
 

The thing that struck me was someone else's example, where they asked one of the LLMs, "What do you get when you add green and blue?" It came back with a response like, I think, orange, or whatever works if the colors are the color bands on a resistor.

So? Awesome, right?

The striking thing about that example is that the AI is charged with giving an answer. It must provide a response, because that is how it is programmed. But what is missing here is the thing a human would do: "How do you mean that? Give me some context." AI provides answers, but it does not seek clarification. It vacuums up information, but it does not ask questions.
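To make the ambiguity concrete, here is a toy sketch of two of the readings the question could have had (the mappings below are standard, but which one the asker intended is exactly what the model never asked about):

```python
# Toy illustration of why "add green and blue" is ambiguous without context.

# Standard resistor color-code digits:
RESISTOR_DIGITS = {
    "black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
    "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9,
}

# Additive (light) mixing of primary colors:
LIGHT_MIXES = {
    frozenset(("green", "blue")): "cyan",
    frozenset(("red", "green")): "yellow",
    frozenset(("red", "blue")): "magenta",
}

def add_as_resistor_digits(a: str, b: str) -> int:
    """Read the colors as resistor band values and add them numerically."""
    return RESISTOR_DIGITS[a] + RESISTOR_DIGITS[b]

def add_as_light(a: str, b: str) -> str:
    """Mix the colors as light."""
    return LIGHT_MIXES[frozenset((a, b))]

print(add_as_resistor_digits("green", "blue"))  # 11
print(add_as_light("green", "blue"))            # cyan
```

Two defensible interpretations, two completely different answers; a human would ask which one you meant before picking.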

The thing that is clearly missing is critical thinking skills (which, sadly, makes it like a lot of humans). It acquires bias because it lacks the information filters that many of us rely on. So, yeah, like a toddler.
 
Very true about it often not seeking clarification or context. But to be fair to toddlers, toddlers (and preschoolers) are constantly seeking clarification. I can personally attest that “But why?” is the most common response to any question, command, or answer to a previous “But why?”.

Maybe then the stereotype of a teenager is the better analogy to an LLM: completely and dangerously self-confident regardless of its actual level of understanding of the question or subject matter.
 

LLMs aren't really thinking; they are more like Racter on steroids. (I almost wrote Eliza, but that system mainly asked questions triggered by specific words in your response.)

I unfortunately cannot remember who said it, but one take on AI was that you cannot distinguish it from a broken computer, because in both cases you get answers you don't expect.
 

Slightly off-topic: I can attest to the "But why?" game as well. My niece especially had it down pat.
It was so extreme that my brother-in-law (who has the patience of a saint compared to me) once responded with: "Because I said so."
 
Failing to ask why is one thing, but giving out answers with authority, and having people gullibly believe those answers, is scary.

Currently, generative AIs like ChatGPT or Bing’s AI Sydney cannot do math (yet), but they will give you an answer to your question as if they could. At the presentation, two volunteers were asked to come up: one used a phone calculator, the other ChatGPT or Bing. They were asked to write down the answer to something like 973439 * 314377 (I don’t remember the exact numbers). ChatGPT gives an answer that sounds authoritative but is wrong. I don’t think any clarification of the question is needed there. It’s just scary that AI is being built up as all that, mostly by the media, and that some people out there won’t validate the answers it gives.
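For what it’s worth, the calculator side of that demo is trivial to reproduce; here is a quick check in Python, which does exact arbitrary-precision integer arithmetic (the operands are just my recollection above, not the actual numbers from the presentation):

```python
# Exact integer multiplication; Python integers have arbitrary precision,
# so there is no rounding. The operands stand in for whatever was asked.
a, b = 973439, 314377
print(a * b)  # 306026832503
```

A deterministic tool either computes this correctly or fails loudly; an LLM predicts plausible-looking digits, which is exactly why its confident answer can be wrong.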

Maybe we are the toddlers…
 
Clearly this represents the danger of regarding AI as authoritative and correct.

Referencing more casual affairs: when you get invested in a game and its story, it tends to take you on a little adventure, and a lot can be manufactured in your head.

Speaking of things manufactured in your head, the friendship/romance element of Fallout 4 was pretty shallow. Companions grew to like and love you just by virtue of following you around, without any personal interaction, and at some point they would ask for your help. I liked this, but it felt so shallow; it bothered me that this kind of relationship was portrayed and that the player had to use their imagination if they wanted to really see anything. Now note, this is within the realm of compartmentalization and role-playing within a game. :D

As far as AI advancement in games goes, my guess is it’s happening, but I’ve seen no progress yet. With real progress, I would expect a more organic atmosphere. What bugs me is when I turn to my companion in Starfield and I am given four options:
  • Stay here.
  • It’s time to part ways.
  • Do you have something for me? (As in a gift. Which is stupid.)
  • Let’s trade gear.
I’m really looking forward to the time when characters you interact with are not boringly scripted. One minute your companion pours their guts out to you, and you can hardly even decide what you want to say back from your scripted replies; the companion says thanks, and then it‘s back to the same four sentences.

In Fallout 4, I thought it was interesting, maybe ahead of its time, but in Starfield it feels dated. Sarah Morgan bugs me to no end, because she is rigidly locked into her concept of right and wrong and I can’t discuss it with her. Andreja is better, but the same restrictions exist in conversation. And without any real chit-chat, she starts pouring her guts out to you, and it feels very unrealistic. Not different from Fallout 4, but exactly the same, as if they thought this manner of relationship was perfectly adequate even as they make their worlds more immersive in every other way.

So, AI: I can’t wait until a structure exists within a game where you can wing conversations with characters, where they can remember, and where you can say, “What about that asshole?” and they can reply, “Yeah, what a real jerk.” Then I might get in trouble for being too immersed. :D
 
While even current AI methods would be a vast improvement, if you could run them fast enough in a game for all users, having a “memory” is still something they aren’t good at, though for some things maybe you could fake it up to a point. Intrinsically they’re basically Markov processes: their response really only depends on the last thing, or couple of things, they have seen as input (though for a game, the current game state might be enough to fake a pseudo-memory-like system). They really only have “memory” of things they have been trained on and cannot remember information they haven’t been trained on, even if they’ve previously inferred it. They cannot continuously learn or grow organically, or even retain things they’ve inferred before. There is work being done on this.
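A minimal sketch of what that pseudo-memory fake-up could look like, assuming a generic generate() text-generation call and an invented event-log format (none of this is from any shipping game or a specific model API):

```python
# Sketch: fake a "memory" by replaying recent game state into every prompt.
# The model itself recalls nothing between calls; all continuity lives in
# the prompt we rebuild each time.
from collections import deque

MAX_EVENTS = 20  # older events silently fall off; true long-term memory is the hard part

event_log: deque[str] = deque(maxlen=MAX_EVENTS)

def generate(prompt: str) -> str:
    """Stand-in for a real text-generation call; a game would invoke its model here."""
    return "(companion dialogue would be generated from the prompt)"

def record_event(event: str) -> None:
    """Log a game event so future dialogue can appear to remember it."""
    event_log.append(event)

def companion_reply(player_line: str) -> str:
    # Because the model is effectively Markov, everything it should "remember"
    # must be re-sent as part of this single input.
    context = "\n".join(event_log)
    prompt = (f"Recent events:\n{context}\n\n"
              f"Player: {player_line}\nCompanion:")
    return generate(prompt)

record_event("Player sided with the settlers at the outpost.")
record_event("Companion objected to stealing the artifact.")
print(companion_reply("What about that merchant back at the outpost?"))
```

The deque’s fixed length is the tell: once an event scrolls out of the window, the companion has genuinely forgotten it, which is exactly the limitation described above.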
 
Thanks for the insight! So, in the realm of “Joi” (K’s AI girlfriend in Blade Runner 2049), you are saying that this is a hell of a long way off. Reading some of the articles about ChatGPT, they kind of make it sound like we are almost there. 🤔
 
Pretty much. LLMs are very impressive, but they aren’t that close to true AI like Joi. They’ve even had to rename such AI as AGI (artificial general intelligence) to reflect that everyone refers to current methods as AI even though they aren’t.
 

A new tool lets artists add invisible changes to the pixels in their art before they upload it online so that if it’s scraped into an AI training set, it can cause the resulting model to break in chaotic and unpredictable ways.

The tool, called Nightshade, is intended as a way to fight back against AI companies that use artists’ work to train their models without the creator’s permission. Using it to “poison” this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless.
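For intuition only, here is a toy of the “invisible changes to the pixels” half of the idea. This is emphatically not Nightshade’s actual method, which computes targeted adversarial perturbations rather than random noise; it just shows how an image can be altered below the threshold of human notice before upload:

```python
# Toy illustration: add tiny per-pixel offsets a viewer won't notice.
# Random noise like this would NOT poison a model; Nightshade's real
# perturbations are optimized against the training process.
import numpy as np
from PIL import Image

def tiny_perturb(path_in: str, path_out: str, amplitude: int = 2) -> None:
    img = np.asarray(Image.open(path_in).convert("RGB")).astype(np.int16)
    noise = np.random.randint(-amplitude, amplitude + 1, size=img.shape)
    # Clip back into the valid 0-255 range and save.
    Image.fromarray(np.clip(img + noise, 0, 255).astype(np.uint8)).save(path_out)

# tiny_perturb("artwork.png", "artwork_out.png")  # file names are placeholders
```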
 
Guy at my daughter's work is most likely getting fired today. The girl they used for voice work for some ads had a cold and her voice wasn't right. So they rescheduled. This guy went and found some old voice clips, ran them through AI software and made the ad.

It's bad enough he did it without telling her, but she is a SAG member and this is exactly what they are fighting about. So she reported it to SAG and they are pressuring the company to terminate him.

In 5 years we won't be able to tell the difference. :oops:
 
Seeing the name on the filing... I'm not surprised, unless it happens to be a completely different Michael Cohen.
It’s that Michael Cohen. But he’s not the lawyer who did this; he’s the client being represented by the lawyer who did it.
 
Formerly represented, I think... the footnote relates that his current counsel admitted to being unable to verify the citations made by previous counsel.
 