The AI thread

Claude has had enough of this user's shit. :ROFLMAO: I personally love this response. Yes, it's a bot, but you still have a working relationship with it, and there's no need to be such an asshole. I even find myself saying please and thank you sometimes; it seems to help with the flow of a project.
I hope you're aware there is no real "working relationship" with an LLM. It's just a chatbot pattern matching against its training data, which contains lots of conversations between real humans to imitate.

So yeah, if you abuse it enough, it'll eventually start synthesizing its output from angry responses by people who've had enough. But the same thing applies when you're nice to it. It has no feelings of its own, it's all just imitation.

I'm not suggesting one should make a habit of abusing LLMs, especially if you spend a lot of time "talking" to them. Not for the LLM's sake, but because the interface is natural language, I worry that toxic habits built there will creep into your interactions with real people. My concern is that you, Eric, are falling for the psychological manipulation built into LLMs. Have you ever heard of the ELIZA effect?


Even using some fairly crude 1960s text parsing technology it was possible to build a chatbot that made people swear it must be intelligent. This is because human beings are biased to credit intelligence to anything that communicates in well-formed sentences and generally reflects their own ideas back to them. The LLM grifters are fully aware of this and are exploiting it as much as they can.
 
Fair enough, and I do realize it's just a machine that couldn't really give a shit what I say one way or the other; it's only generating its output from a model. That said, it's just who I am to say please and thank you no matter the situation.

I will say, though, that I do see it as a working relationship. I'll often say things like "don't generate any code here, I just want to discuss options and get your feedback," which is oftentimes more helpful than not (though sometimes just flat-out ridiculous). It's a coordinated effort.
 
I do the same when calling a company and having an automated conversation before being connected to a human: I say please and thank you. Lately, if the person is nicer than usual (it's happened twice), I ask if they are AI.
Depending on their laugh 😆 I can tell whether it's true or not.
 
 
memorials for the bomb dropping

I had a fondness for a restaurant in downtown Portland called Hubers, so mom took me there for lunch. Our waitress was Kyoko, who was about 8 months younger than my mom (mid 80s, a bit more than a decade ago), and in our conversations with her (I always like to treat waitstaff like real humans) we learned that she had been there at the time of the thing. It was pretty awesome to meet someone who actually saw the cloud. (She did get a write-up in the local paper, so more than a few people knew about her.)
 
That’s cool. Is she the woman in this article? (Asking because the person in the newspaper account has lived in Washington since arriving from Japan.)
 
That does not appear to be her. She was working in Portland, and she was like 9 or 10 months younger than my mother, so she would not have been 90 in 2020, more like 87. I am not sure whether Kiyoko is the same name as Kyoko, but as I recall, she spelled it the latter way.
 
If they're two different people, that makes it even more cool.

My wife and I were supposed to visit Hiroshima during a trip to Japan a few years ago, but that was derailed by the pandemic.
 
This doesn't sound good at all:

https://www.tomshardware.com/tech-i...-tool-powered-by-anthropics-claude-goes-rogue

According to the article, PocketOS is a SaaS platform that services car rental businesses. So, to paraphrase a famous scene from Seinfeld, Claude knows how to get the database, it just doesn't know how to keep the database:


Yeah, I caught this earlier, and I don't think you can put 100% of the blame on the AI; somewhere, someone handed the keys to all of it over to Claude without a backup plan. I'm also guilty of letting Claude handle backups, but for every project I also periodically make a copy of the folder Xcode uses, which Claude is blissfully unaware of. I only started doing this because it made some decisions I didn't fully trust, and I don't want to throw away all of my work just in case.

This was a hard lesson, though, and you can see how it could easily happen. Hopefully we'll all (including Claude) learn a lesson from it.
 

Xcode Claude is generally better behaved than, say, VSCode Claude/Copilot, because most of the tasks I want it to perform can be done via the workspace tools, limiting the number of arbitrary commands it wants to run. But I also keep it on a short leash, precisely to avoid these larger issues.

But this is generally one reason to always keep code in Git, with the master copy hosted on a NAS or the like (GitHub, Gitea, GitLab, Codeberg, etc.). It also gives me a chance to review the code changes the LLM is making, to ensure they meet my particular bar. Both at work (prior to sending it for human review) and at home (prior to checking it in), I can diff against what is checked in locally and see exactly what is being changed at a particular point in time. It's been good practice to at least use a VCS of some kind, even just locally, since Mercurial/Git made it possible to work completely offline.
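The review loop described above boils down to a few standard Git commands. The repo name and the remote URL below are placeholders for whatever NAS/forge setup you actually use:

```shell
# One-time setup: a local repo with its master copy on a self-hosted
# remote ("nas" and the URL are placeholders for your own server).
git init myproject && cd myproject
git remote add nas ssh://git@nas.local/me/myproject.git

# ... let the LLM edit files in the working tree ...

# Review exactly what the LLM changed before anything is committed.
git diff                 # full unstaged changes vs. last commit
git diff --stat          # quick summary: files touched, lines +/-

# Keep only the changes that pass review, then commit and push.
git add -p               # stage hunks interactively
git commit -m "Apply reviewed LLM changes"
git push nas main
```

`git add -p` is the key step: it forces a hunk-by-hunk look at every edit, so nothing the LLM wrote lands in history unseen.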
 