I hope you're aware there is no real "working relationship" with an LLM. It's just a chatbot pattern-matching against its training data, which contains lots of conversations between real humans to imitate.

Claude has had enough of this user's shit.

I personally love this response. Yes, it's a bot, but you still have a working relationship with it, and there's no need to be such an asshole. I even find myself saying please and thank you sometimes; it seems to help with the flow of a project.
So yeah, if you abuse it enough, it'll eventually start synthesizing its output from angry responses by people who've had enough. But the same thing applies when you're nice to it. It has no feelings of its own; it's all just imitation.
I'm not suggesting one should make a habit of abusing LLMs, especially if you spend a lot of time "talking" to them. Not for the LLM's sake, but because the interface is natural language, I worry that toxic habits built there will creep into your interactions with real people. My concern is that you, Eric, are falling for the psychological manipulation built into LLMs. Have you ever heard of the ELIZA effect?
ELIZA effect - Wikipedia
Even using some fairly crude 1960s text-parsing technology, it was possible to build a chatbot that made people swear it must be intelligent. This is because human beings are biased to credit intelligence to anything that communicates in well-formed sentences and generally reflects their own ideas back to them. The LLM grifters are fully aware of this and are exploiting it as much as they can.
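To give a sense of how little machinery that "crude 1960s text parsing" actually needed, here's a minimal ELIZA-style sketch in Python. The rules and phrasings are my own invented examples, not Weizenbaum's original DOCTOR script; the technique is the same, though: match a keyword pattern, swap first- and second-person words, and echo the user's own statement back as a question.

```python
import re

# First/second-person swaps so the echo sounds directed back at the user.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your",
    "am": "are", "you": "I", "your": "my",
}

# Hypothetical keyword rules: pattern -> response template.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {}."),
]

def reflect(fragment: str) -> str:
    # Swap pronouns word by word; leave unknown words unchanged.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)).rstrip(".!?"))
    return "Please go on."  # content-free default when nothing matches

print(respond("I feel like my boss ignores me"))
# -> Why do you feel like your boss ignores you?
```

That's the whole trick: no understanding anywhere, just regex matches and pronoun swaps, yet the reflected question reads as if the program is listening.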