The AI thread

I agree with the distinction between language and symbols, but I'd argue that the written form of language *is* symbols, so the two seem closely related in the developmental process, at least for humans.

I never said language wasn’t. It’s an “all people are mammals but not all mammals are people” situation. So I’m mostly saying that it’s a mistake to suggest language *is* abstract thought rather than a component or result of it.

Our habit of having an internal monologue can bias our views on what abstract thought ultimately is.

Hey that’s a blue truck, not a car! :)

That’s kinda my point though. Language is imperfect because the symbology of words can be coarse or detailed, broad or narrow, or even highly contextual. It changes over time. The process of translating one set of symbols (mental image) to another (words) while maintaining meaning is surprisingly inconsistent.

The fact that we can build social constructs on what words mean (a dictionary) and then choose to embrace or ignore that construct makes me believe language is just one form symbols can take, and likely not the foundational one. Rather, it's a form we use frequently because it enables social structures and other useful traits, such as passing skills on to our young.

Another mark against language as the basis for thought is fairly recent research suggesting that high-level mathematical ability isn't linked to the brain's language-processing centers.
 
Yes, I think there's no question it will displace people, but at the same time I see it as an opportunity to be more creative, at least from the perspective of an artist.

I played around with it today, and it's so good it's scary. As a photographer I only ever use these apps for color correction and cleanup, never to add or remove elements. But I took a coastline photo, drew a box around a section at the top of a cliff, and typed in "add a lighthouse", and right out of the box the result was so realistic it would have been hard to prove it wasn't real.

This raises the question of authenticity. The article I posted says camera manufacturers are looking at adding detection that will watermark images deemed fake, and I think that's the smart way to go. The only issue is that so far no phone manufacturers are getting on board, likely because they already add so much AI to their photos by default.
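
For what it's worth, the underlying idea is usually framed as provenance rather than fake-detection: the camera signs the pixels at capture, and any later edit breaks the signature. Here's a minimal sketch of that mechanism in Python, assuming a hypothetical per-camera secret key (real efforts like C2PA use public-key signatures and signed metadata, not a shared secret):

```python
import hashlib
import hmac

# Hypothetical per-camera secret; real schemes (e.g. C2PA) use
# public-key signatures so the verifier never holds the secret.
CAMERA_KEY = b"device-unique-secret"

def sign_image(image_bytes: bytes) -> str:
    """Signature computed over the raw pixels at capture time."""
    return hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, signature: str) -> bool:
    """Recompute the signature; any pixel-level edit breaks the match."""
    expected = hmac.new(CAMERA_KEY, image_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

original = b"...raw image bytes..."
sig = sign_image(original)
print(verify_image(original, sig))                  # True: untouched
print(verify_image(original + b"lighthouse", sig))  # False: edited
```
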
I won't be participating in their subscription scheme; I'll look for other solutions. In fact, renting software is, IMO, one of the most nefarious monetization schemes of our lifetimes. DRM is number two. I assume the for-sale competition will catch up eventually, and I will continue to resist. :)
 
@Cmaier

Your job is safe … for now 🙃. Hopefully you can give us a rundown? You would undoubtedly do it justice in a way that I could not.


 

Essentially, a lawyer relied on ChatGPT to provide him with case law he could cite in a brief. ChatGPT just made it all up. The court figured out that the cases don't exist, and issued an order to show cause why the lawyer should not be sanctioned.

There are actually several ChatGPT-based tools in beta now that won't hallucinate like this. But asking ChatGPT to give you cases is profoundly stupid. You have an ethical duty to actually read the damned cases and "shepardize" them to make sure they haven't been overruled or distinguished by subsequent cases. Typically you have a paralegal "cite check" everything to make sure the cases exist, that you have cited them properly, that the page numbers you cite are correct, etc. So even if you get a list of cases from ChatGPT, this never should have happened.

What's worse is that they actually submitted the supposed text of the fake cases. No idea where that came from; still trying to figure that part out, but the lawyer in question seems to be claiming ChatGPT made up the entire text of the cited opinions as well. But if you're going to submit case law, you'd download it from Westlaw or Lexis, or possibly from PACER or the court's website, not from some random source. Very strange.

So I don’t see the order to show cause going well for the lawyer(s) involved. Sanctions will likely be imposed.
 
Oh, wow, it gets worse. It seems everyone was onto them earlier, and the lawyers involved were ordered to submit the actual text of the cases because nobody could find them. Since they were on notice, if it were an honest mistake they'd certainly have figured it out at that point. And yet, instead, they submitted fake case text.
 

Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization

The chatbot is named "Tessa" and will replace the entire Helpline program starting June 1.
WOW! Since I was unfamiliar with the website where this appeared, at first I thought this was some hideous, unfunny joke, but I checked around and saw that yes, this news is apparently true. Very bad move on the part of NEDA; they might as well pack up and shut down their whole organization, as no one with an ED, their family members, or professionals in the field will trust that organization anymore.
 
I wonder whether AI could simply replace the big software houses. Train it in program construction, allowing the average user to say "I need an app that does a thing, like thisaways," and it puts together what you want and can tweak it when it is not exactly what you want. If we get there, suddenly x86 compatibility will no longer have any meaning.
 
This will probably come up first with low-code platforms, where things are a bit more constrained, especially if you can build a model that takes input and picks templates to fill out, which doesn't even necessarily need to be GPT-level. But folks working with low-code platforms are already saving money by not hiring engineers to build their stuff, so in the short term it's more a question of making those non-engineers more productive.

But you really need a model that doesn't hallucinate techniques that don't exist, or misinterpret how the language it's spitting out works. For certain "how do I?" questions it seems to spit out valid code for scenarios you could also find on Stack Overflow, but beyond that it does things similar to this legal fun.
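
One cheap guard, though it only catches the shallowest failures: mechanically verify that generated code at least parses and that the modules it imports actually exist, before anyone trusts it. A toy sketch in Python (the helper name and the fake module are made up):

```python
import ast

def sanity_check(generated_code: str) -> list[str]:
    """Flag LLM output that doesn't parse, or that imports
    modules that don't exist on this system."""
    try:
        tree = ast.parse(generated_code)
    except SyntaxError as e:
        return [f"does not parse: {e}"]
    problems = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                try:
                    __import__(alias.name)
                except ImportError:
                    problems.append(f"unknown module: {alias.name}")
    return problems

# A hallucinated package is caught before anyone runs the code.
print(sanity_check("import numpy_turbo\nx = 1"))  # ['unknown module: numpy_turbo']
```

Of course this says nothing about whether the code does what was asked; it just rules out the "doesn't even exist" class of hallucination.
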
 
This came up in the Guardian a day or two ago. Not paywalled. It’s a shrewdly insightful and delightful take on why one segment of our population is suddenly wary of AI. I think @Yoused for one might nod at good advice near the end.
 
Not a fan of Doug Rushkoff. As someone who has worked at tech companies, I've found his thoughts obvious, often projecting "future theories" about concepts that were already discussed and discarded several years earlier.

I still say the threat to our employment rate is the biggest risk... and if it happens too quickly, how will our economies grapple with it? (As we've seen all too well, government is sloth-like when it comes to reaction time.)
 
The case with the lawyer (pun intended) is now also covered by Ars Technica in more detail:

A few excerpts:
"greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,"
I believe Cliff wrote that he should have verified the sources anyway.

The lawyer's affidavit said he had "never utilized ChatGPT as a source for conducting legal research prior to this occurrence and therefore was unaware of the possibility that its content could be false."
If he had no idea how it works because he was using it for the first time, then he definitely should have verified the results.
 
I did an experiment with ChatGPT yesterday and had it write me a legal brief.

1) you have to give it follow-up prompts to get it to insert case law.
2) it refuses to invent the CONTENT of the case law. In other words, it did make up cites for me, but when I asked to see the text of those cites it said it couldn’t do that.

So there are holes in their story.
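
For anyone who wants to reproduce the experiment, it amounts to a two-turn conversation. The web UI isn't scriptable, but the API version is the closest reproducible analogue; roughly like this with the openai Python package as it existed in mid-2023 (prompts paraphrased, key redacted):

```python
import openai  # the 0.x openai package, current as of mid-2023

openai.api_key = "sk-..."  # redacted
model = "gpt-3.5-turbo"

# Turn 1: the brief itself comes back without citations.
history = [{"role": "user",
            "content": "Write a legal brief arguing my side of a contract dispute."}]
resp = openai.ChatCompletion.create(model=model, messages=history)
history.append({"role": "assistant", "content": resp.choices[0].message.content})

# Turn 2: only the follow-up prompt elicits the (fabricated) case cites.
history.append({"role": "user", "content": "Add supporting case law citations."})
resp = openai.ChatCompletion.create(model=model, messages=history)
print(resp.choices[0].message.content)
```
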
 
Looks like the law firm hired another law firm to represent them in the show cause hearing. Wish it was going to be televised.
 
Could it be a function of which ChatGPT version they used (the paid version is a different engine than the free one)?
 
I doubt it. And I'm guessing he used the free version anyway.
 
Really interesting article on AI and journalism:


So, now we have AI, and it's doing exactly the same thing. There is no difference here, except that an AI can do it much faster, much cheaper, and from far more sources. But, the overall concept is identical to what we see across almost all newspapers every day. It is taking information from different sources, quoting it, rephrasing other parts, and turning that into an article ... without properly linking or compensating those sources. Just like humans do.

And which I just did, although to be fair I linked to it!

If AIs need to compensate and properly link to their sources, then surely the rest of the media industry needs to do the same. Ritzau charged money for an article where they did no journalism of their own, but they did not pay their sources, so why should AIs?

He does note that AI is technically not capable of this yet, but probably will be soon.

The next segment of the article is about the decline of social traffic and not directly related to AI but still interesting.
 
I think it's fair to say AI could very much disrupt the employment situation in a great number of industries.

As someone who works in healthcare, I'm not terribly concerned; rather, I'm very excited about what future AI technology will bring to healthcare.

As a Clinical Pharmacist in a hospital, a lot of my job is solving the puzzle of which medications to provide given complex underlying health problems and the need for many medications, some of which have undesirable interactions. There are interaction checkers for various drugs and conditions/diseases, but they only compare two things at a time, and there's no real analysis of relative risk or severity; even then, plenty of drugs with "severe" interactions are given concurrently all the time. There's so much that could be done to improve clinical guidance for prescribers and to better assess interactions and outcomes. There are endless amounts of collected patient data that could be analyzed to give us a better understanding of the medications we work with. And that's just the tip of the iceberg.
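
To make the pairwise limitation concrete, here is a toy sketch of how today's checkers behave: every pair in the medication list is looked up independently, so a combined effect across three or more drugs that no single pair shows is invisible. (Drug names and the severity table are invented for illustration.)

```python
from itertools import combinations

# Invented interaction table: unordered drug pair -> severity label.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "severe",
    frozenset({"lisinopril", "spironolactone"}): "moderate",
}

def check_med_list(meds: list[str]) -> list[tuple[str, str, str]]:
    """Pairwise lookup, mirroring current tools: each pair is checked
    in isolation, with no weighting across the whole regimen."""
    hits = []
    for a, b in combinations(meds, 2):
        severity = INTERACTIONS.get(frozenset({a, b}))
        if severity:
            hits.append((a, b, severity))
    return hits

print(check_med_list(["warfarin", "aspirin", "lisinopril", "spironolactone"]))
# [('warfarin', 'aspirin', 'severe'), ('lisinopril', 'spironolactone', 'moderate')]
```
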

That said, I don't think we will be trusting AI to make medical decisions anytime soon. Especially considering how regulated healthcare is, it's hard to imagine an AI being certified to make anything close to clinical decisions or recommendations, particularly when its decision-making is not well understood. I suspect it will be used much more as a resource for recognizing possible errors, helping in individual situations, and analyzing data for guidelines than to essentially replace clinicians.

I suppose it's possible for a system to be made that could take in all the patient information and decide what interventions to make… but it's hard to imagine a system that truly understands all the intricacies, even minor, non-medical ones: twice-a-day dosing might be better for ensuring compliance in certain patients, or medication X might be easier to swallow.

Another big part of my job is patient consultation: educating patients on their medications and inspiring medication adherence and lifestyle changes. I don't think machines can replace the human connection. Sadly, healthcare in many respects has become depersonalized, but at least in some respects the healthcare system is recognizing the need for personal connection.

My biggest concern with AI is the creation of fake information/media, including deepfakes. That, and chatbots replacing old-fashioned research, with people relying on a single source which may not actually be correct. It seems to me there needs to be legislation around deepfakes and AI-generated content.
 