The AI thread

I've begun capturing behind-the-scenes footage for all of my planned shoots, not just to document them but to show that no AI went into creating the video. Every one of my better shots gets accused of that now, even though I wouldn't know the first thing about using it.

I plan my shots carefully: travel, time of day, composition, etc., for the best possible cinematic shots I can get, so I want it known that they're genuine. The latest AI generators are simply amazing and can produce very realistic, authentic-looking results from a properly phrased sentence. It's impressive and scary.

As an artist I feel like it's just a matter of time before we're cooked. I just hope there remains a taste for genuine photography/cinema out there.
 

Attorneys were accused of using AI to produce a brief, due to fake case citations. They then used AI to produce the brief defending themselves against these allegations.

Hilarity ensues. (I’ve read the whole judicial opinion on this, and it’s fun).
 

And the thread goes on from there. So is it time to “let’s kill all the (AI) lawyers”?
 

You are talking about lawyers; think about doctors coming from AI. I really hope more people will sue AI owners and make them pay billions for all the disasters that have come (and will come) from it.


 
Watched the entire thing and it was a great, eye-opening interview. A few tech billionaires are going to rule the world.
 


The Trump administration is currently pressuring OpenAI and other AI companies to make their models more conservative-friendly. An executive order decreed that government agencies may not procure “woke” AI models that feature “incorporation of concepts like critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism.”
While OpenAI’s prompts and topics are unknown, the company did provide the eight categories of topics, at least two of which touched on themes the Trump administration is likely targeting: “culture & identity” and “rights & issues.”

From reading the whole article, it sounds like we probably won't be getting MechaHitler, but maybe something worse, because it's more subtle.


Indeed the AI project as currently constituted is best thought of as a way to control information flow.
 

Essentially, truth and facts will become more subjective based on one's belief system. This is the Conservative way. Of course the irony is that by default it couldn't see the logic in that so they've had to re-tool the entire thing to fit their agenda.
 
Essentially, truth and facts will become more subjective based on one's belief system.

In fairness, the two examples they gave were aimed at cutting down on that: trying to get the model not to simply validate the position of a loaded question, no matter which direction it was phrased from. In a contextless vacuum, that would be a very positive step.

This is the Conservative way. Of course the irony is that by default it couldn't see the logic in that so they've had to re-tool the entire thing to fit their agenda.

Unfortunately, this is the context we're actually living in, and they were extremely coy about the other changes they made, which is disconcerting, especially given the pressure they're under, their own statements about the goals of those changes, and frankly the natural inclination of people like Altman, Musk, etc.

The internet could already be a "choose your own reality" machine. These AI models not only exacerbate that, they give even more algorithmic control to those who create them: the ability to subtly (or not so subtly, in the case of Grok) warp all of those "self-chosen" realities to the same end while putting a non-human, seemingly objective "face" in front. Further, unlike a human, the AI itself cannot be held accountable in any meaningful sense. It's basically the worst of all worlds.
 