The AI thread

Shit, throw a little AI in there and we may have a solution to end the energy crisis.

 
Shit, throw a little AI in there and we may have a solution to end the energy crisis.

It feels like one of two things happened:

1) They never played Horizon Zero Dawn.
2) They played Horizon Zero Dawn and thought "Faro's tech is cool, let's build that"
 
Outside the realm of existential threats to humanity and really stupid lawyers, AI continues to make its way into real, shipping products that matter to the general public. Microsoft is now pushing Copilot, a ChatGPT-based digital assistant, to Windows 11 public testers. They've already added similar features to the Edge browser and to Bing search in the Windows taskbar. Copilot itself appears as a sidebar where the user can ask questions of the assistant.

[Image: the Windows Copilot sidebar in Windows 11]


Microsoft continues to follow the Google model, because here's the kicker:

Copilot will also launch with Bing ads right out of the gate. Microsoft will serve you ads that the company thinks are relevant.

More from Ars Technica:


The wasteland of telemetry spying and intrusive advertising continues apace. This time with the added functionality of AI. Think "Clippy on cocaine".
 
Hopefully "Copilot" will be less annoying that that damn paperclip.

Considering that we've already had to deal with autogenerated SEO sludge, and it's starting to kick into high gear with the GPT hype, I have a feeling it might be less annoying, but not more useful.

Recently had folks talking about wholly generated "articles" showing up in Google results, which led me to this: https://www.tffn.net/how-long-does-your-hair-need-to-be-to-wax/

At least they are honest that it's fully generated, because it's basically broadcasting what the future of SEO sludge looks like. And we'll be paying folks in Africa $2/hr to sort through it to avoid re-incorporating it into the data set of the next generation of LLMs.
 
My partner manages a cancer treatment team and arrived home some two weeks ago having discovered that some of her staff were using ChatGPT to create emails - and that “we might as well embrace it”. I pointed out the risks around its tendency to hallucinate and sent her several articles and a paper for her senior team to discuss, resulting in a policy for its responsible use: always check any ‘factual’ claims it makes, and always review and revise to make sure its ‘style’ and ‘tone’ are appropriate. The policy also emphasises never uploading any patient, staff or other personal information when crafting prompts

So here we are. As @Colstan points out, (narrow) AI is steadily making its way into the real world

Several points have occurred to me recently from some of the memorable discussion in this thread. I wasn’t sure they were worth committing to a post but…well, anyway!

1. @dada_dave mentioned will and @Huntn - always quick to identify the big questions! - consciousness

There’s quite a lot to say around this, but a quick reduction is that if AI, modelled as it is on our imperfect understanding of the brain, is at all a good approximation of how our brains do their work, then we can run this in reverse as well. Among the possibilities that might apply to both is that neural networks might be non-linear and chaotic, i.e. deterministic yet unpredictable

This suggests that an ‘always-on’ learning system will (a) not give the same output twice for the same inputs and (b) may not ‘know’ or be able to reproduce how it arrived at a particular output. My suggestion? That a conscious human - and a conscious AI if such a thing ever arises - might experience these unpredictable outputs as the operation of ‘free will’…
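
As a toy illustration of ‘deterministic yet unpredictable’ (nothing to do with a real neural network - just the classic logistic map), here is a sketch of how a fully deterministic rule can make two near-identical starting points diverge completely:

```python
# Toy illustration only: the logistic map, a textbook deterministic-but-chaotic
# system. Two starting points that differ by one part in a billion end up
# completely decorrelated after a few dozen iterations.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={a:.6f}  b={b:.6f}  diff={abs(a - b):.6f}")
```

Within a few dozen steps the difference is as large as the values themselves, even though every step was computed exactly - the flavour of ‘deterministic yet unpredictable’ I have in mind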

2. @Yoused made superb points about the difference between AI and human thought. One that struck me particularly is the multi-sensory advantage humans (and other animals) currently enjoy over AI. Right now, human (long-term) memory is organized in much more complex ways than that of an AI because our links are built using - yes words - but also images, smells, tastes, touch. The neural pathways between our memories are richer and more elaborate

To the extent that our everyday outputs depend on these things - this might most obviously apply with artists and writers and so on, but given the unpredictability suggested above it might apply in non-obvious professions too - we have a significant creative advantage in the (sensory) inputs we experience and the links we are able to make. Some of these would ‘baffle’ an AI restricted to words and even images…right now

Since my partner’s turn to ‘the dark side’ :D I’ve given ChatGPT another trial, and it seems to have improved a lot from where it was in December - almost an acceptable assistant now. It definitely falls well short of creative insight - which I’ll loosely define as the ability to make large leaps that suddenly simplify what was previously complicated and obscure - yet I can see that its foundation in a large literature of internet-accessible human writings (and images) may make it a genuinely useful starting and debating point in longer creative processes

My reservation is that LLMs right now may suffer from a certain ‘poverty’ in their outputs (back to Yoused’s point). @Nycturne (who made some great posts addressing other points in this thread BTW) and @Andropov and I discussed ChatGPT some months ago in the Swift thread, and Andropov noted something along these lines in the style of email produced by LLMs (via some of Andropov’s colleagues IIRC)

Many decades ago humans’ ability to do long arithmetic calculations got a large shot in the arm with the invention of the personal calculator. Recently, I’ve started to think of LLMs as the verbal counterpart of the personal calculator

As to the image production side of all this? @Eric and @Citysnaps might care to comment?

I have a young, artistically talented niece who two years ago was heading into an exciting career as a graphic artist. While she’s enthused about the capabilities of AI, her career options - especially as someone young and at the bottom - don’t look good right now. However, she has the kind of smarts I hope will see her find a fulfilling niche somewhere in this interesting new world!
 
While less of an issue for your partner and more of one for your niece, one of several reasons why AI companies don’t want to reveal their training data is copyright. From programming to art to writing, much of what the models were trained on is probably copyrighted - at what point does the owner of that copyright make claims against the product of a prompt, or against the tool itself? This is of course in addition to the highly related legal question of who owns the product of a prompt. It’s going to get very interesting in the legal and regulatory space.

And as @Nycturne pointed out, avoiding AI-generated content in the next round of training is going to be a huge challenge, never mind avoiding the proliferation of "poisoned data" (adversarial data) deliberately created to fuck with AI training. Also, are people going to be hired specifically to create "data" to further train models? Lots of issues coming up.
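
As a purely hypothetical sketch of the crudest possible defence - exact-match fingerprinting of known model outputs so they don't get folded back into the next training crawl - something like this (every name is made up, and real pipelines would need far more: near-duplicate detection, provenance metadata, classifiers):

```python
# Hypothetical sketch: keep known generated text out of a future training set
# by exact-match fingerprinting. Real data curation would need much more.
import hashlib

def fingerprint(text: str) -> str:
    # Normalise case and whitespace so trivial reformatting doesn't dodge the check.
    return hashlib.sha256(" ".join(text.lower().split()).encode("utf-8")).hexdigest()

# Pretend store of outputs our own model has emitted in the past.
known_model_outputs = {
    fingerprint("How long does your hair need to be to wax? Great question!"),
}

def keep_for_training(document: str) -> bool:
    """Return False if a scraped document matches a known generated output."""
    return fingerprint(document) not in known_model_outputs

scraped = "how long does YOUR hair   need to be to wax? Great question!"
print(keep_for_training(scraped))  # False: it's just a reformatted copy
```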
 
As to the image production side of all this? @Eric and @Citysnaps might care to comment?

I have a young, artistically talented niece who two years ago was heading into an exciting career as a graphic artist. While she’s enthused about the capabilities of AI, her career options - especially as someone young and at the bottom - don’t look good right now. However, she has the kind of smarts I hope will see her find a fulfilling niche somewhere in this interesting new world!
Lots to unpack in your well thought out post but let me touch on my thoughts on the photography side of things.

It's starting to take off - even the local news has been doing a lot of "look at the cool San Francisco scenes we typed up from AI" segments, while there are a handful of really talented local photographers they could be highlighting instead, but I digress.

The lines are getting more and more blurred, as in this example that won a photography contest and was later revealed to be AI-generated:
[Image: the AI-generated image that won the photography contest]


I follow a lot of photography groups and it's still pretty easy to spot the fakes, but I fear that line will become even further blurred as it gets better and better. I'm not sure what can be done about it TBH; I think it's coming no matter what and we need to embrace it, yet still separate the real artists from those behind a computer tweaking details by text prompts.

IMO it's much like music: you spend your life learning and honing your skills on an instrument as an artist, just to be replaced by pre-programmed mechanical bots with no real human feel or attributes - but it still sounds really cool, so what are you going to do? Even vocals are now digitally perfected so you don't even have to be a real singer.

I'm older and set in my ways, but in the end I still think we have to work with these new technologies or get left behind; it's really just a matter of finding your place.
 
As to the image production side of all this? @Eric and @Citysnaps might care to comment?

That’s a lot to chew on, especially as photography covers a wide swath of kinds/genres.

There’s probably a market for visually striking/appealing/unusual/dramatic/etc photographs in the same vein as there is for paintings. “Painter of Light” Thomas Kinkade comes to mind. If Kinkade were still alive he’d be all over AI and be making a ton of money cranking out the kind of paintings that many people found appealing.

For other kinds of photography where there are $ involved as a result, say documentary/journalistic/sports/wedding/etc, I don’t think that would be a good fit as they deal in reality.

I think most photographers would eschew any photographic work that does not involve a photographer, his/her eye and imagination, a camera, and interesting subject matter.

As an aside... my photography is a little on the strange side. I like hitting up strangers in urban environments, often underserved, for some conversation and making a couple of portraits. Or making candid photographs of people in their environment (usually urban).

As in the previous paragraph, I use the word “making” instead of “taking” photographs. That’s because there’s a lot of thought (what to include and not include in the frame, the quality of light, environmental context, gesture, potential narratives a viewer might conjure, hiding details in shadows to provoke mystery, and on and on) that goes into the making of a photograph, usually within a couple of seconds, before releasing the shutter. And very little thought simply taking a photograph. I suppose with enough sample photos that’s something AI could learn. Just don’t know if the end result would be consistently appealing. Of course the same can be said regarding human-made photographs. :)
 
There’s probably a market for visually striking/appealing/unusual/dramatic/etc photographs in the same vein as there is for paintings. “Painter of Light” Thomas Kinkade comes to mind. If Kinkade were still alive he’d be all over AI and be making a ton of money cranking out the kind of paintings that many people found appealing.

Arrgh, do not give someone ideas. More Kinkade or Keane or stuff like that - we have enough as it is. Now, BobRossGPT would be kind of annoying, but it would at least be somewhat tolerable.

I think most photographers would eschew any photographic work that does not involve a photographer, his/her eye and imagination, a camera, and interesting subject matter.

I have a hard time believing an AI could come up with
[Image: Ansel Adams, "Moonrise, Hernandez, New Mexico"]

because the photographer would have to be able to recognize the emotional potential of a subject/scene: emotions are by definition not calculated, and I suspect that they could not accurately be embedded in a compute model (their source is primarily chemical, which is difficult to simulate reliably).

However, AI could be helpful in getting the post-processing done efficiently. Which is a whole nother kettle of fish - for a while already it has been hard to accept the validity of some images.
 
I have a hard time believing an AI could come up with

Though Ansel Adams in general doesn't ring my bells, I love Moonrise, Hernandez, New Mexico! And I give him a ton of props for developing his methodology.

I think AI could do well with some of his other images, though.

Bob Ross... Loved his programs, even though I can't paint with beans. Or a brush. Sorry to see him pass away, and the exploitation of his name afterwards.

Kinkade... I can't find adequate words. Maybe AI can help me with that. :)
 
So here we are. As @Colstan points out, (narrow) AI is steadily making its way into the real world
My opinion:
  • AI shows great promise and danger on multiple levels from job loss to Skynet.
  • Capitalism will be threatened by it due to job loss and the threat to intellectual property.
 

This sex toy company uses ChatGPT to whisper sweet, customizable fantasies at you

The equivalent of having your dildo talking to you. “Oh baby, you’re so hot!!” 😄
Android companions, sexually capable, are a sure feature of the future.

 