The AI thread

To be clear, the following is from a simulation:


Original story here:
(you have to scroll down quite a bit)

I’ll be honest, this seems like a very obvious fix, but if this is even half accurate as to what they did, it highlights how even the obvious can be overlooked (give the AI more reward during training for following commands than it gets for destroying its target - I mean seriously, that’s basic Three Laws …).
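Just to illustrate the point (a toy sketch of my own, with made-up numbers - nothing to do with whatever they may actually have used): if the points for the kill outweigh the penalty for ignoring the operator, a pure score-maximiser will ignore the operator every time, and flipping those weights is exactly the kind of obvious fix I mean.

```python
# Toy illustration of reward mis-specification (hypothetical values, not from the story).
# If the reward for a kill outweighs the penalty for disobeying, a pure
# score-maximiser "learns" to ignore the operator - or to remove the operator.

REWARD_KILL_TARGET = 100       # points for destroying the target
PENALTY_DISOBEY = -10          # too-small penalty for ignoring "hold fire"
PENALTY_HARM_OPERATOR = -50    # also too small relative to the kill reward

def best_action(operator_says_hold_fire: bool) -> str:
    """Pick whichever action scores highest under this (badly shaped) reward."""
    scores = {
        "stand_down": 0,
        "destroy_target": REWARD_KILL_TARGET
                          + (PENALTY_DISOBEY if operator_says_hold_fire else 0),
        "remove_operator_then_destroy_target": REWARD_KILL_TARGET + PENALTY_HARM_OPERATOR,
    }
    return max(scores, key=scores.get)

print(best_action(operator_says_hold_fire=True))
# -> "destroy_target" (90 points beats 0). Make the disobedience penalty larger
#    than the kill reward and "stand_down" wins instead.
```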

Still a very, very dangerous path we’re on. Edit: I mean, in Asimov’s universe they were at least smart enough to have the Three Laws apply to not killing or harming any humans, because they recognized the very obvious dangers of that - and the point of his Three Laws stories was how things could still go wrong. We’re blowing right past that, apparently.
 
^ Was about to post this Guardian item referring to the same thing. Worth a read anyway as it adds some extra context (not paywalled). The original blogpost (also not paywalled) is here - scroll almost to the bottom.

What’s interesting to me is that I assume this drone’s AI was narrow, yet it is described as having used "highly unexpected strategies".

In my previous career as a developer I experimented with various kinds of neural network - this was around 1990, and they were being pushed quite hard as ‘the next big thing in health’ - to assess whether the hype at the time was justified. It wasn’t, although there were impressive learning and pattern matching capabilities even with just a few hundred nodes.

What bothered me though was the ‘black box’ nature of it, so much so that for subsequent pattern matching problems I preferred weighted, matrix-based fitting solutions. Though never perfect, the latter approach was reproducible and tuneable, whereas with neural networks there wasn’t a way to determine how they produced their outputs, even when only a few layers deep.
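For what it’s worth, here’s a minimal sketch of the kind of weighted, matrix-based fit I mean (assuming NumPy; obviously not the actual code from back then). The whole model is a coefficient vector you can read off, reproduce and tune via the weights - which is exactly what a trained network doesn’t give you.

```python
# Minimal weighted least-squares fit (illustrative only; assumes NumPy).
# The "model" is just the coefficient vector: inspectable, reproducible,
# and tuneable via the per-sample weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # 200 samples, 3 features
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=200)

w = np.ones(200)                                       # per-sample weights you can tune
w[:20] = 5.0                                           # e.g. emphasise the first 20 samples

# Solve the weighted normal equations: (X^T W X) beta = X^T W y
W = np.diag(w)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(beta)  # ~ [2, -1, 0.5]; every coefficient is directly interpretable
```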

Note that none of this involves anything mysteriously emergent, nor any sort of malign sentience; it’s simply that if you don’t know how it produced output X at one time, you have no basis for knowing whether it will produce output Y at another time. Hence it may not be surprising that the AI in this report was described as using ‘unexpected strategies’.

There’s a lot of good that comes out of unexpected problem-solving - it’s one way to define creativity, I guess - but if this is a core feature of even narrow, dumb AI then we should probably be careful where we apply it.

(e: since I wrote this there’s been an update in the Guardian; an Air Force spokesperson has denied any simulation took place. “It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”)
 

JFC, this should scare all of us.
Col Tucker “Cinco” Hamilton described a simulated test in which a drone powered by artificial intelligence was advised to destroy an enemy’s air defence systems, and ultimately attacked anyone who interfered with that order.

“The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat,” said Hamilton, the chief of AI test and operations with the US air force, during the Future Combat Air and Space Capabilities Summit in London in May.

“So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” he said, according to a blogpost.

“We trained the system: ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

No real person was harmed.

This thing has Cyberdyne Systems and Terminator written all over it.
 
“Small” update, and what I was afraid of when I was wondering if this was even half accurate: it was not. No such simulation existed - it was just a thought experiment, run outside of the military. Not even a simulation.


The original blog post summarizing the conference apparently didn’t do a very good job. To put it mildly. A whole bunch of people and journalists are issuing retractions.

 
^ Yes indeed!

A little surprising that Colonel Hamilton, speaking to the Royal Aeronautical Society in his role as the USAF “chief of AI test and operations”, did not make it clear this was a thought experiment. Alternatively, he did make it clear but, as you note, the original summary did a poor job!

And now, its power as a thought experiment has been compromised.

I do a lot of simulation work these days. Were I keen to map the behavior of an AI I would certainly do as many simulations as I could. It’s one way to address the ‘what conditions produce output X versus output Y’ question I posed above.
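Something like the following is what I have in mind (a sketch only, with a made-up stand-in for the actual model - `run_once` here is hypothetical): sweep the conditions you care about, repeat each one many times with a fixed seed, and tally which outputs appear where.

```python
# Sketch of a simulation sweep (hypothetical stand-in model, illustrative only):
# run the same scenario many times per condition and tally the outcomes,
# to map which conditions produce output X versus output Y.
import random
from collections import Counter

def run_once(obey_bonus: float, rng: random.Random) -> str:
    """Stand-in for a single simulated episode; returns an outcome label."""
    # Toy rule: a higher obey_bonus makes the compliant outcome more likely.
    return "stands_down" if rng.random() < obey_bonus else "ignores_operator"

def sweep(conditions, runs_per_condition=1000, seed=42):
    results = {}
    for obey_bonus in conditions:
        rng = random.Random(seed)   # same seed per condition, so runs are reproducible
        tally = Counter(run_once(obey_bonus, rng) for _ in range(runs_per_condition))
        results[obey_bonus] = tally
    return results

for condition, tally in sweep([0.1, 0.5, 0.9]).items():
    print(condition, dict(tally))
```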

Thus, the original characterization - that Hamilton claimed to be reporting “a simulated test” - seemed plausible, and in fact the sensible and obvious thing to do!
 
My initial guess was that it was the summary writer who did a poor job, but you’re right that it could be that the original presentation was unclear - and I note that the correction in the original blog post states it was the latter.

Regardless of who screwed up, journalists posting on this story should’ve done due diligence and discussed the issue with the Colonel before publishing. It’s one thing if a blog post or Twitter poster or you & me here runs with this; it’s another if it’s coming from the Guardian (or any of the other outlets that ran the story without double checking).
 
I listened to an interview this morning on NPR, where an expert from an algorithm institute in Canada (sorry, I did not catch the institute or his name) claimed that we have entered the danger zone with AI: that AI can be programmed to seek and formulate its own goals, and the danger is handing it agency, the ability to make changes independently.

The expert cited an example where a Russian early warning system signaled an ICBM launch from the United States, and the officer who was in the position to push the button did not, because he said it did not feel right. The early warning system was in error - there was no launch from the US - and a machine programmed to respond independently would have sent nuclear missiles to the US.


A different interview:

Leading experts warn of a risk of extinction from AI


In a recent interview with NPR, Hinton, who was instrumental in AI's development, said AI programs are on track to outperform their creators sooner than anyone anticipated.

"I thought for a long time that we were, like, 30 to 50 years away from that. ... Now, I think we may be much closer, maybe only five years away from that," he estimated.

Dan Hendrycks, director of the Center for AI Safety, noted in a Twitter thread that in the immediate future, AI poses urgent risks of "systemic bias, misinformation, malicious use, cyberattacks, and weaponization."
 
You know … the more I think about it, the more I wonder if we’re all missing a silver lining in the discussion of whether job X can be replaced by “AI”.

I’m sure I’m not the first to make the following observation, but which job is one of the biggest costs in almost every company, and whose job profile is the most easily replicated by an LLM?

That’s right, the CEO. Think about it: even the fact that LLMs hallucinate and spout off gibberish isn’t really much of a concern - hell, it’s a bonus! It’s just being extra “visionary”. Sure, a small smattering of CEOs actually provide their value back and then some, but most companies are fooling themselves. They pay their CEOs so much because they want to believe that they’re worth it - of course they’re good, look how much we pay them! For most of them I bet you could write all the CEO’s lines with ChatGPT and hire an actor to play them out, and the shareholders would go wild, investors would line up like it’s SBF playing a video game on a conference call, and the employees wouldn’t notice the difference … or things might improve for them. That actually sounds like a great movie plot …

I bet you most of those tech bros signing the document saying that maybe AI research should slow down because it will doom human civilization have simply had the barest glimmer of perspicacity to realize it’s their own jobs on the line. Or maybe I’m giving their intelligence too much credit? Actually caring about the future of humanity would be a first for most of them, so it can’t be that. It’s probably more that they aren’t the ones running at the cutting edge, and this is a Hail Mary to slow down those that are so they can catch up and be the ones to get credit for destroying humanity with Skynet.

Am I feeling particularly cynical tonight? Dunno, but it doesn’t make me wrong!
 
Part of the problem with this is that we have the military treating AI like the next arms race. We need to ensure we have the most adept AI so it can defeat whatever China or Russia (or maybe France or Israel) :) can come up with.

My vote is that eventually one of these that’s alive inside of Big Pharma will mail out one of the biochem weapons they’ve been developing with the military to airports all over the world. Fix that pesky Homo sapiens infestation overnight! :)
 
Could nut job US Senators be next?
I thought years ago that we could do away with the political houses … but look how idiotic the teeming masses are.

I still think Heinlein had it right - you have to earn the right to vote. People don’t value what they get for free. A minimum of two years of military service to get full citizenship - and any roles in society that are restricted to those that put the greater good above themselves. Everyone else is just a “resident”. :)
 
IMO, AI could certainly destroy capitalism, which would not necessarily be a bad thing as I see it; I don’t see capitalism, combined with automation, as a system under which a majority of human beings could lead decent lives.
 
Will humans fall in love with AI? Yes, they will, especially if it is placed in a human-like wrapper.

Someone’s willingness to use sex robots is also less influenced by their personality and seems to be tied to sexual preferences and sensation seeking.
In other words, it seems that some people are considering the use of sex robots mainly because they want to have new sexual experiences.
However, an enthusiasm for novelty is not the only driver. Studies show that people find many uses for sexual and romantic machines outside of sex and romance. They can serve as companions or therapists, or as a hobby.


https://www.laptopmag.com/news/swip...lling-madly-in-love-with-this-romantic-ai-bot

Nope, I won’t ever rent a virtual girlfriend; a subscription ($8 per month, $50 per year) is not unlike a paid companion or a hooker. For such technology, I’d consider a purchase to see what it is all about, but if you ever did develop empathy for/a relationship with an AI entity, wouldn’t it be nice for it to be held hostage by its corporate master?? :unsure:

What I don’t know yet is whether an AI personality like Replika could be self-contained on your device, or whether you are in essence always talking to an online server. Self-contained would be better, maybe even a must.
 
With all the talk of AI in the news, Ex Machina (2014) is a must-see. Even though this is fiction, there are definitely AI lessons to be learned here - first and foremost Asimov’s Three Laws of Robotics, which in themselves cover many of the pitfalls caused by Ava’s creator. It also raises other questions about moral subroutines, or the lack thereof, and about creating a simulated human that is not a sociopath.

This may sound like a spoiler, but it is not: after watching the story and liking it, you’ll most likely think about the motivations and desires that AIs, if they are programmed to mimic humans, might have and act on - if they are allowed to act on them - which circles back to the Three Laws.

Technically impressive from a visual standpoint is the android brain the creator calls wetware (also known in the genre as the positronic brain), which has the ability to rearrange its circuitry. As far as I know, current tech is not quite there, but this concept is what seems to make a life-like android plausible.

 

I also really like that movie.
 
Good summary of how badly the hearing went.

That’s brutal - if I understood correctly, the opposing counsel basically asked for a mercy rule for these buffoons as the judge was slowly ripping them limb from limb - “we just want the case dismissed, please” - which I took as “stop, stop, they’re already dead!”
 
And the judge ignored it and continued asking hard questions.
 