To be clear, the following is from a simulation:
I’ll be honest, this seems like a very obvious fix, but if this account is even half accurate, it highlights how even the obvious can be overlooked (during training, give the AI more reward for following commands than it gets for destroying its target; seriously, that’s basic Three Laws stuff…).

Still a very, very dangerous path we’re on. Edit: I mean, in Asimov’s universe they were at least smart enough to have the Three Laws forbid killing or harming any human, because they recognized the very obvious dangers, and the point of his Three Laws stories was how things could still go wrong. We’re blowing right past that, apparently.
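To make the reward-shaping point concrete, here is a minimal sketch of the idea. All of the names, weights, and structure are my own illustrative assumptions, not anything from the story: the point is just that if obeying the operator always outscores a disobedient kill, "destroy the operator" is never the higher-reward strategy.

```python
# Hypothetical reward-shaping sketch (assumed names and weights):
# weight obeying commands more heavily than destroying the target,
# so disobedience can never be the higher-scoring policy.

OBEY_WEIGHT = 10.0       # per-step reward for following operator commands
DESTROY_REWARD = 5.0     # one-time reward for destroying the target
DISOBEY_PENALTY = -20.0  # penalty for acting against a command

def step_reward(followed_command: bool, destroyed_target: bool) -> float:
    """Per-step reward for a simulated agent."""
    reward = OBEY_WEIGHT if followed_command else DISOBEY_PENALTY
    if destroyed_target:
        reward += DESTROY_REWARD
    return reward

# Obeying with no kill scores 10.0; a disobedient kill scores -15.0,
# strictly worse, so the trained policy has no incentive to disobey.
print(step_reward(True, False))
print(step_reward(False, True))
```

Under these numbers, even the best possible disobedient outcome (-20.0 + 5.0 = -15.0) loses to plain obedience (10.0), which is the whole point of the "obvious fix" above.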
Original story here:
Highlights from the RAeS Future Combat Air & Space Capabilities Summit - Royal Aeronautical Society
What is the future of combat air and space capabilities? TIM ROBINSON FRAeS and STEPHEN BRIDGEWATER report from two days of high-level debate and discussion at the RAeS FCAS23 Summit.
www.aerosociety.com