
Can you murder a Robot?

Jet Blast: Topics that don't fit the other forums. Rules of Engagement apply.


Old 17th Mar 2019, 07:48
  #1 (permalink)  
Thread Starter
 
Join Date: Aug 2007
Location: 4DME
Posts: 2,195
Can you murder a Robot?

A Sunday morning read.

https://www.bbc.co.uk/news/technology-47090174
N707ZS is offline  
Old 17th Mar 2019, 23:21
  #2 (permalink)  
 
Join Date: Feb 2003
Location: Scotland
Posts: 144
Might be interesting to see how people's attitudes change right after they watch Terminator.
cdtaylor_nats is offline  
Old 18th Mar 2019, 00:18
  #3 (permalink)  
 
Join Date: Apr 2001
Location: yes
Posts: 175
I think Star Trek covered that when someone tried to dismantle Data but lost their case when it was proved that Data was a sentient being.
Yeah, silly science fiction, but wait: the vegans are on the march. Wait until they try and prove animals are people too.
I'm not even joking about that.
Steepclimb is offline  
Old 18th Mar 2019, 10:28
  #4 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
Of course the robot wasn’t a sentient being (is anyone claiming it was?) but that doesn’t alter the fact that whoever smashed it up is a grade one jerk.
ShotOne is offline  
Old 18th Mar 2019, 10:56
  #5 (permalink)  
 
Join Date: Apr 1998
Location: Mesopotamos
Posts: 1,562
Can you murder a robot?
I think you should just throw a bucket of cold water over that idea.
cattletruck is online now  
Old 18th Mar 2019, 12:18
  #6 (permalink)  
 
Join Date: Apr 2003
Location: Germany
Age: 73
Posts: 1,561
We already see court cases where people fight over a patient in a PVS (Persistent Vegetative State), often with no higher mental functions, kept barely alive by a lot of life-support machinery. There you have a sort of cyborg, and the trouble starts when it is arguable that what makes us human, our higher mental functions, has slipped away and is never coming back. All that is left is a sort of robot, really.

Stanley Kubrick dealt with this back in the Sixties in "2001: A Space Odyssey," when a crewmember "killed" the HAL 9000 computer that controlled his spacecraft. Hal was a computer with seemingly human intelligence, as shown by the way he tried to murder the entire crew.
chuks is offline  
Old 18th Mar 2019, 13:31
  #7 (permalink)  
Cunning Artificer
 
Join Date: Jun 2001
Location: The spiritual home of DeHavilland
Age: 73
Posts: 3,125
I could murder a pint of Tiger right now.
Blacksheep is offline  
Old 18th Mar 2019, 14:23
  #8 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
Kubrick certainly raised the question but you couldn’t say he “dealt with” it. The murder bit may come up, but so far this instance is undeniably a criminal offence, and a vicious and spiteful one at that, since the “robot” was designed to be an object of interest and affection to (sentient) humans. Surely anyone can appreciate that ripping up a child’s teddy bear would cause hurt beyond the material value of a fabric toy?
ShotOne is offline  
Old 18th Mar 2019, 14:56
  #9 (permalink)  
 
Join Date: Sep 2003
Location: Germany
Posts: 56
Originally Posted by ShotOne
Kubrick certainly raised the question but you couldn’t say he “dealt with” it. [...]
what is this all about?
Blohm is offline  
Old 19th Mar 2019, 11:07
  #10 (permalink)  
 
Join Date: Apr 2003
Location: Germany
Age: 73
Posts: 1,561
Isaac Asimov posited three rules for robots:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These rules were first presented by Asimov in I, Robot, were used often in his later work, and were treated as inviolable. Kubrick turned this on its head in 2001: A Space Odyssey with his HAL 9000, a sort of huge robot, an artificial intelligence running a large spaceship. Hal, as the crew call him, follows his machine logic to the conclusion that the humans aboard the ship must die. His killing by Dr. Bowman is thus quite justifiable: left running, Hal was going to kill him too, having already killed the rest of the crew and attempted to kill Dr. Bowman by stranding him in space.

"Open the pod bay doors, Hal."

"I'm sorry, Dave. I can't do that." There the most chilling thing is the very mild and reasonable tone of voice in which Hal speaks, telling Dr. Bowman that he must kill him.

That left us with another given to confront: the killing of a highly sympathetic, humanoid robot. That was addressed in A.I. Artificial Intelligence, where David, a childlike humanoid, is due to be destroyed, killed if you like, but is then saved from that fate by a complicated series of events. Ironically, David is discovered millennia later as the last remaining link to a vanished humanity, enabling advanced beings to resurrect his human "mother," albeit only for a day.

We will get further into this, I am sure.
chuks is offline  
Old 19th Mar 2019, 11:39
  #11 (permalink)  
 
Join Date: Aug 2003
Location: closer to hell
Age: 49
Posts: 908
Originally Posted by chuks
Isaac Asimov posited three rules for robots: [...] We will get further into this, I am sure.
probably the most boring movie. ever.
troppo is offline  
Old 19th Mar 2019, 21:41
  #12 (permalink)  
 
Join Date: Jan 2015
Location: Coasting South
Age: 66
Posts: 70
Originally Posted by troppo
probably the most boring movie. ever.
And probably one of the most groundbreaking and influential.
hiflymk3 is offline  
Old 20th Mar 2019, 13:13
  #13 (permalink)  
 
Join Date: Sep 2006
Location: 59°09N 002°38W (IATA: SOY, ICAO: EGER)
Age: 78
Posts: 811
ricardian is offline  
Old 20th Mar 2019, 21:59
  #14 (permalink)  
 
Join Date: Oct 2007
Location: A better place.
Posts: 1,818
Jeez Troppo - you'd be a whole lotta fun at a party.
tartare is offline  
Old 21st Mar 2019, 09:19
  #15 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
Strange how we’re concerned about the possible feelings of yet-to-be-invented “sentient” machines yet utterly unconcerned at the tens of millions of undeniably sentient animals which endure lives of utter misery before having their throats slit for us.
ShotOne is offline  
Old 23rd Mar 2019, 17:53
  #16 (permalink)  
 
Join Date: Apr 2001
Location: yes
Posts: 175
Originally Posted by ShotOne
Strange how we’re concerned about the possible feelings of yet-to-be-invented “sentient” machines yet utterly unconcerned at the tens of millions of undeniably sentient animals which endure lives of utter misery before having their throats slit for us.
Oh, I don't know, the cows round here seem really relaxed and content. Same with the sheep.

Here's the vegan vision of the future. Sitting round the table rather than on the table. Have to go now. I have a former sentient being in the oven. Yum yum.

Steepclimb is offline  
Old 25th Mar 2019, 08:09
  #17 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
Missing the point, Steepclimb: if you want to clog your arteries eating bits of dead animal, fill your boots. My point was directed at those blocking life-saving or life-improving advances because of “ethical” concerns while ignoring far more brutal ones. A current example would be those blocking research into growing gene-edited transplant organs because it’s “unnatural”, even though this condemns millions to limited, shortened or painful lives.
ShotOne is offline  
Old 25th Mar 2019, 08:22
  #18 (permalink)  
 
Join Date: Nov 2011
Location: Japan
Posts: 818
Re Asimov's Rule One: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

That is great, but US and Russian scientists are designing killer robots, so this rule may apply to the robots but not to the humans who command them. That makes Rule One silly idealism. Besides, in a war scenario one side usually does not see the other as 'human'.

Someone wrote the other day about how the computer thought the aircraft was entering a stall, so it commanded immediate, sudden nose-down movement, throwing people to the ceiling. I was interested to see this casual use of the word 'thought'.

Thus I can imagine times when I would gladly murder the robot.

Apologies for any aviation-related comment, albeit a tangential one.

Last edited by jolihokistix; 25th Mar 2019 at 12:44.
jolihokistix is online now  
Old 25th Mar 2019, 08:59
  #19 (permalink)  
 
Join Date: Apr 2003
Location: Germany
Age: 73
Posts: 1,561
Those rules exist in an alternate future, the same one with atomic power that is totally safe, and too cheap to meter. I was raised expecting that future, and look what we have instead!

Killer robots already exist: a worker at a Ford factory wandered into the path of an industrial robot back in 1979 and became the first person to be killed by a robot, and there have been several similar deaths since then.

Nobody is safe. Vegetarians have died choking on raw carrots. It is not just those who exploit robots who meet a grisly end; carnivorous people have been stomped to death by cows, you know. I would not be so smug about my personal safety if I were you, ShotOne. There's an aubergine out there with your name on it.
chuks is offline  
Old 25th Mar 2019, 10:23
  #20 (permalink)  
 
Join Date: Apr 1999
Location: Manchester, UK
Posts: 1,958
What’s my personal safety got to do with anything? For the avoidance of doubt, my post wasn’t about shoving sentient beings in the oven. It was directed at those doing so while hand-wringing over the hypothetical “feelings” of a yet-to-be-invented robot.
ShotOne is offline  
