RTFM...
Posted: Wed Jul 13, 2011 1:53 pm
Oh dear, it looks like computers have taken the advice!
http://www.gizmag.com/machine-learning-systems/19205/
CommonSenseOTB wrote:
    This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences.

Well, I've seen the Terminator films, I know how it ends...
DaddyHoggy wrote:
    Well, I've seen the Terminator films, I know how it ends...

The thing is, in those films the computers can think and "reason" that humans should be exterminated. This discovery actually lets computers "react", skipping the middleman, the moral compass as it were. The nearest parallel in animals would be instinct. Imagine that a certain unpredicted situation happens that causes computers to "react" in a hostile way, automatically and out of our control. Like a dog that for some reason suddenly turns on its master. Imagine a billion such "dogs" all having the same "reaction" at once....
DaddyHoggy wrote:
    Well, I've seen the Terminator films, I know how it ends...

Have you seen Forbidden Planet? Do you remember the robot in that?
Matti wrote:
    Have you seen Forbidden Planet? Do you remember the robot in that?

I've seen Forbidden Planet many times, even the stage play "follow-up" (Return to the Forbidden Planet).
DaddyHoggy wrote:
    (or actually the film "I, Robot" - based on the book in name only - in which the computer decides humans can't look after themselves)

Arg! That film was terrible! 'Twas like they tried to combine all of Asimov's robot stories into one, but did it in a really bad way...
CommonSenseOTB wrote:
    the moral compass as it were

Which is “the” moral compass? Everyone has their own moral code, and at the end of the day none are objective; at best, some can be expressed in more convincing terms than others.
Eliezer Yudkowsky wrote:
    The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Ahruman wrote:
    There is no reason to assume intelligent computers, even rigidly moral ones, would value human life higher than we value, say, the lives of cabbages. If a powerful AI is not explicitly (and correctly) designed to be benevolent to humans – known as Friendly AI, or less euphemistically, slaves – the only rational expectation is that it will eventually harm us. Not because it hates us, but because it doesn’t care; the opposite of “Friendly” is “indifferent”.

That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness, and of caring about other living beings. So perhaps – with the sun shining outside – something more intelligent than us would also be kinder than us. It might be too smart to be indifferent to another entity's suffering.
Disembodied wrote:
    That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness

…and yet we slaughter billions of animals and plants every day. No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.
    A robot may not injure a human being or, through inaction, allow a human being to come to harm.

…and ask "Why?"
Ahruman wrote:
    No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.

True. Our own moral codes are a product of history, and of our primate and mammalian inheritance. At a book event I once asked Ken MacLeod and Iain Banks about their respective (and opposite) takes on AI in their fictions: Ken MacLeod said his negative view of AIs came from his knowledge of the sorts of people who program computers.