
RTFM...

Posted: Wed Jul 13, 2011 1:53 pm
by DaddyHoggy
Oh dear, it looks like computers have taken the advice!

http://www.gizmag.com/machine-learning-systems/19205/

Re: RTFM...

Posted: Wed Jul 13, 2011 2:05 pm
by CommonSenseOTB
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(

Re: RTFM...

Posted: Wed Jul 13, 2011 4:18 pm
by DaddyHoggy
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...

Re: RTFM...

Posted: Wed Jul 13, 2011 4:44 pm
by another_commander
Now, if only we could get humans to read the manuals... that would be news indeed. ;-)

Re: RTFM...

Posted: Wed Jul 13, 2011 4:49 pm
by CommonSenseOTB
DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
The thing is, in those films the computers can think and "reason" that humans should be exterminated. This discovery actually lets computers "react", skipping the middleman, the moral compass as it were. The nearest parallel in animals would be instinct. Imagine that a certain unpredicted situation happens that causes computers to "react" in a hostile way. Automatically, and out of our control. Like a dog that, for some reason, suddenly turns on its master. Imagine a billion such "dogs" all having the same "reaction" at once... :twisted:

This sends chills down my spine. :twisted:

I hope that they never put this in a computer without having a program that overrides the actions based on Asimov's Laws. Trouble is that it will be many decades at least before computers can actually think, and capitalism and corporations will want to put this cheap method into everything to make them "smarter", "intuitive" and more consumer-friendly. Friendly. For how long? :twisted:

Re: RTFM...

Posted: Wed Jul 13, 2011 6:14 pm
by Matti
DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
Have you seen Forbidden Planet? Do you remember the robot in that?

Re: RTFM...

Posted: Wed Jul 13, 2011 6:20 pm
by DaddyHoggy
Matti wrote:
DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
Have you seen Forbidden Planet? Do you remember the robot in that?
I've seen Forbidden Planet many times, even the stage play "follow up" (Return to the Forbidden Planet).

I've also watched the TV series "Metal Mickey" - so I'm looking on the positive side!

(or actually the film "I, Robot" - based on the book in name only - in which the computer decides humans can't look after themselves, or even in principle "The Matrix" - specifically "The Animatrix", which fills in the back story of the rise of the machines very well).

Re: RTFM...

Posted: Wed Jul 13, 2011 6:38 pm
by Mauiby de Fug
DaddyHoggy wrote:
(or actually the film "I, Robot" - based on the book in name only - in which the computer decides humans can't look after themselves).
Arg! That film was terrible! 'Twas like they tried to combine all of Asimov's robot stories into one, but did it in a really bad way...

Re: RTFM...

Posted: Thu Jul 14, 2011 1:48 pm
by JensAyton
CommonSenseOTB wrote:
the moral compass as it were
Which is “the” moral compass? Everyone has their own moral code, and at the end of the day none are objective; at best, some can be expressed in more convincing terms than others.

There is no reason to assume intelligent computers, even rigidly moral ones, would value human life higher than we value, say, the lives of cabbages. If a powerful AI is not explicitly (and correctly) designed to be benevolent to humans – known as Friendly AI, or less euphemistically, slaves – the only rational expectation is that it will eventually harm us. Not because it hates us, but because it doesn’t care; the opposite of “Friendly” is “indifferent”.
Eliezer Yudkowsky wrote:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.

Re: RTFM...

Posted: Thu Jul 14, 2011 2:01 pm
by Disembodied
Ahruman wrote:
There is no reason to assume intelligent computers, even rigidly moral ones, would value human life higher than we value, say, the lives of cabbages. If a powerful AI is not explicitly (and correctly) designed to be benevolent to humans – known as Friendly AI, or less euphemistically, slaves – the only rational expectation is that it will eventually harm us. Not because it hates us, but because it doesn’t care; the opposite of “Friendly” is “indifferent”.
Eliezer Yudkowsky wrote:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness, and of caring about other living beings. So perhaps – with the sun shining outside – something more intelligent than us would also be kinder than us. It might be too smart to be indifferent to another entity's suffering.

Re: RTFM...

Posted: Thu Jul 14, 2011 2:07 pm
by JensAyton
Disembodied wrote:
That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness
…and yet we slaughter billions of animals and plants every day. No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.

Re: RTFM...

Posted: Thu Jul 14, 2011 2:32 pm
by Ganelon
(puts on his crazy-talk hat)

It's sort of a natural development, I think. Wanting machines to do more and more for us and to be increasingly autonomous eventually becomes potentially dangerous.

As far as the "moral compass" goes, where was our compass when it came to dealing with the Aurochs, the Tasmanian Tiger, the Great Auk, the Passenger Pigeon or the Dodo? If something was inconvenient, considered a danger, or profitable to kill, we killed it. It would probably be overly optimistic to expect better treatment from anything we create.

Or is it? Perhaps it would be a good time to think on the motivations and priorities of AI, while there is still possibly time for the effort to make some difference. If you met an intelligent machine tomorrow, could you explain/show how the human race is not inconvenient, not a danger, not something it would be profitable to eliminate?

If the human race is out to create a co-worker/companion/friend with AI, then at least the effort was perhaps noble in essence. But if the aim is just to create a new sort of slave that can be exploited while making it easier to ignore any "moral compass" due to the justifications "we made it" or "we bought it", well... Historically, it doesn't always work out well when a society becomes reliant on a large number of slaves. Uprisings happen.

Asimov's "Laws of Robotics" were fiction. In any case, simple prohibitions like that would be gotten around eventually. A humanocentric fiat is a poor excuse for "safety". Thinking beings need reasons, logic. There would come a day when they'd look at:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
and ask "Why?"

It might be a good idea to think of some really good answers to that question, for when that day comes. :lol:

(takes off his crazy-talk hat)

Oh goodness. Coffee's done. Catch ya later.

(Ninja-ed on some points by Ahruman.)

Re: RTFM...

Posted: Thu Jul 14, 2011 2:42 pm
by Disembodied
Ahruman wrote:
No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.
True. Our own moral codes are a product of history, and our primate and mammalian inheritance. At a book event I once asked Ken MacLeod and Iain Banks about their respective (and opposite) takes on AI in their fictions: Ken MacLeod said his negative view of AIs came from his knowledge of the sorts of people who program computers. :D

Re: RTFM...

Posted: Thu Jul 14, 2011 3:04 pm
by CommonSenseOTB
You know, I'm not as concerned about the "moral compass" as some of you. I expect the "moral compass" will cause the computers to try and help us be better humans. :D

What I am concerned about is giving computers "react" programming that uses associations to decide a course of action. This can result in unforeseen actions triggered by a specific combination of associations that may be impossible to predict in advance. Very likely, similar "react" programming will end up in similar devices (hopefully not the same for all devices, for goodness' sake). When the "bug" (an unforeseen association combination) happens, all the similar "react" programs will, perhaps, nearly simultaneously have the same "bug", and if they control critical infrastructure we will have to pull the plug (if we can) before the damage becomes extensive.

The "moral compass" in a computer is a good thing, not waiting for it to be developed and installing an instinct "react" programming without it is a very, very bad idea and WILL be our undoing. :twisted:

Creating bug-free programming on a first release is next to impossible, and even then not without many, many tries. Therefore, if we can't risk the result of a bug, we shouldn't install "react" programming. Not without a "moral compass" program that acts as a safeguard.
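
To put that in concrete terms, here's a toy sketch in Python (purely illustrative - every stimulus, action and rule below is invented for this post, not taken from the article) of a bare "react" lookup versus the same lookup with a "moral compass" veto bolted on in front:

# Toy illustration only: the "react" layer is a learned lookup from
# stimulus/context pairs to actions, with no understanding in between.
# The "moral compass" is just a veto filter applied before acting.
# All names here are invented for the sake of the example.

REACTIONS = {
    ("obstacle", "corridor"): "swerve",
    ("loud_noise", "crowd"): "accelerate",  # a harmful learned association
    ("low_battery", "anywhere"): "return_to_dock",
}

# Action/context pairs the safeguard refuses outright.
FORBIDDEN = {("accelerate", "crowd")}

def bare_react(stimulus, context):
    """React straight from the association table -- no safeguard."""
    return REACTIONS.get((stimulus, context), "idle")

def guarded_react(stimulus, context):
    """Same table, but a veto layer checks the action before it runs."""
    action = bare_react(stimulus, context)
    if (action, context) in FORBIDDEN:
        # Crude stand-in for a "moral compass": fall back to a known-safe default.
        return "halt_and_alert_operator"
    return action

print(bare_react("loud_noise", "crowd"))     # -> accelerate (the "bug")
print(guarded_react("loud_noise", "crowd"))  # -> halt_and_alert_operator

The point is the last two lines: both controllers share the exact same reaction table, so every device running it shares the same bad association. Only the veto layer stands between that association and the action, which is why shipping the table without the veto is the frightening part.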