RTFM...

Off topic discussion zone.

Moderators: winston, another_commander, Cody

DaddyHoggy
Intergalactic Spam Assassin
Posts: 8515
Joined: Tue Dec 05, 2006 9:43 pm
Location: Newbury, UK

RTFM...

Post by DaddyHoggy »

Oh dear, it looks like computers have taken the advice!

http://www.gizmag.com/machine-learning-systems/19205/
Selezen wrote:
Apparently I was having a DaddyHoggy moment.
Oolite Life is now revealed here
CommonSenseOTB
---- E L I T E ----
Posts: 1397
Joined: Wed May 04, 2011 10:42 am
Location: Saskatchewan, Canada

Re: RTFM...

Post by CommonSenseOTB »

This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
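The "react" versus "reason" distinction can be sketched in a few lines of Python. This is purely illustrative - the stimulus table and function names below are my own invention, not anything from the linked article:

```python
# A purely reactive agent: a learned stimulus -> response table,
# with no model of consequences at all.
reactive_policy = {
    "obstacle ahead": "swerve",
    "low battery": "seek charger",
}

def react(stimulus):
    # "React without understanding": unknown stimuli get a default.
    return reactive_policy.get(stimulus, "do nothing")

def reason(stimulus, predict, score):
    # A (very toy) "reasoning" agent: evaluate each candidate action
    # by its predicted outcome before acting, rather than firing a
    # stored association blindly.
    candidates = ["swerve", "seek charger", "do nothing"]
    return max(candidates, key=lambda a: score(predict(stimulus, a)))
```

The worry in the post is exactly that the first style ships long before anyone has a workable `predict` and `score`.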
Take an idea from one person and twist or modify it in a different way as a return suggestion so another person can see a part of it that can apply to the oxp they are working on.


CommonSense 'Outside-the-Box' Design Studios Ltd.
WIKI+OXPs
DaddyHoggy
Intergalactic Spam Assassin
Posts: 8515
Joined: Tue Dec 05, 2006 9:43 pm
Location: Newbury, UK

Re: RTFM...

Post by DaddyHoggy »

CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
another_commander
Quite Grand Sub-Admiral
Posts: 6671
Joined: Wed Feb 28, 2007 7:54 am

Re: RTFM...

Post by another_commander »

Now, if only we could get humans to read the manuals... that would be news indeed. ;-)
CommonSenseOTB
---- E L I T E ----
Posts: 1397
Joined: Wed May 04, 2011 10:42 am
Location: Saskatchewan, Canada

Re: RTFM...

Post by CommonSenseOTB »

DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
The thing is, in those films the computers can think and "reason" that humans should be exterminated. This discovery actually lets computers "react", skipping the middleman, the moral compass as it were. The nearest parallel in animals would be instinct. Imagine that some unpredicted situation causes computers to "react" in a hostile way, automatically and out of our control. Like a dog that suddenly turns on its master for no apparent reason. Imagine a billion such "dogs" all having the same "reaction" at once.... :twisted:

This sends chills down my spine. :twisted:

I hope they never put this in a computer without a program that overrides its actions based on Asimov's Laws. Trouble is, it will be many decades at least before computers can actually think, and capitalism and the corporations will want to put this cheap method into everything to make products "smarter", "intuitive" and more consumer-friendly. Friendly. For how long? :twisted:
Matti
Dangerous
Posts: 103
Joined: Tue Jun 14, 2011 3:28 pm

Re: RTFM...

Post by Matti »

DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
Have you seen Forbidden Planet? Do you remember the robot in that?
DaddyHoggy
Intergalactic Spam Assassin
Posts: 8515
Joined: Tue Dec 05, 2006 9:43 pm
Location: Newbury, UK

Re: RTFM...

Post by DaddyHoggy »

Matti wrote:
DaddyHoggy wrote:
CommonSenseOTB wrote:
This allows computers to react to the environment without needing to understand. Isn't that what animals and lower forms of life do? This has real potential to be destructive to the human race if it gets out of control. So much for Asimov's law..... :roll:

I think I would prefer if we brought computers up to a level of "reason" before giving them the ability to "react". Else they might just "react" to us with tragic consequences. :(
Well, I've seen the Terminator films, I know how it ends...
Have you seen Forbidden Planet? Do you remember the robot in that?
I've seen Forbidden Planet many times, even the stage play "follow up" (Return to the Forbidden Planet).

I've also watched the TV series "Metal Mickey" - so I'm looking on the positive side!

(Or alternatively the film "I, Robot" - based on the book in name only - in which the computer decides humans can't look after themselves, or even, in essence, "The Matrix" - specifically "The Animatrix", which fills in the back story of the rise of the machines very well.)
Mauiby de Fug
---- E L I T E ----
Posts: 847
Joined: Tue Sep 07, 2010 2:23 pm

Re: RTFM...

Post by Mauiby de Fug »

DaddyHoggy wrote:
(Or alternatively the film "I, Robot" - based on the book in name only - in which the computer decides humans can't look after themselves.)
Arg! That film was terrible! 'Twas like they tried to combine all of Asimov's robot stories into one, but did it in a really bad way...
JensAyton
Grand Admiral Emeritus
Posts: 6657
Joined: Sat Apr 02, 2005 2:43 pm
Location: Sweden

Re: RTFM...

Post by JensAyton »

CommonSenseOTB wrote:
the moral compass as it were
Which is “the” moral compass? Everyone has their own moral code, and at the end of the day none are objective; at best, some can be expressed in more convincing terms than others.

There is no reason to assume intelligent computers, even rigidly moral ones, would value human life higher than we value, say, the lives of cabbages. If a powerful AI is not explicitly (and correctly) designed to be benevolent to humans – known as Friendly AI, or less euphemistically, slaves – the only rational expectation is that it will eventually harm us. Not because it hates us, but because it doesn’t care; the opposite of “Friendly” is “indifferent”.
Eliezer Yudkowsky wrote:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Disembodied
Jedi Spam Assassin
Posts: 6885
Joined: Thu Jul 12, 2007 10:54 pm
Location: Carter's Snort

Re: RTFM...

Post by Disembodied »

Ahruman wrote:
There is no reason to assume intelligent computers, even rigidly moral ones, would value human life higher than we value, say, the lives of cabbages. If a powerful AI is not explicitly (and correctly) designed to be benevolent to humans – known as Friendly AI, or less euphemistically, slaves – the only rational expectation is that it will eventually harm us. Not because it hates us, but because it doesn’t care; the opposite of “Friendly” is “indifferent”.
Eliezer Yudkowsky wrote:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness, and of caring about other living beings. So perhaps – with the sun shining outside – something more intelligent than us would also be kinder than us. It might be too smart to be indifferent to another entity's suffering.
JensAyton
Grand Admiral Emeritus
Posts: 6657
Joined: Sat Apr 02, 2005 2:43 pm
Location: Sweden

Re: RTFM...

Post by JensAyton »

Disembodied wrote:
That depends on the idea of intelligence, though. Intelligent people tend not to wander around killing stuff through indifference. In fact, humanity – as the most intelligent species on earth – is also the kindest. We're maybe the only ones capable of kindness
…and yet we slaughter billions of animals and plants every day. No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.
Ganelon
---- E L I T E ----
Posts: 534
Joined: Fri Jul 02, 2010 11:45 am
Location: Around Rabiarce or Lasoce

Re: RTFM...

Post by Ganelon »

(puts on his crazy-talk hat)

It's sort of a natural development, I think. Wanting machines to do more and more for us and to be increasingly autonomous eventually becomes potentially dangerous.

So far as the "moral compass" goes, where was our compass when it came to dealing with the Aurochs, the Tasmanian Tiger, the Great Auk, the Passenger Pigeon or the Dodo? If it was inconvenient, considered a danger, or profitable to kill, we killed it. It would probably be overly optimistic to expect better treatment from anything we create.

Or would it be? Perhaps this would be a good time to think on the motivations and priorities of AI, while there is still possibly time for the effort to make some difference. If you met an intelligent machine tomorrow, could you explain or show how the human race is not inconvenient, not a danger, not something it would be profitable to eliminate?

If the human race is out to create a co-worker/companion/friend with AI, then at least the effort was perhaps noble in essence. But if the aim is just to create a new sort of slave that can be exploited while making it easier to ignore any "moral compass" due to the justifications "we made it" or "we bought it", well.. Historically, it doesn't always work out well when a society becomes reliant on a large number of slaves. Uprisings happen.

Asimov's "Laws of Robotics" were fiction. In any case, simple prohibitions like that would be gotten around eventually. A humanocentric fiat is a poor excuse for "safety". Thinking beings need reasons, logic. There would come a day when they'd look at:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
and ask "Why?"

It might be a good idea to think of some really good answers to that question, for when that day comes. :lol:
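A literal reading of the First Law makes the point clear: implemented naively it is just a veto, with no rationale attached anywhere for a reasoning system to inspect. A hypothetical sketch (the predicate names are mine, not from Asimov):

```python
def first_law_veto(action, harms_human, inaction_harms_human):
    """Asimov's First Law as a bare fiat: block actions flagged as
    harmful, and block inaction when inaction would allow harm.
    Note that the answer to 'Why?' appears nowhere in the code."""
    if harms_human(action):
        return False  # may not injure a human being
    if action == "do nothing" and inaction_harms_human():
        return False  # may not, through inaction, allow harm
    return True
```

Everything load-bearing hides inside `harms_human`, which is exactly the part nobody knows how to write - or justify.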

(takes off his crazy-talk hat)

Oh goodness. Coffee's done. Catch ya later.

(Ninja-ed on some points by Ahruman.)
Sleep? Who needs sleep? Got game. No need sleep.
Disembodied
Jedi Spam Assassin
Posts: 6885
Joined: Thu Jul 12, 2007 10:54 pm
Location: Carter's Snort

Re: RTFM...

Post by Disembodied »

Ahruman wrote:
No-one’s saying that an unrestrained AI will treat humanity that way, only that we have no a priori way of knowing, and appeal to our own moral codes is simply irrelevant. With no way of knowing, and no reasonable way of assigning probabilities, we must assume the worst.
True. Our own moral codes are a product of history, and our primate and mammalian inheritance. At a book event I once asked Ken MacLeod and Iain Banks about their respective (and opposite) takes on AI in their fictions: Ken MacLeod said his negative view of AIs came from his knowledge of the sorts of people who program computers. :D
CommonSenseOTB
---- E L I T E ----
Posts: 1397
Joined: Wed May 04, 2011 10:42 am
Location: Saskatchewan, Canada

Re: RTFM...

Post by CommonSenseOTB »

You know, I'm not as concerned about the "moral compass" as some of you. I expect the "moral compass" will cause the computers to try and help us be better humans. :D

What I am concerned about is giving computers "react" programming that uses associations to decide a course of action. This can result in unforeseen actions caused by a specific combination of associations that may not be possible to foresee. Very likely, similar "react" programming will end up in similar devices (hopefully not the same programming in all devices, for gods' sake). When the "bug" (an unforeseen association combination) happens, all the similar "react" programs will, perhaps, have the same "bug" nearly simultaneously, and if they control critical infrastructure we will have to pull the plug (if we can) before the damage becomes extensive.

The "moral compass" in a computer is a good thing, not waiting for it to be developed and installing an instinct "react" programming without it is a very, very bad idea and WILL be our undoing. :twisted:

Creating bug-free programming on a first release is next to impossible, and even then not without many, many tries. Therefore, if we can't risk the consequences of a bug, we shouldn't install "react" programming. Not without a "moral compass" program that acts as a safeguard.
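The safeguard described above amounts to filtering the reactive policy's output through a separate "moral compass" check before it reaches the hardware. A minimal sketch, with all names my own and purely illustrative:

```python
def safeguarded_react(stimulus, policy, is_safe, fallback="halt"):
    # Look up the instinctive "react" response...
    action = policy.get(stimulus, fallback)
    # ...but let an independent safeguard veto it. A shared "bug" in
    # the policy then degrades every device to the safe fallback,
    # instead of the same bad reaction firing everywhere at once.
    return action if is_safe(action) else fallback
```

The design point is that `is_safe` is developed and audited separately from the reactive policy, so one bug cannot take out both layers.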