Public perceptions of AI and humanoid robotics

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.

Re: Public perceptions of AI and humanoid robotics

Post by Murotsu » Tue May 24, 2016 9:53 pm

stelarfox wrote: First, about Chernobyl and other nuclear disasters: radiation effects take years to appear, and the people in charge are powerful, so they can fairly easily cover things up or blame something else.
Suppose you lived near a nuclear power plant and your kid dies of a tumor twenty years later, and suppose it really did happen because of the radiation. How on earth do you prove that?
That's why the number of incidents seems so small.
I really don't think we should debate this in detail here. I've actually worked in that field for a number of years and took more than a few courses on it. Suffice it to say, I'd put 80% of what you can easily find on the internet about nuclear power down as utter BS, particularly when it comes to human effects. I've recently been pointing that out elsewhere regarding the sailors from the USS Ronald Reagan, a carrier, who are suing over Fukushima. Their claims are pure nonsense, and I don't say that lightly. Several have serious medical conditions, and I can sympathize that their plight is causing them and their families grief. But it isn't due to Fukushima.
stelarfox wrote: On The Stepford Wives: first, they needed a human to make them. Second, divorcing doesn't really solve much (and it isn't the point of the movie either). If you want my opinion, the only stupid one there was the guy who didn't replace his wife and caused the whole problem (also, the fact that the facility is totally unguarded and anyone can get in is questionable, to the point that it only works for the plot).
I mean, if I had been running it, the conversion would not be optional: no matter what the husband decided, the wife would get converted either way. So as the movie goes, she would have been turned even if he never tried. In any case, I would have ended the movie with the wife hating her new life so much that her head blows up, or she gets "erased" because she cannot handle it.
In the case of the movie, we are never told what it takes to make a Stepford wife. Since the protagonist confronts her own robot clone at the end, one can assume the makers don't need the original. Given that limited evidence, divorce would work; your version assumes the movie's worst case. Divorcing the original and then having a new, different wife made to order seems the better way to go, and the evidence the movie gives says it is clearly possible.
This eliminates the need to kill the original, or to go to the effort of making a duplicate so good that anyone would have a hard time telling the difference apart from her odd behavior. That fits with the movie.

stelarfox wrote: On the other side, about a government with those issues: you really give me a good idea for a good RP.
Situation: overpopulation, new law: every female with one kid or more is sent for robotization, and likewise anyone "not wanting" to get pregnant.
On top of that, any criminal gets the same, to reduce the demands on the planet.
Even so, to avoid chaos, the new robots need an owner or they self-destruct by programming within two minutes at most. They also fry themselves if they try to break any rule: 1) do not kill any human, by action or inaction; 2) obey your owner; 3) protect your owner's possessions, by action or inaction (protecting your owner is already covered by rule 1).
If you ask me, giving any machine 100% free will is stupid. (In fact I just thought of this, and I am not religious, so if this offends any religion I am sorry, but it's just a thought: in Creation, God is said to have given man free will and made us in his image. I wonder whether dying isn't just a "kill switch" installed in us, to be triggered if we don't behave. I mean, what would happen in a world where, if you killed someone, you died instantly, or you simply couldn't do it? Some people would say you then don't have free will. But really, do you? If you want to fly by yourself, without any tool, can you? If you want to leave Earth, can you? Do we really have free will, or do we just think we have it?) Sometimes I'm not sure how unlike machines humans and animals really are.
Free will is necessary unless you want what are essentially slaves. Sure, you can put limits on free will, but if you eliminate it you get true automatons, that is, mindless robots. Asimov's robot laws almost go there. That was the point of A Clockwork Orange: the criminal had his free will to commit crimes or violence removed entirely, and his previous victims, as well as his fellow gangsters (now police), all take advantage of his inability to defend himself and exact revenge.
On the other hand, having them programmed to obey an owner so long as it is not detrimental to the robot is fine. Having them unable to commit felonies or violent crimes while allowing for self-defense is fine. But the argument that people can't fly is simply a straw man. People also can't breathe underwater. So? Humans have limitations; so would robots. I typically give mine many in my stories.
Some of mine include: an inability to be bored. They instead wait for input or for something they are programmed to do, and they can do repetitive tasks without boredom.
They don't get tired, but their need to consume electricity means they must have regular access to a power source. Humans can survive "in the wild" better.
Humans have the advantage in creativity and initiative. The robots won't do new or odd things on their own; they do as they're programmed to.
The idea I like to put forward is that each needs the other, because both are better off that way.
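Here is a rough Python mock-up of those limitations, just to make them concrete (the class, tasks, and thresholds are all invented for illustration, not taken from any of my stories):

    # Toy model of the limits above: the robot idles without boredom,
    # tracks a battery it must recharge, and only runs tasks it was given.
    class DomesticRobot:
        def __init__(self, tasks):
            self.tasks = list(tasks)   # only what it was programmed to do
            self.battery = 100.0       # percent

        def step(self):
            if self.battery < 20.0:
                self.battery = 100.0
                return "recharging"                # hard need: a power source
            if self.tasks:
                self.battery -= 5.0
                return "doing programmed task: " + self.tasks.pop(0)
            return "idle, waiting for input"       # no boredom, no self-set goals

    robot = DomesticRobot(["fold laundry"] * 3)
    for _ in range(5):
        print(robot.step())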

By making government a major obstacle you have a "Them" to deal with. So, I do things like:

Robots must have an owner. If they don't, the government hunts them down for (variously) disposal, repurposing, etc.
Robots that show too much initiative, or in some cases are self-aware, are not allowed. This varies, as it dulls them down quite a bit if you use it. On the other hand, if you have a robot character that can show initiative, it is a good way to force action.
Robots are controlled in public when on their own. This is no different from a police state where the cops demand "Papers, please" at every turn.

If you have true AI, sooner or later heavy-handed suppression of intellect will result in rebellion. The alternative is something worse than North Korea.

But if you had a government that was converting the poor, the homeless, criminals, what have you, into robots to make them "useful" to society, along with converting women to control the population, you would have several great storylines to play with.

Another storyline is the government converting people to be shipped off-world to colonize elsewhere. I'll let that one go, as I have several nifty variants I can write up as stories.


Re: Public perceptions of AI and humanoid robotics

Post by dale coba » Tue May 24, 2016 9:59 pm

stelarfox wrote: On The Stepford Wives: first, they needed a human to make them. Second, divorcing doesn't really solve much (and it isn't the point of the movie either). If you want my opinion, the only stupid one there was the guy who didn't replace his wife and caused the whole problem (also, the fact that the facility is totally unguarded and anyone can get in is questionable, to the point that it only works for the plot).
I mean, if I had been running it, the conversion would not be optional: no matter what the husband decided, the wife would get converted either way. So as the movie goes, she would have been turned even if he never tried. In any case, I would have ended the movie with the wife hating her new life so much that her head blows up, or she gets "erased" because she cannot handle it.
Ira Levin was only writing social satire in the horror genre. He was neither interested in nor aware of the erotic and sci-fi aspects. Converting and programming is the most complete domination possible (not a struggle against domination, but the permanent state after conversion is achieved). The novel couldn't show us views of the robotized wives, but the movie portrayed them. Levin ultimately wished he'd never written it (or so I choose to remember it at one in the morning).

- Dale Coba


Re: Public perceptions of AI and humanoid robotics

Post by Esleeper » Wed May 25, 2016 7:00 am

Murotsu wrote: If you have true AI, sooner or later heavy-handed suppression of intellect will result in rebellion. The alternative is something worse than North Korea.
That's nonsense. A rebellion won't happen if they have no reason to rebel. If it does happen, it will be for the same reason as with any other oppressed group that has chosen to rebel; where there is no oppression, there will be no rebellion. So the simplest way out of this false dilemma is to establish that they are humanity's equals and give them the same "human rights", plain and simple.

If you must, you can program in a strong aversion to harming humans, the same way an ordinary person finds murdering their parents unthinkable while still technically being able to do it. They would have the ability, but not the desire. Think about it: most humans don't need laws etched into their brains to keep them from being sociopaths, so why should robots be treated differently? All it needs is the implementation of an equivalent of the conscience.
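A crude Python sketch of the difference between a hard law and an aversion (the actions and weights are all made up for illustration): the harmful option stays physically available, it just never wins.

    # An aversion as a cost, not a prohibition: "harm human" is scored,
    # not blocked, but the conscience-like penalty means it never gets picked.
    ACTIONS = {
        "help human":   {"benefit": 5.0, "harm": 0.0},
        "ignore human": {"benefit": 0.0, "harm": 0.0},
        "harm human":   {"benefit": 8.0, "harm": 1.0},  # tempting but harmful
    }
    AVERSION = 1000.0  # large but finite: a desire not to, not an inability

    def score(action):
        spec = ACTIONS[action]
        return spec["benefit"] - AVERSION * spec["harm"]

    print(max(ACTIONS, key=score))  # -> "help human"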


Re: Public perceptions of AI and humanoid robotics

Post by darkbutflashy » Wed May 25, 2016 10:40 am

Okay… trying to keep this rant civil…

Murotsu, first: another engineer talking here, electronics and electrical power. I agree that the technology depicted in films is mostly nonsense. I even agree that the danger of a nuclear power plant exploding is overrated. But *when* one explodes, things are a mess, because the particles emitted don't spread evenly; they create hot spots. And you can't see them, yet you emphatically do not want to incorporate them into your body, just as you don't want to accidentally inhale a load of asbestos. At Fukushima, things could have gone much worse if the wind hadn't blown nearly all of the particles out over the Pacific; cleaning up the mess will take years. From Chernobyl, Eastern Europe and southern Germany got a noticeable load. You couldn't go into the woods and pick mushrooms. Well, you could pick them, but you had better not eat them. Same with deer, and both are everyday food in Germany.

And I think your view of governmental supervision and responsibility is greatly distorted. That is exactly what The China Syndrome is about: how the nuclear industry differs from, say, the computer industry. The nuclear industry exists to create the source material for atomic bombs. That is the very reason the Chernobyl reactors were built the way they were: to make it easy to obtain plutonium. That is the very reason the Windscale reactors were built the way they were. And it applies just as well to the reactors you have to shut down to obtain the plutonium: all the reactors in France and Great Britain, those in the U.S., and what AECL offers.

It's all about the plutonium. Governments want it because they want to be able to build atomic bombs. And that is why nuclear power companies are special and receive protection, rather than the people receiving protection when something bad happens.

In Germany, we had governments (in the 1970s) that made no secret of that connection. That is why the anti-nuclear movement got so strong: people feared that nuclear power would lead to nuclear warfare, and that the war would happen at home. Obviously. The German government gave up building its own nuclear fuel reprocessing plant in 1987, after years of rioting mothers and grannies around the proposed building site.

An example: near where I live there are two old nuclear waste disposal sites, both in old underground salt mines, one from East and one from West Germany, right at the old border and only 30 km apart: Morsleben and the Asse II mining shaft. Both have been unintentionally flooded with water, so it was decided to retrieve all the material from them. In the process it was discovered that the government does not actually know what kinds of nuclear material are deposited there. There are records, but they were found to be fake when some of the retrieved barrels were opened and checked. The officials expected that of the East German records, but those turned out to be top secret yet mostly correct. The West German records, however… let's put it this way: they even found the corpse of an ape in one barrel which, paradoxically, didn't seem to contain anything radioactive when measured from outside. I take the ape as a practical joke by the people responsible for packing the barrels.

Another funny case was that of a man found in possession of 14 enriched uranium pellets in 2007. For years he had told people he had the stuff: former Green Party environment minister Joschka Fischer, Greenpeace, other environmental organisations, journalists. No one believed him, so he finally buried it in his garden. Years later he had his lawyer send a letter to Chancellor Merkel, and only then did the officials make their first move. No one had missed the 110 g of enriched uranium.

So there is no supervision of the nuclear industry by the government. Forget that idea.


Re: Public perceptions of AI and humanoid robotics

Post by darkbutflashy » Wed May 25, 2016 11:42 am

To get back on topic: I think nuclear power only works as an example of dangerous technology if you compare it to something at least slightly related to the atomic bomb, a weapon intended for indiscriminate murder.

Putting self-aware robots into the mix, we are at the Terminator. Fascinatingly, we have already gone beyond that step.

There is such killing machinery out there at work right now. You give it coordinates and it annihilates whatever is there. It doesn't question the order. It doesn't weigh any personal risk. It doesn't double-check whether the coordinates belong to a valid target. And when it drones an Afghan wedding party or a well-known hospital site, a press officer says, "Whoops! Very sorry, but… you know… collateral damage. Very sorry again."

The self-awareness is all in the system. It's not the president who decides whom to kill; it's the small number of people who rig the system. You don't know who these people are. Often they themselves don't know they are in power. It's a cybernetic organism using humans and technology to kill humans.

Please tell me how this differs from the nuclear industry, and how people could *not* see it as evil.


Re: Public perceptions of AI and humanoid robotics

Post by Esleeper » Wed May 25, 2016 9:18 pm

darkbutflashy wrote: To get back on topic: I think nuclear power only works as an example of dangerous technology if you compare it to something at least slightly related to the atomic bomb, a weapon intended for indiscriminate murder.

Putting self-aware robots into the mix, we are at the Terminator. Fascinatingly, we have already gone beyond that step.

There is such killing machinery out there at work right now. You give it coordinates and it annihilates whatever is there. It doesn't question the order. It doesn't weigh any personal risk. It doesn't double-check whether the coordinates belong to a valid target. And when it drones an Afghan wedding party or a well-known hospital site, a press officer says, "Whoops! Very sorry, but… you know… collateral damage. Very sorry again."

The self-awareness is all in the system. It's not the president who decides whom to kill; it's the small number of people who rig the system. You don't know who these people are. Often they themselves don't know they are in power. It's a cybernetic organism using humans and technology to kill humans.

Please tell me how this differs from the nuclear industry, and how people could *not* see it as evil.
Notice that it's easy. All you have to do is supply the right justification: "collateral damage", in this case.

But ultimately it will take more than a few inflexible yet poorly defined rules to assuage people's fears, as I mentioned. What we need is to give self-aware robots a true sense of morality.

On that note, the most recent issue of Scientific American (June 2016, pages 58-59, if you want to read it yourself) actually brings up an interesting workaround. The full thing is too long to restate here, but the gist is this: a sentient AI should be given the purpose of maximizing the realization of human values (so it has no purpose of its own and thus no need to preserve itself), but it is initially programmed with no certainty about what those human values are, and it is given the ability to learn what they might be through experience. The end result, according to the writer (some big-shot professor of computer science), is a robot that won't mind being deactivated, since it will interpret deactivation as meaning it did something counter to human values, and by extension a robot that adopts the values of its makers to the point where it is both unable and unwilling to go against them.
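Here is my own toy rendering of that gist in Python (this is my reading of the article, not its actual model; the hypotheses, payoffs, and likelihoods are all invented): the robot is unsure which goal the human holds, so a hand on the off switch becomes evidence that continuing is wrong, and shutting down scores higher.

    # Two hypotheses about what the human wants, with the robot's prior.
    belief = {"wants task done": 0.6, "wants robot stopped": 0.4}

    # Illustrative payoffs: continuing is good only if the human wants it.
    PAYOFF = {
        ("continue", "wants task done"): 1.0,
        ("continue", "wants robot stopped"): -10.0,
        ("shut down", "wants task done"): 0.0,
        ("shut down", "wants robot stopped"): 0.0,
    }

    def expected_value(action):
        return sum(p * PAYOFF[(action, h)] for h, p in belief.items())

    # The human reaches for the off switch: strong evidence for "stop".
    likelihood = {"wants task done": 0.05, "wants robot stopped": 0.95}
    unnorm = {h: belief[h] * likelihood[h] for h in belief}
    z = sum(unnorm.values())
    belief = {h: p / z for h, p in unnorm.items()}

    for action in ("continue", "shut down"):
        print(action, round(expected_value(action), 2))
    # "shut down" now wins: deactivation is read as "I was doing it wrong".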

Me, I still think a human-equivalent conscience and genuine free will would be more beneficial for both humans and robots, but that's merely my opinion.


Re: Public perceptions of AI and humanoid robotics

Post by darkbutflashy » Thu May 26, 2016 2:50 am

That's why I drew the big arc to politics.

The cybernetic organism called "government" already justifies all of its acts with human welfare (the U.S. government was only an example, because it wields a lot of visible power and justifies a "war on terrorism" with human welfare). Human welfare doesn't ensure you are not killed. It doesn't even guarantee responsibility. That's why I think the problem is not the perception of artificial intelligence; it is giving someone the power to kill people without taking any responsibility for its actions. The same way governments handle it. The same way suicide bombers handle it.

So, to make an A.I. behave "good", you don't need it to learn abstract "values" one way or another; you have to put a price on misbehaviour. And we don't know how to do that yet. How do you punish someone who has nothing valuable to lose?


Re: Public perceptions of AI and humanoid robotics

Post by Esleeper » Thu May 26, 2016 6:51 am

darkbutflashy wrote: That's why I drew the big arc to politics.

The cybernetic organism called "government" already justifies all of its acts with human welfare (the U.S. government was only an example, because it wields a lot of visible power and justifies a "war on terrorism" with human welfare). Human welfare doesn't ensure you are not killed. It doesn't even guarantee responsibility. That's why I think the problem is not the perception of artificial intelligence; it is giving someone the power to kill people without taking any responsibility for its actions. The same way governments handle it. The same way suicide bombers handle it.

So, to make an A.I. behave "good", you don't need it to learn abstract "values" one way or another; you have to put a price on misbehaviour. And we don't know how to do that yet. How do you punish someone who has nothing valuable to lose?
You don't punish it directly; that's my point. What you need to do is give it a sense of right and wrong that is compatible with human existence, and make it feel compelled to follow that sense, so that even given the opportunity to misbehave without consequences, it wouldn't: not because it would be unable to, but because the misbehavior would feel fundamentally wrong to it. In short, you make it feel that it does bear responsibility for its actions, and it acts accordingly.

Besides, punishments often end up encouraging the very behavior they're supposed to stop, so that approach is doomed to fail. It doesn't work on humans, and there's no reason to believe it would work any better on robots.


Re: Public perceptions of AI and humanoid robotics

Post by stelarfox » Thu May 26, 2016 7:32 am

As for your last point, that punishment doesn't work on humans: in my opinion that's because most punishments bear no relation to what the person actually experiences as punishment.
A certain kind of person will take pride in being punished, for example, and that's the problem: he will bear it like a cross, and the punishment becomes encouragement. Why? Because under our laws we are all equals, which is a terrible mistake, because most humans are NOT equal to one another. We have different tastes and different desires, and "laws and punishments", like most things, are made for the masses.
Now, if you take the time to learn what a sociopath really likes, and all his odd behaviours, then in my opinion you might be able to apply corrective measures to him (or her), not just jail or anything so mundane. That is the true reason punishment fails to correct.
I'm not saying what I just described is truly feasible; the time and resources needed might be too great.
Also, some sociopaths may have mental or physiological problems that prevent anything from helping them.

Robots, on the contrary, do not have this problem (unless they are broken, of course), and even if they are broken, since they were made by man, it should be possible to notice the fault and fix it. So what is a mental or physiological problem in a human does not apply to robots, and I don't think "it doesn't work on humans" implies "it cannot work on robots".
And as I explained before, what would happen with this sense of right and wrong if, when they do something good, they get a lot of pleasure, and when they do something wrong, they get just a little pain: little enough to tell them they shouldn't do it, but increasing fast if they keep doing it? To the point that if one decides to pull a trigger and kill someone in cold blood, for example, it first has to think about it, then reach out its hand, aim, and so on; but the thought of killing comes first and fastest, so why shouldn't it be possible to simply stop it there?
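A quick Python sketch of that escalating feedback (the rates and cutoff are invented): the first wrong move costs almost nothing, but repetition ramps the signal fast enough to interrupt the act before it completes.

    def corrective_signal(repeats, base=0.1, growth=3.0, cap=10.0):
        # Tiny at first, grows geometrically with each repetition.
        return min(base * (growth ** repeats), cap)

    for attempt in range(6):
        pain = corrective_signal(attempt)
        print("attempt", attempt, "signal", round(pain, 2))
        if pain >= 10.0:
            print("action inhibited before completion")
            break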

Another thought: what would your opinion be if you could put a chip in every person on Earth whose ONLY purpose was to stop them killing other people? Do you think that's immoral? I don't, because what right do you have to kill someone else? Do you have more right to kill that person than that person has to live?
That was just an example, but I think 99% of people would prefer to know they are safe, because they themselves would never have thought of wanting to kill anyone.

On the other hand, why would something programmed to do something, or even a robot that was human before but wanted to be one, rebel, if it is doing what it was mostly programmed to do?


Re: Public perceptions of AI and humanoid robotics

Post by darkbutflashy » Thu May 26, 2016 10:41 am

Esleeper wrote: You don't punish it directly; that's my point. What you need to do is give it a sense of right and wrong that is compatible with human existence, and make it feel compelled to follow that sense, so that even given the opportunity to misbehave without consequences, it wouldn't.
I can't follow. You put the emphasis on the AI having no certainty about what those human values are in the first place, while being given the ability to learn what they might be through experience. Now you are arguing in the opposite direction.

Maybe I shouldn't have written "punish", because my idea is to put a price on misbehaviour.

See how kids learn these "moral values". You don't refrain from hitting lil' Martin because a moral authority punishes you. No, you don't hit lil' Martin because there is a price on it: Martin may hit you back; Martin may not share his toys any longer; Martin may start to cry and get sweets to calm down (which, of course, he won't share). Hitting lil' Martin has a price, and the value of hitting someone for fun is lower than that price.
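A little Python sketch of that negotiation (the responses and costs are invented): no authority sets the price; the learner estimates it from what Martin actually does, then compares it with the fun of hitting him.

    import random
    random.seed(1)

    # Responses Martin has shown, with how costly each one felt.
    RESPONSES = [("hits back", 4.0), ("stops sharing toys", 3.0),
                 ("cries and gets sweets", 2.0)]
    FUN_OF_HITTING = 1.5

    estimated_price = 0.0
    for trial in range(1, 6):
        response, cost = random.choice(RESPONSES)
        estimated_price += (cost - estimated_price) / trial  # running average
        print("trial", trial, "- Martin", response,
              "- estimated price", round(estimated_price, 2))

    print("worth hitting?", FUN_OF_HITTING > estimated_price)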


Re: Public perceptions of AI and humanoid robotics

Post by stelarfox » Thu May 26, 2016 11:04 am

In fact, we are all talking here about concepts that truly cannot be measured, and that is most of the problem.
As you said, if you hit a boy he may react in many ways. But that is exactly my point: robots will not react in so many different ways if they are truly programmed,
unless they are programmed to react "randomly". And then the only question is whether they would be programmed to disobey, because if not, a baby robot would never be able to "punch you back", for example.

And the main difference is that robots follow logic, while humans don't always. You can, of course, program a robot to make others believe it is not acting logically, and you can also program a robot to try to kill everyone and everything around it. Now, to the point: if you actually manage to create a robot similar to a human brain, then everything will depend on how you programmed it and how similar you made it.
If you program it to behave like a human (even with the learning) but to try to be good (with feedback on what is right and wrong, for example), it will most likely be good, unless terrible things happen to it.
Now, if you do the same but don't care about morals, lie to it all the time, and treat it as badly as you can, it will probably learn to do the same, and may destroy itself or someone else. All of this assumes, of course, that its electronic mind is as close as it can be to a human one. So in the end, I think it will depend on how it was made.
The only silly, stupid thing would be to give freedom to robots that can build other robots; that, to me, should be the true limit of their freedom. Because if a robot can make another robot (and I mean freely make it as whatever it wants), it can program that robot not to be good; it may notice that "protecting humans" is too hard, because humans are so self-destructive, and simply leave that out of the new robot.
But truly, there are too many variables to know what might happen. On the other hand, I think that IF robots ever come close to that level of behaviour, they will be legally restrained, for one very simple reason: what factory is going to create something if it loses the rights over it?


Re: Public perceptions of AI and humanoid robotics

Post by stelarfox » Thu May 26, 2016 11:12 am

For example, I happen to be reading a story about "borrowed personality" gynoids, and this is what you should NEVER do with one.

Your android's personality is what makes her a Donated Personality. Drawn from a real human woman, it has a body of life experience, likes and dislikes, mannerisms, habits, emotions, and so on. Left to itself, the personality would behave freely, thinking its own thoughts and doing as it pleases.

Her director is what keeps her personality in line. It is a robotic mind that knows the limits and obligations placed on the android. Left to itself, the director would be your slave, doing what you require of it without emotion or personality.

Why? Because the two will be in constant conflict with each other. From what I have asked several psychologists (not just one), likes, dislikes, and mannerisms really can be changed, especially if very strong positive stimuli enter the equation.
Think about drugs: most women would never have sex just to get drugs, but if one of them falls into addiction, she will most likely care very, very little about having sex to get a dose.
Now, you cannot truly addict a robot with drugs, but you can addict it with positive feedback. Eventually the mannerisms you don't want fade away, while the ones you want settle into place.
Granted, this is just my own research (it isn't even my specialty), but I have been testing, asking, and reading about it for about 20 years (I think I have asked some 300 girls about this).
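A toy Python sketch of that conditioning loop (the traits and rates are invented): whatever gets positive feedback strengthens, whatever goes unreinforced fades.

    traits = {"sarcasm": 1.0, "tidiness": 1.0}
    REINFORCED = {"tidiness"}          # the owner rewards only this trait
    REWARD_RATE, DECAY_RATE = 0.3, 0.1

    for week in range(8):
        for t in traits:
            if t in REINFORCED:
                traits[t] *= 1.0 + REWARD_RATE   # strengthened by feedback
            else:
                traits[t] *= 1.0 - DECAY_RATE    # fades without it

    print({t: round(w, 2) for t, w in traits.items()})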


Re: Public perceptions of AI and humanoid robotics

Post by Esleeper » Thu May 26, 2016 5:44 pm

darkbutflashy wrote:
Esleeper wrote: You don't punish it directly; that's my point. What you need to do is give it a sense of right and wrong that is compatible with human existence, and make it feel compelled to follow that sense, so that even given the opportunity to misbehave without consequences, it wouldn't.
I can't follow. You put the emphasis on the AI having no certainty about what those human values are in the first place, while being given the ability to learn what they might be through experience. Now you are arguing in the opposite direction.

Maybe I shouldn't have written "punish", because my idea is to put a price on misbehaviour.

See how kids learn these "moral values". You don't refrain from hitting lil' Martin because a moral authority punishes you. No, you don't hit lil' Martin because there is a price on it: Martin may hit you back; Martin may not share his toys any longer; Martin may start to cry and get sweets to calm down (which, of course, he won't share). Hitting lil' Martin has a price, and the value of hitting someone for fun is lower than that price.
And for the record, I didn't say I agreed with that article; I don't think it looks at the problem in the right way. In the same manner, I think you're misunderstanding it too. The "prices" you suggest are entirely arbitrary (and in fact are punishments in the behaviorist sense: negative stimuli used to suppress an unwanted behavior); instead of teaching right and wrong, they teach the agent to avoid being caught. Plus, there is just no way to implement them short of turning the AI off every time it misbehaves, and in that case why give it the ability to make its own decisions in the first place? You may as well strip it of all ability to think independently and have it act only when directly commanded. Worse, if it ever reaches a state where you are no longer in a position to exact whatever arbitrary price you put on misbehaviour (say, it sabotages its own off switch), it will be free to do whatever it wants, and with the threat of punishment gone for good, it will have absolutely no reason not to turn on its creators.

Do yourself a favor and look up Lawrence Kohlberg's stages of moral development. What you're describing is the first stage, which is noted to be inherently egotistical. Do you really think that's what a sentient AI ought to be: fundamentally selfish, unwilling to do the right thing without the threat of punishment looming over its head?

EDIT: Stelarfox, this all assumes the AI has true free will and no hard programming limiting what it can and cannot do. I find it peculiar to assume that such an AI would turn into a murderous psychopath, since by that logic nearly everyone capable of independent thought would be an amoral killer. Contrary to what darkbutflashy suggests, putting a price on misbehavior is worthless, since it only teaches people to avoid paying the price, not that some things are inherently wrong because they hurt others.


Re: Public perceptions of AI and humanoid robotics

Post by stelarfox » Thu May 26, 2016 5:58 pm

How can software sabotage hardware?
I agree that misuse of software can generate undesired behaviours or even break something, but if you make the kill switch truly hardware, it CANNOT be subverted by software. And if on top of that the robots cannot repair themselves (because, as I said before, allowing that would be totally stupid), then they have no way of rebelling; the most they can do is sabotage themselves, which basically means killing themselves.
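For what it's worth, the standard embedded pattern for this is a hardware watchdog, and a toy Python simulation of it looks like this (the counter here stands in for a separate timer circuit, which is exactly the part software can't reach in the real thing):

    MISSED_LIMIT = 3  # missed heartbeats before the hardware cuts power

    def run(heartbeats):
        # heartbeats[i] is True if software sent its pulse on tick i.
        missed = 0
        for tick, pulse in enumerate(heartbeats):
            missed = 0 if pulse else missed + 1
            print("tick", tick, "pulse" if pulse else "silence", "missed =", missed)
            if missed >= MISSED_LIMIT:
                return "hardware cut power at tick %d" % tick
        return "still running"

    # Software behaves for a while, then stops cooperating: the watchdog wins.
    print(run([True, True, False, False, False]))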


Re: Public perceptions of AI and humanoid robotics

Post by darkbutflashy » Thu May 26, 2016 7:54 pm

Esleeper wrote: The "prices" you suggest are entirely arbitrary (and in fact are punishments in the behaviorist sense: negative stimuli used to suppress an unwanted behavior); instead of teaching right and wrong, they teach the agent to avoid being caught.
Ah, no. Because my example wasn't about an authority deciding right or wrong.

It is all negotiated between you and lil' Martin. When you hit him, it's HE who puts a price on your behaviour, and you cannot avoid being caught by him. You can avoid him hitting you back, of course, but he would still know. If you hit him in the dark, he would put all his effort into finding out who hit him, and so on. What's right or wrong is determined by trying things out: how far you can go. That is what you cited, though didn't agree with.

In my view the idea is good, but it won't work for A.I./human relations, for exactly the reason you wrote: a cybernetic organism would learn to avoid being caught, because it faces not an equal but a higher authority that cannot be questioned, only avoided.


Re: Public perceptions of AI and humanoid robotics

Post by stelarfox » Thu May 26, 2016 8:00 pm

Well, I admit that given how machines work today, none of what we have said here will happen, because whatever else is true, they cannot yet learn or be sentient.
Even so, most of what I said was an IF: if they could be sentient, or if a mind could somehow be copied into them. So anything we're saying now is really shouting into the Grand Canyon.
I do keep up to date with the advances (at least what can be checked on the internet and the like), and as a programmer and electronics engineer with 20 years in the field, I think we will need at least 50 years to build something good enough to keep in a house just to cook, clean, and do the laundry. Add maybe 10 or 20 more years for ones that can pass for human, as long as you don't watch them too long and figure it out. After that, who knows.

Of course, it's entirely possible that someone is creating something in the shadows as we speak (though I doubt it a lot), but I'm just saying what I think based on what I know, not claiming it's the truth or accurate.


Re: Public perceptions of AI and humanoid robotics

Post by Esleeper » Thu May 26, 2016 8:44 pm

darkbutflashy wrote:
Esleeper wrote: The "prices" you suggest are entirely arbitrary (and in fact are punishments in the behaviorist sense: negative stimuli used to suppress an unwanted behavior); instead of teaching right and wrong, they teach the agent to avoid being caught.
Ah, no. Because my example wasn't about an authority deciding right or wrong.

It is all negotiated between you and lil' Martin. When you hit him, it's HE who puts a price on your behaviour, and you cannot avoid being caught by him. You can avoid him hitting you back, of course, but he would still know. If you hit him in the dark, he would put all his effort into finding out who hit him, and so on. What's right or wrong is determined by trying things out: how far you can go. That is what you cited, though didn't agree with.

In my view the idea is good, but it won't work for A.I./human relations, for exactly the reason you wrote: a cybernetic organism would learn to avoid being caught, because it faces not an equal but a higher authority that cannot be questioned, only avoided.
And so do a lot of humans. It's telling that very few people actually hold onto that primitive fear of punishment as they grow older; instead they develop things like an appreciation for social conventions, and they continue to act morally even when the "price" is nonexistent. In reality, people can and will avoid being caught by lil' Martin, or make him realize that exacting his price will end up hurting him instead. Plus, the "price" only works if the person hitting lil' Martin cares about what he does. If you don't care about his price when you hit him, the entire system falls apart, because one of the two parties couldn't care less about negotiation. That's why it's doomed to fail, and that's before you again take into account that the price is completely arbitrary. In the US, the price for theft is imprisonment or a fine; in some parts of the Middle East, it's having your hands chopped off. Tell me, which one is the "real" price?

The point is, an AI in this position would be absolutely indistinguishable from a human in terms of mental capacity, so assuming it cannot act as a human can makes no sense. The only real difference between them and us is the physical medium of our intellectual and mental abilities, and I am consistently baffled as to why anyone assumes the contrary, other than simple human arrogance and an unwillingness to see sentient nonhuman entities for what they are.

The only way to stop people from letting their delusional paranoia get the better of them is to treat sentient AI as equals from the beginning. Because OF COURSE they'll rebel if you treat them like slaves and give them no say in the matter: history has shown time after time that if you enslave a group of people and consistently treat them as less than human, they will inevitably revolt. If you want blindly obedient servants, they shouldn't have the capacity to make decisions on their own in the first place. Intelligence is neither needed nor particularly desirable for most of the tasks we want robots to do anyway.

All that aside, if and when sentient AI comes into being, it will almost certainly be built with an equivalent of the conscience, one that emphasizes self-sacrifice and an obligation toward its creators, much as filial piety did in China for most of its history. It wouldn't even need Asimov-esque laws; something as simple as feeling pleasure or happiness on carrying out an order, or an inherent revulsion toward harming innocents, is all that is needed. Honestly, the way you set it out, everyone follows the law only out of fear of punishment, whereas in reality many do so out of a desire to live up to the expectations of society, or because they believe that following those laws benefits society as a whole.
stelarfox wrote: How can software sabotage hardware?
I agree that misuse of software can generate undesired behaviours or even break something, but if you make the kill switch truly hardware, it CANNOT be subverted by software. And if on top of that the robots cannot repair themselves (because, as I said before, allowing that would be totally stupid), then they have no way of rebelling; the most they can do is sabotage themselves, which basically means killing themselves.
Hardware is useless without the right software running on it. Disabling the kill switch could be as simple as cutting the right wire, or making sure the switch never receives its deactivation signal. And who's to say they couldn't find a sympathetic human to remove the kill switch entirely?
