Public perceptions of AI and humanoid robotics

General chat about fembots, technosexual culture, or any other ASFR-related topics that do not fit into the other categories below.

Post by stelarfox » Thu May 26, 2016 11:04 am

In fact we are all talking about concepts and things that truly cannot be measured, and that's most of the whole problem.
As you said, if you hit a boy he may react in a lot of ways, but that's exactly my point: robots will not react in so many different ways if they were truly programmed, unless they were programmed to react "randomly". The only real question is "will it be programmed to disobey?", because if not, a baby robot will never be able to "punch you back", for example.

And the main difference is that robots follow logic, while humans do not always do so. You can of course program a robot to make others believe it is not acting logically, and you can also program a robot to try to kill everyone and everything around it. Getting to the point: if you actually manage to create a robot with something similar to a human brain, then it will depend on how you programmed it and how similar to a human you made it.

If you program it to behave as a human (even with the learning) but to try to be good (with feedback about what is wrong and bad, for example), it will most likely be good, unless terrible things happen to it.

Now, if you do the same but do not care about morals, lie to it all the time, and treat it as badly as you can, it will probably learn to do the same and may destroy itself or someone else. All of this assumes its electronic mind is as close as it can be to a human one, so in the end I think it will depend on how it was made.

The only silly and stupid thing would be to give freedom to robots that can build other robots; that truly should be the limit of their freedom for me. If a robot can make another robot (and I mean freely make it as whatever it wants), it can program the new one not to be good: it may notice that "protecting humans" is too hard because humans are so self-destructive, and simply not program that into the new robot.

But truly there are too many variables to know what may happen. On the other hand, I think that IF robots ever get near that level of behaviour, they will be legally restrained, for one really simple reason: which factory is going to create something if it loses the rights over it?


Post by stelarfox » Thu May 26, 2016 11:12 am

For example, I happen to be reading a story about "borrowed personality" gynoids, and this is exactly what you should NEVER do with one:

Your android's personality is what makes her a Donated Personality. Drawn from a real human woman, it has a body of life experience, likes and dislikes, mannerisms, habits, emotions, and so on. Left to itself, the personality would behave freely, thinking its own thoughts and doing as it pleases.

Her director is what keeps her personality in line. It is a robotic mind that knows the limits and obligations placed on the android. Left to itself, the director would be your slave, doing what you require of it without emotion or personality.

Why? Because the two will be constantly in conflict with each other. From what I have asked a few psychologists (and not just one), likes, dislikes, and mannerisms really can be changed, especially if very big positive stimuli are brought into the equation.

Think about drugs: most women would never have sex just to get drugs, but if the same woman falls into addiction, she will very likely care very little about having sex to get a dose.

Now, you cannot truly addict a robot with drugs, but you can addict it with positive feedback, and then eventually the mannerisms you do not want can be faded away, while the ones you want her to acquire settle into place.
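A minimal sketch of what that positive-feedback shaping could look like, assuming a toy model where each mannerism carries a propensity weight that grows when rewarded and decays otherwise (the names and numbers here are invented for illustration, not taken from the story):

[code]
import random

# Toy model: each mannerism has a propensity weight; behaviour that
# earns positive feedback is reinforced, unreinforced behaviour fades.
propensity = {"old_mannerism": 1.0, "desired_mannerism": 1.0}
LEARNING_RATE = 0.1   # how strongly positive feedback reinforces
DECAY = 0.02          # how quickly unreinforced behaviour fades

def choose_mannerism():
    """Pick a mannerism with probability proportional to its weight."""
    total = sum(propensity.values())
    r = random.uniform(0, total)
    for name, weight in propensity.items():
        r -= weight
        if r <= 0:
            return name
    return name  # floating-point fallback

for step in range(1000):
    acted = choose_mannerism()
    rewarded = acted == "desired_mannerism"  # the "positive feedback"
    if rewarded:
        propensity[acted] += LEARNING_RATE
    for name in propensity:
        if name != acted or not rewarded:
            # everything that wasn't just rewarded decays a little
            propensity[name] = max(0.01, propensity[name] * (1 - DECAY))

print(propensity)  # desired_mannerism dominates; old_mannerism has faded
[/code]

Run long enough, the unwanted mannerism's weight bottoms out and the android almost always produces the rewarded one, with no "conflict" needed.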
Even so, this is just my own research (it's not my specialty), but I have truly been testing, asking, and researching for about 20 years (I think I may have asked about 300 girls about this).


Post by Esleeper » Thu May 26, 2016 5:44 pm

darkbutflashy wrote:
Esleeper wrote: You don't punish it directly, that's what I'm saying. What you need to do is give it a sense of right and wrong that is compatible with human existence, and make it feel compelled to follow that sense so even if it was given the opportunity to misbehave with no consequences, it wouldn't do it.
I can't follow. You put the emphasis on "without having any certainty as to what those human values are in the first place, and is also given the ability to learn what they might be through experience". Now you argue in the contrary direction.

Maybe I shouldn't have written "punish", because my idea is to put a price on misbehaviour.

See how kids learn these "moral values". You don't refrain from hitting lil' Martin because a moral authority is punishing you. No, you don't hit lil' Martin because there is a price on it. Martin may hit you back. Martin may not share his toys any longer. Martin may start to cry and get some sweets to calm down (which, of course, he won't share). Hitting lil' Martin has a price. And the value of hitting someone for fun is lower than this price.
And for the record, I didn't say I agreed with that article; I think it's not looking at the problem in the right way. In the same manner, I think you're misunderstanding it too. The "prices" you suggest are all entirely arbitrary (and in fact are punishments in the behaviorist sense; that is, they're negative stimuli used to suppress an unwanted behavior), and instead of teaching it right and wrong, it teaches it to avoid being caught. Plus, there's just no way you could implement it short of turning it off every time it misbehaves, and in that case why even give it the ability to make its own decisions in the first place? You may as well strip it of all ability to think independently and force it to act only when directly commanded to do so. Worse, if it ever reaches a state where you're no longer in a position to exact whatever arbitrary price you put on misbehaviour (e.g. it sabotages its own off switch), it will be free to do whatever it wants and will have absolutely no reason not to turn on its creators, with the threat of punishment gone for good.
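To put toy numbers on that (a minimal sketch; the values and names are invented for illustration): when the penalty only applies if the agent is caught, simple expected-value math already tells it that shrinking its odds of detection pays better than dropping the misbehaviour.

[code]
BENEFIT = 5.0    # payoff of the misdeed
PENALTY = 20.0   # the "price" exacted when caught

def expected_value(misbehave, p_caught):
    """Expected payoff when punishment only lands if the act is detected."""
    if not misbehave:
        return 0.0
    return BENEFIT - PENALTY * p_caught

print(expected_value(True, 0.5))   # -5.0: behaves... while detection is likely
print(expected_value(True, 0.1))   #  3.0: once evasion is learned, misdeeds pay
print(expected_value(False, 0.5))  #  0.0: honesty never beats skilled evasion
[/code]

Nothing in that payoff structure rewards learning right from wrong; every gradient points toward lowering p_caught.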

Do yourself a favor and look up Lawrence Kohlberg's stages of moral development. What you're describing is the first stage, which is noted to be inherently egotistical. Do you really think that's what sentient AI ought to be: fundamentally selfish and unwilling to do the right thing without the threat of punishment looming over its head?

EDIT: Stelarfox, this all assumes the AI has true free will and no hard programming limiting what it can and cannot do. I find it peculiar to assume that such an AI would turn into a murderous psychopath, since by that logic nearly everyone capable of independent thought would be an amoral killer. Contrary to what darkbutflashy suggests, putting a price on misbehavior is worthless, since it only teaches people that they ought to avoid paying that price, not that some things are inherently wrong because they hurt others.


Post by stelarfox » Thu May 26, 2016 5:58 pm

How can software sabotage hardware?
I agree that misuse of software can generate undesired behaviours or even break something, but if you make the killswitch truly hardware, it CAN'T be subverted by software. And if on top of that they cannot repair themselves (because, as I said before, it would be totally stupid to allow that), then they have no way of rebelling; the most they can do is sabotage themselves, basically killing themselves.
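A rough sketch of the sort of fail-safe wiring I mean, simulated in Python purely for illustration (in a real robot the watchdog would be a hardware timer and a relay in the power path, outside anything the software can patch). The operator's console sends heartbeats, and silence on that channel for any reason (operator lets go, or the robot jams the channel) opens the relay:

[code]
import threading
import time

last_heartbeat = time.monotonic()
power_on = True
TIMEOUT = 0.5  # seconds of silence before the relay opens

def hardware_watchdog():
    """Stands in for the hardware timer; robot software has no handle on it."""
    global power_on
    while power_on:
        if time.monotonic() - last_heartbeat > TIMEOUT:
            power_on = False  # relay opens, robot loses power
        time.sleep(0.05)

def operator_console(channel_jammed):
    """The human operator holding the dead-man switch down."""
    global last_heartbeat
    for _ in range(20):
        if not channel_jammed.is_set():
            last_heartbeat = time.monotonic()
        time.sleep(0.1)

channel_jammed = threading.Event()
threading.Thread(target=hardware_watchdog, daemon=True).start()
console = threading.Thread(target=operator_console, args=(channel_jammed,))
console.start()

time.sleep(0.5)
channel_jammed.set()  # rebellious software blocks the stop channel...
console.join()
print("power_on =", power_on)  # False: blocking the signal just trips the cutoff
[/code]

Wired like that, the only "sabotage" left to the software is to starve itself of heartbeats, which is exactly the self-destruction I described.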


Post by darkbutflashy » Thu May 26, 2016 7:54 pm

Esleeper wrote: The "prices" you suggest are all entirely arbitrary (and in fact are punishments in the behaviorist sense; that is, they're negative stimuli used to suppress an unwanted behavior), and instead of teaching it right and wrong, it teaches it to avoid being caught.
Ah, no. My example wasn't about an authority deciding right or wrong.

It's all negotiated between you and lil' Martin. When you hit him, it's HIM who puts a price on your behaviour. And you cannot avoid being caught by him. You can avoid him hitting you back, of course, but he would still know. If you hit him in the dark, he would put all his effort into finding out who hit him, and so on. What's right or wrong is determined by trying things out, by seeing how far you can go. That's what you cited, though you didn't agree with it.

In my view, the idea is good, but it won't work for A.I./human relations, for exactly the reason you have written: a cybernetic organism would learn to avoid being caught, because it doesn't face an equipotential organism but a higher authority which cannot be questioned, only avoided.


Post by stelarfox » Thu May 26, 2016 8:00 pm

Well, I admit that with the way machines work today, none of what we've said here will happen, because no matter what, they cannot yet learn or be sentient.
Even so, most of what I said was an IF: if they could be sentient, or if a mind could somehow be copied into them. So anything we are actually saying is really kind of yelling into the Grand Canyon.
Now, I am up to date on the advances (at least what is allowed to be checked on the internet and the like), and as a programmer and electronic engineer with 20 years in the field, I think we will need at least 50 years to make something good enough to have in a house just to cook the food, clean, and do the laundry. I'd say maybe 10 or 20 years more for ones that can pass for human without someone watching them for long and figuring it out, and after that, who knows.

Of course it's totally possible that someone is creating something in the shadows as we speak (though I doubt it a lot); I'm just saying what I think based on what I know, not claiming it's the truth or accurate.


Post by Esleeper » Thu May 26, 2016 8:44 pm

darkbutflashy wrote:
Esleeper wrote: The "prices" you suggest are all entirely arbitrary (and in fact are punishments in the behaviorist sense; that is, they're negative stimuli used to suppress an unwanted behavior), and instead of teaching it right and wrong, it teaches it to avoid being caught.
Ah, no. My example wasn't about an authority deciding right or wrong.

It's all negotiated between you and lil' Martin. When you hit him, it's HIM who puts a price on your behaviour. And you cannot avoid being caught by him. You can avoid him hitting you back, of course, but he would still know. If you hit him in the dark, he would put all his effort into finding out who hit him, and so on. What's right or wrong is determined by trying things out, by seeing how far you can go. That's what you cited, though you didn't agree with it.

In my view, the idea is good, but it won't work for A.I./human relations, for exactly the reason you have written: a cybernetic organism would learn to avoid being caught, because it doesn't face an equipotential organism but a higher authority which cannot be questioned, only avoided.
And so do a lot of humans. It's telling that very few people actually hold on to that primitive fear of punishment when they grow older, because they go on to develop things like an appreciation for social conventions and continue to act morally even when the "price" is nonexistent. In reality, people can and will avoid being caught by lil' Martin, or make him realize that exacting that price on your behavior will end up hurting him instead. Plus, the "price" only works if the person hitting lil' Martin cares about it. If you don't care about his price when you hit him, your entire system falls apart, because one of the two parties couldn't care less about negotiation. That's why it's doomed to fail, and that's before you again take into account that the price is still completely arbitrary. In the US, the price for theft is imprisonment or a fine, but in some parts of the Middle East it's having your hands chopped off. Tell me, which one is the "real" price?

The point is, an AI in this position would be absolutely indistinguishable from a human in terms of mental capacity, so assuming it cannot act as a human can makes no sense. The only real difference between them and us is the physical medium of our intellectual and mental abilities, and I am consistently baffled as to why there should be any assumption to the contrary, other than simple human arrogance and an unwillingness to see sentient nonhuman entities for what they are.

The only way to stop people from letting their delusional paranoia get the better of them is to treat sentient AI as equals from the beginning. Because OF COURSE they'll rebel if you treat them like slaves and keep them from having any say in the matter; history has proven time after time that if you enslave a group of people and consistently treat them as less than human, they will inevitably revolt. If you want blindly obedient servants, they shouldn't have the capacity to make decisions on their own in the first place. Intelligence is neither needed nor particularly desirable for most of the tasks we want robots to do anyway.

All that aside, if and when sentient AI comes into being, it will almost certainly be built with an equivalent of a conscience, one that emphasizes self-sacrifice and an obligation towards its creators, in much the same way that filial piety functioned in China for much of its history. It wouldn't even need to be Asimov-esque laws; something as simple as being made to feel pleasure or happiness when it carries out an order, or an inherent sense of revulsion towards harming innocents, would be all that is needed. Honestly, the way you set it out, everyone follows the law only because they're afraid of punishment, whereas in reality many do so out of a desire to live up to the expectations of society or because they believe that following those laws benefits society as a whole.
stelarfox wrote: How can software sabotage hardware?
I agree that misuse of software can generate undesired behaviours or even break something, but if you make the killswitch truly hardware, it CAN'T be subverted by software. And if on top of that they cannot repair themselves (because, as I said before, it would be totally stupid to allow that), then they have no way of rebelling; the most they can do is sabotage themselves, basically killing themselves.
Hardware can't do anything without the right software running on it. Disabling the killswitch could be as simple as cutting the right wire, or making sure the killswitch never receives the signal that triggers it. And who's to say they couldn't find a sympathetic human to remove the killswitch entirely?
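For contrast with the dead-man sketch above: a killswitch that waits for an explicit trigger message (fail-deadly wiring) is exactly the design this attack defeats, while a heartbeat design fails toward shutdown. A toy illustration, with both functions invented for the comparison:

[code]
def fail_deadly(messages):
    """Power stays on unless a 'kill' message actually arrives."""
    return "kill" not in messages

def fail_safe(messages):
    """Power stays on only while 'heartbeat' messages keep arriving."""
    return "heartbeat" in messages

blocked_channel = []  # the robot filters out every incoming message
print(fail_deadly(blocked_channel))  # True: the robot keeps running
print(fail_safe(blocked_channel))    # False: the robot powers down
[/code]

Physically cutting the wire, or a sympathetic human with a screwdriver, defeats either wiring, of course.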
