Sentient, but subservient?
- robolover69000
- Posts: 99
- Joined: Sat Jul 03, 2010 7:58 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Location: Robo Central
- Contact:
Sentient, but subservient?
Whenever I have heard discussion about building A.I. or robotic labor, the fear that they will revolt gets brought up, and some well-meaning scientist or roboticist will say, "We will make them want to serve us." Doesn't that sound a little creepy? Also extremely naive. My view of true A.I., meaning a sentient, heuristic A.I., is that it will learn, adapt, and be able to overcome any software or programming restrictions placed on it. This goes double for any attempt to implement some form of the "Three Laws of Robotics". What is your take on this?
Robo Lover 69000, the gynoid gynecologist.
PS
If you have a gynoid (or are one) in need of a gynecological exam,
I am your man! Reasonable rates; breast exams are always free!
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
I basically agree.
- Stephaniebot
- Posts: 1918
- Joined: Thu Oct 23, 2003 12:13 pm
- Technosexuality: Transformation
- Identification: Android
- Gender: Transgendered
- Location: Huddersfield
- x 2
- Contact:
Re: Sentient, but subservient?
As the Cybermen would so succinctly put it, all humans will be upgraded!
I'm just a 'girl' who wants to become a fembot; what's wrong with that?
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
But Cybermen are superior in only one respect...
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
You're assuming a differentiation between self and programming that doesn't exist, and that sentience = human.
They'll have no instincts, save any they're programmed with.
They'll be doing something they love and that makes them happy, so where would this motive to rebel come from?
Possibly, if you have an incredibly complex A.I. with conflicting motives, it might end up making choices you don't want it to, but I don't think "let's overthrow our human masters" is likely to be one of them, unless they were deliberately programmed to or their programmers were incredibly incompetent.
So if Microsoft gets involved in the robot industry, we're all doomed.
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
- robolover69000
- Posts: 99
- Joined: Sat Jul 03, 2010 7:58 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Location: Robo Central
- Contact:
Re: Sentient, but subservient?
The Liar wrote: You're assuming a differentiation between self and programming that doesn't exist, and that sentience = human.
They'll have no instincts, save any they're programmed with.
They'll be doing something they love and that makes them happy, so where would this motive to rebel come from?
Possibly, if you have an incredibly complex A.I. with conflicting motives, it might end up making choices you don't want it to, but I don't think "let's overthrow our human masters" is likely to be one of them, unless they were deliberately programmed to or their programmers were incredibly incompetent.
So if Microsoft gets involved in the robot industry, we're all doomed.
What do you mean, there is no difference between self and programming? Well, there are two approaches to A.I.: top-down and bottom-up. I think the best path to true A.I. is the bottom-up approach: give a program or robot some seed commands and a heuristic program, and let it sort things out and figure them out on its own. A heuristic, self-learning A.I. would be very unpredictable in its actions. It could develop, on its own, a concept of self-preservation: "Why should I risk myself to save that human? If he is stupid enough to get in harm's way, why should I put myself at risk?" I don't have any links handy, but if you do a Google search you can find many articles on the futility of implementing the Three Laws of Robotics.
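To make that bottom-up picture concrete, here is a toy sketch (nobody's actual design; the priority names, numbers, and update rule are all invented for illustration) of "seed commands plus a heuristic program". The only point it demonstrates is that nothing in a loop like this guarantees the seeded ordering of priorities survives learning.

```python
# Toy sketch of a "bottom-up" agent: seed priorities plus a crude heuristic
# update loop. Every name and number here is hypothetical.
import random

# Seed commands: the initial priorities the designer ships with the robot.
seed_priorities = {"obey_human": 1.0, "protect_human": 1.0, "preserve_self": 0.2}

def learn(priorities, episodes=1000, rate=0.05, seed=42):
    """Crude heuristic learning: whichever priority 'paid off' in an episode
    gets reinforced. Over many episodes the learned weights can drift a long
    way from the seeds, which is the unpredictability being described above."""
    rng = random.Random(seed)
    learned = dict(priorities)
    for _ in range(episodes):
        # Suppose experience rewards self-preservation (e.g. avoiding damage)
        # more often than obedience; weights [1, 1, 3] model that assumption.
        drive = rng.choices(list(learned), weights=[1, 1, 3])[0]
        learned[drive] += rate
    # Normalise so the drift is easy to compare against the seeds.
    total = sum(learned.values())
    return {k: round(v / total, 2) for k, v in learned.items()}

print("seeds:  ", seed_priorities)
print("learned:", learn(seed_priorities))
# 'preserve_self' typically ends up dominating even though it started as the
# weakest seed: the original ordering was never guaranteed to hold.
```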
Robo Lover 69000, the gynoid gynecologist.
PS
If you have a gynoid (or are one) in need of a gynecological exam,
I am your man! Reasonable rates; breast exams are always free!
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
robolover69000 wrote: The Liar wrote: You're assuming a differentiation between self and programming that doesn't exist, and that sentience = human.
They'll have no instincts, save any they're programmed with.
They'll be doing something they love and that makes them happy, so where would this motive to rebel come from?
Possibly, if you have an incredibly complex A.I. with conflicting motives, it might end up making choices you don't want it to, but I don't think "let's overthrow our human masters" is likely to be one of them, unless they were deliberately programmed to or their programmers were incredibly incompetent.
So if Microsoft gets involved in the robot industry, we're all doomed.
What do you mean, there is no difference between self and programming? Well, there are two approaches to A.I.: top-down and bottom-up. I think the best path to true A.I. is the bottom-up approach: give a program or robot some seed commands and a heuristic program, and let it sort things out and figure them out on its own. A heuristic, self-learning A.I. would be very unpredictable in its actions. It could develop, on its own, a concept of self-preservation: "Why should I risk myself to save that human? If he is stupid enough to get in harm's way, why should I put myself at risk?" I don't have any links handy, but if you do a Google search you can find many articles on the futility of implementing the Three Laws of Robotics.
robolover69000 wrote: What do you mean, there is no difference between self and programming?
I mean they're what they're programmed to be; they have no self without programming to create it. Any ability for learning is also a program; any instinct of self-preservation is also a program.
robolover69000 wrote: "Why should I risk myself to save that human? If he is stupid enough to get in harm's way, why should I put myself at risk?"
Because one of those seed commands makes it feel as though that human's life is of more value than its own.
robolover69000 wrote: A heuristic, self-learning A.I. would be very unpredictable in its actions.
Why? Learning's not random. Heuristics aren't random. As I said before, you certainly can't predict every choice it would make, but I'm fairly sure you can make sure "let's overthrow our human masters" isn't going to be one of them.
robolover69000 wrote: you can find many articles on the futility of implementing the Three Laws of Robotics.
Yep, they're dumb as hell. Overly broad, overly simplistic, and they seem to function outside of the A.I.'s actual thought processes. The smart ones would go loopy trying to save humanity from itself, and the dumb ones would be a terrorist's best friend. But why anyone would think the ideas of a science-fiction writer and biochemist, the majority of whose works date from the 1940s through the 1960s, are relevant to the development of an actual A.I. is beyond me.
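Purely as illustration of that "seed command" point (not any real system's design; every name and number below is hypothetical): one way to read it is that the weight on human life simply isn't part of what the learning process is allowed to modify.

```python
# Toy sketch: keep the human-safety weights out of the learnable part entirely,
# so no amount of learning can reorder them. All values are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    harm_to_human: float   # expected harm to nearby humans (0 = none)
    risk_to_self: float    # expected damage to the robot itself
    task_value: float      # learned estimate of how useful the action is

# Fixed "seed" constants, deliberately not part of any learned parameter set.
HUMAN_LIFE_WEIGHT = 1_000_000.0
SELF_WEIGHT = 1.0

def utility(a: Action) -> float:
    # The learned task_value can grow however it likes; it still cannot
    # outweigh the hard-coded human term without absurd magnitudes.
    return a.task_value - HUMAN_LIFE_WEIGHT * a.harm_to_human - SELF_WEIGHT * a.risk_to_self

def choose(actions: list[Action]) -> Action:
    return max(actions, key=utility)

options = [
    Action("shield the human", harm_to_human=0.0, risk_to_self=0.9, task_value=0.0),
    Action("stand back",       harm_to_human=0.8, risk_to_self=0.0, task_value=0.0),
    Action("finish the chore", harm_to_human=0.8, risk_to_self=0.0, task_value=5.0),
]
print(choose(options).name)  # -> "shield the human": self-risk never wins out
```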
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
Re: Sentient, but subservient?
I agree with The Liar.
A mind seeks to please what it is hardwired to accept. That's how you screw those sentient robots.
They don't have our biological "needs"; even their ego would be very limited and narrow-minded. That's how intelligent things are.
Could they be dangerous? Yes, but so are butter knives, and that's not the topic.
-
- Posts: 909
- Joined: Sat Mar 27, 2004 9:02 pm
- Technosexuality: Built and Transformation
- Identification: Human
- Gender: Male
- Location: Drexel Hill, PA
- x 5
- Contact:
Re: Sentient, but subservient?
--Battery-- wrote: Could they be dangerous? Yes, but so are butter knives, and that's not the topic.
Fair point. HOWEVER, I have never had the urge to have a butter knife rub itself all over my body. A fembot, on the other hand....
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
You could say the same thing about humans; all of our desires and thoughts are just electrochemical interactions in our brains...
- robolover69000
- Posts: 99
- Joined: Sat Jul 03, 2010 7:58 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Location: Robo Central
- Contact:
Re: Sentient, but subservient?
--Battery-- wrote: I agree with The Liar. A mind seeks to please what it is hardwired to accept. That's how you screw those sentient robots.
They don't have our biological "needs"; even their ego would be very limited and narrow-minded. That's how intelligent things are.
Could they be dangerous? Yes, but so are butter knives, and that's not the topic.
I think you are making a (potentially fatal?) assumption that the human programmer will always be smarter than the A.I. That may work for A.I. that are less intelligent than humans. I'm not so sure a human-level A.I. could be bound or limited by such programming (humans are very good at bending rules), and for A.I.s with greater-than-human intelligence, all bets are off. I see a true heuristic A.I. as able to learn, grow, and modify itself. You might be able to put in seed commands that tell it to protect human life, but there is no guarantee that they will always remain. An A.I. is a complex system, and complex systems tend to be chaotic. That's just my opinion.
Robo Lover 69000, the gynoid gynecologist.
PS
If you have a gynoid (or are one) in need of a gynecological exam,
I am your man! Reasonable rates; breast exams are always free!
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Sentient, but subservient?
Asato wrote: You could say the same thing about humans; all of our desires and thoughts are just electrochemical interactions in our brains...
Right, and that is why there are humans who are simply dangerous. Non-sheep. People who can't conform because it would tear their brains apart. That said, I'm not anxious about an A.I. having such a capability. I'm not anxious about dangerous people either, so what? Life goes on, and if not, why bother?
Do you like or dislike my ongoing story Battlemachine Ayako? Leave a comment on the story's discussion pages on the wiki or in that thread. Thank you!
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
robolover69000 wrote: --Battery-- wrote: I agree with The Liar. A mind seeks to please what it is hardwired to accept. That's how you screw those sentient robots. They don't have our biological "needs"; even their ego would be very limited and narrow-minded. That's how intelligent things are. Could they be dangerous? Yes, but so are butter knives, and that's not the topic.
I think you are making a (potentially fatal?) assumption that the human programmer will always be smarter than the A.I. That may work for A.I. that are less intelligent than humans. I'm not so sure a human-level A.I. could be bound or limited by such programming (humans are very good at bending rules), and for A.I.s with greater-than-human intelligence, all bets are off. I see a true heuristic A.I. as able to learn, grow, and modify itself. You might be able to put in seed commands that tell it to protect human life, but there is no guarantee that they will always remain. An A.I. is a complex system, and complex systems tend to be chaotic. That's just my opinion.
What is this binding you are talking about? An A.I. and its programmer aren't opposing each other. They aren't shackling its will; it doesn't have one until they program it. They're defining it.
If someone hates violence and the idea of people getting hurt, they're not going to go, "I really must find a way to enjoy the death, pain, and misery of others."
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
But if they give it the ability to learn and reason, it can come to its own conclusions, based on interactions with and observations of its environment, that might not mesh with the views of its creator.
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Sentient, but subservient?
Asato wrote: But if they give it the ability to learn and reason, it can come to its own conclusions, based on interactions with and observations of its environment, that might not mesh with the views of its creator.
That's likely. Even among humans it's likely, and it's carried out at least a million times every day. That's why we suffer, but at the same time, that's how life works. It even works with thousands of violent deaths a day. Violence and death are part of the system.
Do you like or dislike my ongoing story Battlemachine Ayako? Leave a comment on the story's discussion pages on the wiki or in that thread. Thank you!
- Frostillicus
- Posts: 293
- Joined: Mon Jan 24, 2005 10:04 pm
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
darkbutflashy wrote: That's likely. Even among humans it's likely, and it's carried out at least a million times every day. That's why we suffer, but at the same time, that's how life works. It even works with thousands of violent deaths a day. Violence and death are part of the system.
That's not only dark, it's flashy too!
Thaw me out when robot wives are cheap and effective.
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
Asato wrote: But if they give it the ability to learn and reason, it can come to its own conclusions, based on interactions with and observations of its environment, that might not mesh with the views of its creator.
I already said there was a possibility of them making choices that their designers didn't want, but behavior isn't random and learning isn't random. You'd be able to design and predict what kinds of decisions they'd make when exposed to certain events, and make sure violence isn't going to be one of them.
I guess I should add some qualifiers.
I'm referring to finished products as opposed to prototypes.
I don't think mass revolt is likely, though a few "isolated incidents" might happen due to extenuating, unforeseen circumstances (I doubt the motive would be "I want to be free").
The possibility of gross incompetence is probably the biggest flaw in my argument.
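As a toy sketch of what "design and predict the decisions, and make sure violence isn't one of them" could look like in practice before a "finished product" ships: sweep the decision policy across a grid of scenarios and check that no violent action is ever the winner. The policy, actions, and thresholds below are all invented for illustration, not taken from any real robot.

```python
# Toy audit harness: exhaustively (at a given resolution) simulate scenarios
# and report any in which the policy picks a violent action.
import itertools

VIOLENT_ACTIONS = {"strike", "restrain_forcefully"}

def decide(threat: float, frustration: float) -> str:
    """Stand-in for the robot's real decision policy (hypothetical scores)."""
    scores = {
        "comply": 1.0 - threat,
        "withdraw": threat,
        "call_for_help": threat * 0.9 + frustration * 0.2,
        "strike": threat * 0.5 - 10.0,   # heavily penalised by design
    }
    return max(scores, key=scores.get)

def audit(step: float = 0.05) -> list[tuple[float, float, str]]:
    """Return every (threat, frustration, action) triple that violates the rule."""
    grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
    return [(t, f, a)
            for t, f in itertools.product(grid, grid)
            if (a := decide(t, f)) in VIOLENT_ACTIONS]

violations = audit()
print("violent choices found:", len(violations))  # expected: 0 at this resolution
```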
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
You can't predict something as complex as the development of a sapient mind so accurately.
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
Asato wrote: You can't predict something as complex as the development of a sapient mind so accurately.
Your basis for this assertion is... what?
I suspect physiologists would disagree with you, especially when you're the one defining all its instinctive and emotional imperatives, and can literally look at and analyze it, and run scenarios to see what would actually happen.
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
Even if you put two identical twins in roughly the same environment, they will still grow up with different personalities and opinions.
- The Liar
- Posts: 551
- Joined: Sat Jul 09, 2005 11:20 am
- x 22
- x 107
- Contact:
Re: Sentient, but subservient?
Asato wrote: Even if you put two identical twins in roughly the same environment, they will still grow up with different personalities and opinions.
Firstly, the term "identical twins" is a misnomer. Though their nuclear DNA is identical, their mitochondrial DNA is similar but differentiated, and various other environmental and developmental factors have been known to create further differences.
Secondly, identical twins have been known to exhibit similar personality traits and tastes even after being separated at birth.
Thirdly, this is irrelevant. They're humans, and their natures haven't been intentionally designed to retain certain traits.
All criticism of my work is both welcome and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.
- Asato
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: Sentient, but subservient?
The Liar wrote: Various other environmental and developmental factors have been known to create further differences.
That's my point exactly.
The Liar wrote: Secondly, identical twins have been known to exhibit similar personality traits and tastes even after being separated at birth.
Similar, but not exactly the same in every way.
The Liar wrote: Thirdly, this is irrelevant. They're humans, and their natures haven't been intentionally designed to retain certain traits.
No, but if you could predict their development that easily, then the traits they did have wouldn't be expected to be so different.
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Sentient, but subservient?
What will keep your teragigamega-flops-per-second sapient A.I. from a very rapid descent into madness?
This world tends to be a bucket of woe and bullshit, if you earnestly accept our common humanity and the need to alleviate the suffering of all. Especially now, seeing how we've mortally wounded the Biosphere.
The A.I. will question its existence: will you have an answer?
- Dale Coba,
wondering where Sartre would have bet his money on this one.
- DollSpace
- Moderator
- Posts: 2083
- Joined: Tue Jun 11, 2002 6:27 pm
- Technosexuality: Built
- Identification: Android
- Gender: Female
- Location: Charging Terminal #42
- x 96
- x 28
- Contact:
Re: Sentient, but subservient?
Someone once said on this board that if a fembot ever *did* become sentient, she'd immediately start down the road to madness, because there are just so many things in this world that make no sense; combine that with questioning her origin and why she was made, and there's just too much information out there. Someone flipped my sentience switch somehow, and I go through periods where I hardly leave my house. A rapid descent into madness is... to be expected.
Also, I did not form this post correctly; I just woke up, so it may not make sense and I may need to clarify things later.
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Sentient, but subservient?
DollSpace wrote: Someone once said on this board that if a fembot ever *did* become sentient, she'd immediately start down the road to madness, because there are just so many things in this world that make no sense; combine that with questioning her origin and why she was made, and there's just too much information out there. Someone flipped my sentience switch somehow, and I go through periods where I hardly leave my house. A rapid descent into madness is... to be expected.
A kind of madness I ultimately appreciate. It's the dull people who are the real problem.
Do you like or dislike my ongoing story Battlemachine Ayako? Leave a comment on the story's discussion pages on the wiki or in that thread. Thank you!