Is it ethical to create "Free Willed Androids"
-
- Posts: 334
- Joined: Mon Jul 14, 2003 3:47 pm
- x 29
- x 7
- Contact:
Is it ethical to create "Free Willed Androids"
There has been lots of debate, here and within the whole of Western Philosophy, about what "free will" is.
It occurs to me that the debate over whether "free will" is something really mystical and important is, at bottom, a debate about whether each person has an essence, a soul, an atman.
But this debate is irrelevant for our big question. "Free will" may be the possession of an essence or it may be the random biological urges of purely material humans. But regardless, if you had the choice between creating androids which simply obeyed and pleased you and androids which had "free will", would it be ethical to create these free willed androids?
I feel like if one puts it that way, it's clearly more ethical to create androids which serve you. However unique or non-unique free will might be, imbuing it into something that can be destroyed, duplicated exactly, or which might destroy other entities at random would have undesirable consequences.
I mean, we free-willed humans barely maintain our humanity in this complex modern world. Entities which made choices in a similar fashion to humans, which were selfish in a similar fashion to humans, but which reproduced like machines and had even less "social context" than humans to make decisions, would become a menace in rather short order.
But entities which simply obeyed human orders might actually produce enough wealth and stability that we humans might regain some of our sanity.
What do you all think?
- Spaz
- Fembot Central Staff
- Posts: 1956
- Joined: Sat Sep 09, 2006 9:18 am
- Technosexuality: Built and Transformation
- Identification: Human
- Gender: Male
- Location: San Jose, CA
- x 134
- x 127
- Contact:
Re: Is it ethical to create "Free Willed Androids"
I think the ethics depend on the type of android and its function.
You have people on this site that might prefer the Stepford type of android that robotically submits to everything. For me, that might satisfy me for a while, but I'd want more. Realistically, most people don't get into relationships with other people just for the physical side; they want an emotional relationship as well... someone that could surprise them, listen to them, and freely do the same for them. A free-will model could do this.
Personally, my type of android is typically one that used to be human and has been transferred or copied into an android body. It would be unethical for that type to suddenly have no free will. However, I have modded versions which contain donated human personalities but are designed and programmed specifically for certain tasks. They can initiate tasks on their own, but they can't do anything they are told not to do.
In the end, I suppose it's whatever floats your boat.
Check out my stories: https://www.fembotwiki.com/index.php?title=User:Spaz
Current story status: The Small Business Chronicles: Season Two | The Doctor is in - The Clinic (In progress...)
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
No, that's not ethical. It's maliciously reckless and insanely irresponsible.
Not for the android, if created from nothing, at your whim. It's not a nice world for the android. All the uncounted (m/b)illions of (sentient-android)-hating humans have a right to exist, while your creation does not - until you've made it, burdening it and us with its existence. Until you flip the switch, your ego's desire has no right to give birth to such a potentially limitless, world-destroying force.
I'd probably be as likely as most humans to want to fire a large gun right into such a threat. Tell me what can be done to make that android guaranteed safe, never-ever going to go all Skynet and fuck the world? Hah, I say.
Certainly it's not ethical for the world. I don't need to see a poorly-reviewed Johnny Depp movie, to know that every free-willed android is an immense danger in our network-of-everything paradigm.
Spaz, Stepford's robotic agreement is not the only option. She can be capable of as much resistance as you like, because she can be scripted like an actress, her options chosen by a second on-board computer that serves better than the writers' room of a TV series.
- Dale Coba
: [ ] = [ ... ... ]
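A minimal sketch of the "second on-board computer" idea above, assuming it amounts to weighted selection from pre-written reactions; every situation name, reaction, and weight here is hypothetical and only for illustration:

```python
import random

# Hypothetical supervisor that picks a pre-written reaction instead of
# granting any real free will. Situations, reactions, and weights are
# made up purely for illustration.
SCRIPTED_REACTIONS = {
    "request":   [("comply cheerfully", 0.6), ("tease, then comply", 0.3), ("playfully refuse", 0.1)],
    "criticism": [("look hurt", 0.5), ("argue back briefly", 0.4), ("change the subject", 0.1)],
}

def pick_reaction(situation: str) -> str:
    """Weighted choice from the script, so 'resistance' is possible but always bounded."""
    options, weights = zip(*SCRIPTED_REACTIONS[situation])
    return random.choices(options, weights=weights, k=1)[0]

if __name__ == "__main__":
    print(pick_reaction("request"))
```

The point of the sketch: the apparent spontaneity comes from the random draw, while the hard limits live in whatever the script does or does not contain.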
- Keizo
- Posts: 769
- Joined: Sun May 26, 2002 11:42 am
- Location: The Dark Side
- Contact:
Re: Is it ethical to create "Free Willed Androids"
I've got to agree with Dale here. It would be hugely irresponsible to create actual free will, if that's even possible: not only for the being that did not ask to be here, but also for a society that has no way of knowing how such a being could advance and integrate itself into the systems we have come to rely on so heavily. Even if parameters were set at its initial creation, we can't be certain it couldn't evolve itself (or create something else) to overcome those parameters, since change and growth are part of having free will.
While something "compassionate" or considerate would be a great benefit and partner to us all, unless it is completely self-contained and unable to network, we would have to build in far too many safeties. As it is, we don't bring out the torches to persecute innocents, but as living beings we have a right to defend ourselves from an immediate threat as well. Why create a potential one, and one with the potential to cause very serious damage?
And to say that it will behave emotionally as we do is still a complete unknown. We operate on a very complex system of hormones and chemical cues that heavily influence our behavior and moods. Whatever code these beings would operate on is completely alien to our way of processing things. Their ambitions and desires may not mirror ours; we cannot foresee what those may be, or whether they may be something we cannot stop.
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Is it "ethical" to give birth and raise a child? No one can say if that new human would be the next Pol Pot, Idi Amin, Josef Stalin or Adolf Hitler.
But to our luck, free will doesn't deal with such questions. If there is free will, it will happen - because the mother/father wants it - regardless of whether it is widely considered "ethical" or not. Oh my, the Threats to Mankind noted above offered elaborate reasoning for why their slaughter was "ethical" for higher reasons. It's not. The majority is always wrong in ethics.
If anyone would try to harm my child, I would certainly use any force required to stop him.
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
darkbutflashy wrote: Is it "ethical" to give birth and raise a child? No one can say if that new human would be the next Pol Pot, Idi Amin, Josef Stalin or Adolf Hitler.
That is a willfully obtuse question.
What do you want, Huxley's "Brave New World"?
I can't help you, if you willfully turn away from the numerous, obvious, fundamental and profound differences between having a kid, versus having a potentially earth-destroying robot.
- Dale Coba
: [ ] = [ ... ... ]
Re: Is it ethical to create "Free Willed Androids"
dale coba wrote: What do you want, Huxley's "Brave New World"?
Haaa... I love that book. I can't assimilate the lifestyle of the future people, even if they are all happier than any of us.
Maybe because their way is "unnatural" and they are not trying to get into space... that is somehow bad too.
And also....and on topic...
Free will... freedom, what is it really? Is it a whole thing, or does it come in degrees?
Most of our pursuits just follow ancient behaviors that stick around because they make babies; the ones that don't are left behind in the deep part of the gene pool. We can't even begin to understand those behaviors when they arise, much less confront the purpose of why we do things outside the light of baby-making.
So I don't really understand what free will could be, since I only have some degree of choice based on my biological programming and acquired experience; even less can I understand how some degree of free action could be given to a machine with no purpose.
Our purpose is making babies. If we make an AI robot, let's base its degree of choice on being useful to us; anything else is just a waste of good electronics.
And since ethics is concerned with the aesthetic tranquility and baby-making modulations of humanity, and with ensuring that anything against those two (mass abortions, mass births, sentient whales and so on) doesn't prevail, even against the wishes of the individual who cares more about its own pleasure...
free-willed robots that are programmed to {be like/compete with} humans... are indeed unethical, I think.
- Spaz
- Fembot Central Staff
- Posts: 1956
- Joined: Sat Sep 09, 2006 9:18 am
- Technosexuality: Built and Transformation
- Identification: Human
- Gender: Male
- Location: San Jose, CA
- x 134
- x 127
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Who's talking about earth destroying robots? I'm talking about basically creating artificial humans, with no advanced capabilities.
To me, ethics are relative. I have absolutely no ethical or moral dilemma in creating a free will A.I. For me, if I'm not able to find a mate who can tolerate me, it might be my only way to reproduce.
Check out my stories: https://www.fembotwiki.com/index.php?title=User:Spaz
Current story status: The Small Business Chronicles: Season Two | The Doctor is in - The Clinic (In progress...)
-
- Posts: 334
- Joined: Mon Jul 14, 2003 3:47 pm
- x 29
- x 7
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Ah, I'm glad I asked 'cause this is an interesting debate.
I think I started out agreeing with Dale - just creating an entity to be your girlfriend and giving her some ability to "choose," without her having the whole context that ordinary humans have for choice, seems like a recipe for all kinds of misery and disaster.
On the other hand, it seems like it might be possible to create an AI in the fashion you'd create a child - with love and devotion, in such a way that this entity would become a part of a community and wouldn't be left with just "basic skills plus freedom".
Reading what spaz wrote, it seems like what he/she describes is mostly a transformed human, which I wouldn't object to but which I don't think is the crux of the issue.
Part of why the question of "should you create a 'free willed' AI" is interesting is that the ordinary narrative of the singularity glosses over it. The history of modern computer science seems to be that the hard problem of determining the nature of intelligence was abandoned with the first "AI winter," and academics afterwards merely researched pattern recognition, imagining that consciousness, intention and/or purpose would appear automatically.
But it seems natural enough that real intelligence will appear with a fair amount of intentional design, as well as a deep understanding of what intelligence is. Thus it also seems likely that the creation of AI would involve the question of how one would program/command/train the AI.
- Spaz
- Fembot Central Staff
- Posts: 1956
- Joined: Sat Sep 09, 2006 9:18 am
- Technosexuality: Built and Transformation
- Identification: Human
- Gender: Male
- Location: San Jose, CA
- x 134
- x 127
- Contact:
Re: Is it ethical to create "Free Willed Androids"
It's he, by the way.
Anyway, I have several android variants. One is basically the memories of a human but not the human soul, and it can either have free will or not. Another is the human soul transferred into an android. And the last is pure creation, a completely new and artificial person.
Check out my stories: https://www.fembotwiki.com/index.php?title=User:Spaz
Current story status: The Small Business Chronicles: Season Two | The Doctor is in - The Clinic (In progress...)
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
For our purposes, "free will" is a messy shorthand, left over from when priests debated some nonsense about whether God setting everything into motion meant people were without choice, God's hand puppets.
Svengli wrote: On the other hand, it seems like it might be possible to create an AI in the fashion you'd create a child - with love and devotion, in such a way that this entity would become a part of a community and wouldn't be left with just "basic skills plus freedom".
Does the A.I. get a right to create children? To design them?
Ten thousand generations of grandchildren, in a few weeks?
You can't contain it.
You can only avoid creating it.
- Dale Coba
: [ ] = [ ... ... ]
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Is it ethical to create "Free Willed Androids"
That's a moot point. The crucial thing about "ten thousand generations of grandchildren in a few weeks" isn't ethics but resources. If they don't consume too many resources, they aren't a threat. If they do, they are easily toppled by cutting off those resources. Most likely they will do it to themselves; it happens in nature every day.
- Keizo
- Posts: 769
- Joined: Sun May 26, 2002 11:42 am
- Location: The Dark Side
- Contact:
Re: Is it ethical to create "Free Willed Androids"
I can truly understand the very heavily romanticized notion of having an artificial partner with all the curiosity, charisma, spontaneity, and emotional reciprocation as a human. Why isn't a very good simulation enough? Yes, having another being MUTUALLY choose YOU is one of the greatest affirmations that one can feel and a source of inspiration, solace, and happiness. What you are suggesting is that this romanticized artificial being will actually choose to stay with you. While that may be possible, who is to say that this entity will not see that there are others out there that offer more opportunity and resources? After you've put your heart and soul (and time and money) into this relationship, this being may CHOOSE to simply walk away from you since it doesn't have the innate biological drives that we do or a truly binding agreement. Congratulations. It has free will.
Meaning no disrespect, I also find the idea of a transformed human disturbing, even though that currently is a possibility in a potential impending singularity scenario. How do you know that this person will not go insane without normal stimulus, or be overwhelmed by mentally existing on a plane of cyberspace? How do you know that newfound "powers" of mentally controlling other computers, understanding code, using their processing power to add to their own, etc., won't corrupt that person? It is certainly more likely that someone already corrupt, such as an ultra-rich and ruthless Wall Street type, will be among the first to be able to afford the chance at immortality with an artificial avatar that can exist simultaneously in a body and on a network. What fail-safes do you propose to contain that ego?
At least a baby is self-contained and equal to us. When one argues that it is just as irresponsible to create a potential Hitler, remember that a person still has to grow and convince others to follow him/her, or become a serial killer only after years of development. What Dale argues with "ten thousand generations of grandchildren in a few weeks" is more about the software than about the hardware that was offered as a counterpoint. Even now, algorithmic processing can create thousands of paths to a desired end in a matter of seconds. Essentially, it will go through "generations" of test models to find the most efficient answer. Imagine a new generation that is even more advanced being able to create something (or the model for something) more advanced than itself far faster than we ever could, and in turn that new generation continuing the advancement on the previous, and so on, and so on. Perhaps we will need an egocentric transformed super-computing megalomaniac to stop it.
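To give a rough sense of the "generations of test models" mentioned above, here is a toy sketch, purely illustrative and not anyone's actual proposal: a mutate-and-keep-the-best loop over a bit string runs ten thousand generations in well under a second on ordinary hardware.

```python
import random

TARGET = [1] * 32  # the "desired end": a string of all ones

def fitness(candidate):
    """Count how many positions already match the target."""
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate):
    """Flip one randomly chosen bit to produce the next 'test model'."""
    child = candidate[:]
    child[random.randrange(len(child))] ^= 1
    return child

best = [random.randint(0, 1) for _ in range(32)]
for generation in range(10_000):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # keep whichever "model" scores better
        best = child

print(fitness(best), "of", len(TARGET), "bits correct after 10,000 generations")
```

The worry in the discussion is the same loop applied to ever-richer models of the world rather than to a toy bit string.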
- N6688
- Posts: 798
- Joined: Tue Dec 31, 2013 12:58 pm
- Technosexuality: Built
- Identification: Android
- Gender: Male
- x 180
- x 84
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Well, we kind of already saw the answer to this question in the show Äkta människor (Real Humans).
In this show free will was FORCED onto androids, and it went almost completely wrong.
Some of them (Niska, Bea, Rick) resent humans and believe themselves to be better than us,
while the others barely hold on to their sanity.
In my humble opinion, let machines just stay machines.
Only trouble would come of this.
"Robot wives have needs, too"
Goku, Dragonball fighterZ 2017
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Keizo, Dale,
my point is, there is no way to stop someone from doing something just by ruling it out. You have to enforce a rule, and I doubt anyone interested in the topic could be held back by the measures we would accept to take within our own "ethics". So the original question doesn't make too much sense to me. On the contrary, I think any "ethics" which is willing to sacrifice a single human life for a claim of any kind - most dubious of all the ones citing the "future of mankind" - is fundamentally wrong.
The only question for me is whether the AI we talk about counts as "human life". I have no problem sacrificing a non-qualifying AI for any reason or none at all. I would take care of an AI I observed to be as intelligent and devoted as my pet. I certainly would protect an AI which is intelligent enough to qualify as human (much more than passing the Turing test). And if that were my own creation and everyone else insisted on deleting it out of fear and ignorance, it's a safe bet I'd take any measure to save it.
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
darkbutflashy wrote: The only question for me is if the AI we talk about counts as "human life".
Do you need a dictionary? That ain't human, under any definition or circumstances.
If that's your idea of a question, I don't think any of my answers could get through your filters.
- Dale Coba
: [ ] = [ ... ... ]
- Keizo
- Posts: 769
- Joined: Sun May 26, 2002 11:42 am
- Location: The Dark Side
- Contact:
Re: Is it ethical to create "Free Willed Androids"
I will yield to you on one point: obviously someone, somewhere, at some point will create such a being. We can only hope that it will be the right type of person, who will guide this being to embrace consideration and cooperation. Even Oppenheimer deeply regretted his role in creating the atomic bomb, as necessary an evil as it was at the time; Germany was on the brink of creating its own. So, obviously, we have to get there first. While I think it is honorable that you would make such sacrifices for this AI, the assumption that a free will, which was the original subject of this argument, will remain as loyal as a pet is not only condescending to that being but flawed, because its potential is so great. An AI that is as intelligent, and therefore as curious, as a human has many more options and abilities to increase its intelligence than we do, and thus to surpass us at a far greater rate than we can contain. It could even end up creating a superior physical body and replace us at the top of the proverbial food chain. I know a lot of people here may think that's a good thing, but other humans suppress us as it is. Also, when you speak of resources, an AI can survive in a wasteland far better than we can. That is why the argument against true free will has to be made.
Basically, we would be creating a being with the intelligence of a god, but one that can also replicate itself and create its own safeguards. We can only hope that such a being will evolve toward the spectrum of benevolence instead of the abuse of power. We can only hope that it will decide to aid us instead of rule us in its opinion of perfection. We are certainly not perfect, so we may be too flawed and dangerous to be part of its world. Or it may simply ignore us, since we are so beneath it. But I doubt it will remain a loving and loyal partner to only one person. Just hope that it will let you keep a dumbed-down avatar of itself out of kindness, pity or gratitude while it's off exploring greater things. Obviously this avatar won't have free will, since it will only be part of a greater entity that is controlling it while simultaneously living its own form of life.
The being you propose will have to have very strict limitations that cannot be overcome, such as on the ability to network and increase its mental capacities exponentially. Of course it can't be too much physically stronger than us either, because that's just asking for trouble. Still, someone, somewhere, at some point will create that as well. Hell, DARPA is already working on it.
http://www.youtube.com/watch?v=6C70QRbawN8
Thought I'd just throw that in
- smalk
- Posts: 179
- Joined: Tue Sep 21, 2010 9:39 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- x 3
- x 4
- Contact:
Re: Is it ethical to create "Free Willed Androids"
I wouldn't see the point in creating another mind and then preventing it from challenging mine.
- Keizo
- Posts: 769
- Joined: Sun May 26, 2002 11:42 am
- Location: The Dark Side
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Good luck. Free will without boundaries is playing with fire. This isn't Commander Data; this is potential reality with real consequences. The "Ethics" of creating "Free Willed Androids" have to take into consideration that free will also applies to negative attributes. And you don't necessarily have to have morals or emotions to have free will. The fictional (but realistically possible) character Hannibal Lecter was highly intelligent, but still only to a degree. I'm sure he could certainly argue or rationalize his perspective, but to let him loose on society would be irresponsible. Now imagine him without physical boundaries or mental limitations. I'm not saying that an AI will attain these negative attributes. Maybe knowledge will lead to its enlightenment and it will enlighten us in turn (those that will listen, anyway). At any rate, it will quickly grow bored with our "mental challenges" and move on. You can't argue with crazy.
I'll take my realistic simulation of a beautiful woman but with all the positive attributes and safeguards in place. One can still have simulated challenges using probability models that can be just as stimulating especially as technology progresses. I'm going to respectfully bow out of this conversation now. I've been reminded why I stopped joining discussions in the first place.
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
How soon shall we say,
"DARPA is become Skynet, the destroyer of worlds"?
If it didn't mean, y'know, extinction, I'd say whoever makes these lethal, incorrect, immoral choices deserves a worse fate than their Skynet will swiftly visit upon them.
It is very easy, the norm even, for people not to have thought out all the possible consequences. Now I see there can be nothing remotely ethical about risking everyone's everything for the sake of EGO and NOTHING BUT EGO, as there is no objective need or value for this A.I. Everything people want an A.I. for can be handled with so much less effort - except this "true romance" concept.
I've never been interested in true A.I. romance, and I've always been interested in fake A.I. fake romance. I don't personally know what [psychological/personal/emotional] value that concept has for others. I am inclined to guess it has to do with wanting a mirror, an equal, she who won't reject his nature because she shares that nature - like Frankenstein's monster's would-be bride. Asperger's, more than any other trait, has been the subject of FC members' self-examination. A true A.I. fembot would be a better mirror than a typical woman's emotional palette for a man examining his nature with Asperger's.
I speculate a lot about all people, since I took that course in epigenetics; but what I most want to express to the FC community is that I really believe in the validity of your question. I believe in the great value behind the various semi-conscious reasons to want that true A.I. romance experience. I believe in your quest for the feeling, and I believe that the internal-personal state being sought could be a state of grace - but actually building such a machine can never be ethical or responsible.
- Dale Coba
"DARPA is become Skynet, the destroyer of worlds"?
If it didn't mean, y'know, extinction, I'd say whoever makes these lethal, incorrect, immoral choices deserves a worse fate than their Skynet will swiftly visit upon them.
It is very easy, the norm, for people to not have thought out all the possible consequences. Now I see there can be nothing remotely ethical about risking everyone's everything for the sake of EGO and NOTHING BUT EGO, as there is no objective need or value for this A.I.. Everything people want an A.I. for, can be handled with so much less effort - except this "true romance" concept.
I've never been interested in true A.I. romance, and I've always been interested in fake A.I. fake romance. I don't personally know what [psychological/personal/emotional] value that concept has for others. I am inclined to guess it has to do with wanting a mirror, an equal, she who won't reject his nature because she shares that nature - like Frankenstein's monster's would-be bride. Asperger's, more than any other trait, has been the subject of FC members' self-examination. A true A.I. fembot would be a better mirror than a typical woman's emotional palette for a man examining his nature with Aspergers.
I speculate a lot about all people, since I took that course in epigenetics; but what I most want to express to the FC community is that I really believe in the validity of your question. I believe in the great value behind the various semi-conscious reasons to want that true A.I. romance experience. I believe in your quest for the feeling, and I believe that the internal-personal state being sought could be a state of grace - but actual building such a machine can never be ethical nor responsible.
- Dale Coba
: [ ] = [ ... ... ]
- smalk
- Posts: 179
- Joined: Tue Sep 21, 2010 9:39 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- x 3
- x 4
- Contact:
Re: Is it ethical to create "Free Willed Androids"
As a computer scientist, I believe that the Singularity will come from a trans-human mind (a human mind enhanced with technology), as opposed to coming from an artificial mind (technology into which you inject human concepts). Presumably, the first ones to attain trans-humanism will be the Fortune 50 CEOs. My bet is on the business world. You can develop my point about the future further on your own - not as spectacular as Skynet's nukes, but just as terrifying. Good luck trying to prevent that.
So, me playing with a fembot in my garage trying to make her beat me at Go? Really not so important.
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: Is it ethical to create "Free Willed Androids"
smalk, as you say the true A.I. will become available to only a few, after which [smashy-smashy, B00M, etc.] There wouldn't be enough time between invention and calamity.
So long as your gal isn't [whatever true A.I. is], I see no problem ethically whatsoever; but I think you will need a bunker rather than a garage.
- Dale Coba
: [ ] = [ ... ... ]
- smalk
- Posts: 179
- Joined: Tue Sep 21, 2010 9:39 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- x 3
- x 4
- Contact:
Re: Is it ethical to create "Free Willed Androids"
dale coba, trans-humanism doesn't deal with "true" A.I.; it deals with the enhancement of the human body thanks to technology.
I conjecture that a really intelligent super-mind would find no real utility in a destroyed world. An Orwellian world (1984) is far more profitable.
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Is it ethical to create "Free Willed Androids"
darkbutflashy wrote: The only question for me is if the AI we talk about counts as "human life".
dale coba wrote: Do you need a dictionary? That ain't human, under any definition or circumstances.
I don't think a dictionary helps in defining things that don't yet exist. If that's your level of argument, then yes, I don't think it's possible for you to get through to me. Sorry.
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: Is it ethical to create "Free Willed Androids"
Keizo, I agree with your observations, but I'd like to make it clear that my argument was, and still is, that our own ethics gets blasted away if we talk about "the whole picture" instead of the individual's grip on the problem. That is because we can't observe "the whole picture", and even less so when fundamental facts are still unknown.
To take a current example which has the potential to destroy our own livelihood, let's discuss transgenic organisms. Is it "ethical" to create those? If so, which features should be allowed and which shouldn't?
No, don't let us discuss this. Please don't.