Is it ethical to create "Free Willed Androids"

Posts: 1855
Joined: Wed Jun 05, 2002 9:05 pm
Location: Philadelphia

Re: Is it ethical to create "Free Willed Androids"

Postby dale coba » Tue Jul 15, 2014 5:15 am

For our purposes, "free will" is a messy shorthand, left over from when priests debated some nonsense about whether God setting everything into motion meant people were without choice, God's hand puppets.

Svengli wrote:On the other hand, it seems like it might be possible to create an AI in the fashion you'd create a child - with love and devotion, in such a way that this entity would become a part of a community and wouldn't be left with just "basic skills plus freedom".

Does the A.I. get a right to create children? To design them?
Ten thousand generations of grandchildren, in a few weeks?

You can't contain it.
You can only avoid creating it.

- Dale Coba

Posts: 783
Joined: Mon Dec 12, 2005 6:52 am
Location: Out of my mind

Re: Is it ethical to create "Free Willed Androids"

Postby darkbutflashy » Tue Jul 15, 2014 6:34 am

That's a moot point. The crucial thing about "Ten thousand generations of grandchildren, in a few weeks?" isn't ethics but resources. If they don't take up too many resources, they aren't a threat. If they do, they are easily toppled by cutting off those resources. Most likely they will do it to themselves; it happens in nature every day.

Posts: 769
Joined: Sun May 26, 2002 11:42 am
Location: The Dark Side

Re: Is it ethical to create "Free Willed Androids"

Postby Keizo » Tue Jul 15, 2014 7:42 am

I can truly understand the very heavily romanticized notion of having an artificial partner with all the curiosity, charisma, spontaneity, and emotional reciprocation of a human. Why isn't a very good simulation enough? Yes, having another being MUTUALLY choose YOU is one of the greatest affirmations one can feel and a source of inspiration, solace, and happiness. What you are suggesting is that this romanticized artificial being will actually choose to stay with you. While that may be possible, who is to say that this entity will not see that there are others out there who offer more opportunity and resources? After you've put your heart and soul (and time and money) into this relationship, this being may CHOOSE to simply walk away from you, since it doesn't have the innate biological drives that we do or a truly binding agreement. Congratulations. It has free will.

Meaning no disrespect, I also find the idea of a transformed human disturbing, even though that is currently a possibility in a potential impending singularity scenario. How do you know that this person will not go insane without normal stimuli, or be overwhelmed by mentally existing on a plane of cyberspace? How do you know that newfound "powers" of mentally controlling other computers, understanding code, using their processing power to add to their own, etc., won't corrupt that person? It is certainly more likely that someone already corrupt, such as an ultra-rich and ruthless Wall Street type, will be among the first able to afford the chance at immortality with an artificial avatar that can exist simultaneously in a body and a network. What fail-safes do you propose to contain that ego?

At least a baby is self-contained and equal to us. When one argues that it is just as irresponsible to create a potential Hitler, remember that another person still has to grow and convince others to follow him/her, or become a serial killer after years of development. What Dale argues, "Ten thousand generations of grandchildren, in a few weeks," is more about software than about the hardware that was the counterpoint. Even now, algorithmic processing can create thousands of paths to a desired end in a matter of seconds. Essentially, it will go through "generations" of test models to find the most efficient answer. Imagine a new generation that is even more advanced being able to create something (or the model for something) more advanced than itself far faster than we ever could, and in turn that new generation continuing the advancement of the previous one, and so on, and so on. Perhaps we will need an egocentric transformed super-computing megalomaniac to stop it.
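To make the "generations of test models" idea concrete, here's a minimal toy sketch in Python (my own illustration, not anyone's actual system): each round keeps the fittest candidates, spawns mutated copies of them, and repeats, so thousands of candidate "designs" get evaluated in well under a second. The fitness function and the numbers are made up purely for demonstration.

import random

def fitness(candidate):
    # Toy objective: how close the candidate is to an arbitrary target value.
    return -abs(candidate - 42.0)

def evolve(generations=1000, pop_size=50, mutation=1.0):
    # Start from a random population of candidate "models".
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, discard the rest.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Each survivor spawns a slightly mutated "child" for the next generation.
        children = [c + random.gauss(0, mutation) for c in survivors]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges on roughly 42.0 after a thousand "generations"

Real systems are vastly more sophisticated, but the generational structure is the same.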

Posts: 635
Joined: Tue Dec 31, 2013 12:58 pm

Re: Is it ethical to create "Free Willed Androids"

Postby N6688 » Tue Jul 15, 2014 9:46 am

Well, we kind of already saw the answer to this question in the show Äkta människor (Real Humans).
In that show, free will was FORCED onto androids, and it went almost completely wrong.
Some of them resent humans (Niska, Bea, Rick) and believe themselves to be better than us,
while the others barely hold on to their sanity.
In my humble opinion, let machines just stay machines.
Only trouble would come of this.
"Robot wives have needs, too"
Goku, Dragonball fighterZ 2017

Posts: 783
Joined: Mon Dec 12, 2005 6:52 am
Location: Out of my mind

Re: Is it ethical to create "Free Willed Androids"

Postby darkbutflashy » Tue Jul 15, 2014 7:53 pm

Keizo, Dale,

my point is, there is no way to stop someone from doing something just by ruling it out. You have to enforce a rule, and I doubt anyone interested in the topic could be held back by the measures we would accept to take within our own "ethics". So the original question doesn't make too much sense to me. On the contrary, I think any "ethics" which is willing to sacrifice a single human life for a claim of any kind (the most dubious being the ones citing the "future of mankind") is fundamentally wrong.

The only question for me is whether the AI we talk about counts as "human life". I have no problem sacrificing an AI that doesn't qualify, for any reason or none at all. I would take care of an AI I observed to be as intelligent and devoted as my pet. I certainly would protect an AI which is intelligent enough to qualify as a human (much more than passing the Turing test). And if it were my own creation and everyone else insisted on deleting it out of fear and ignorance, it's a safe bet I'd take any measure to save it.

Posts: 1855
Joined: Wed Jun 05, 2002 9:05 pm
Location: Philadelphia

Re: Is it ethical to create "Free Willed Androids"

Postby dale coba » Wed Jul 16, 2014 3:58 am

darkbutflashy wrote:The only question for me is whether the AI we talk about counts as "human life".

Do you need a dictionary? That ain't human, under any definition or circumstances.

If that's your idea of a question, I don't think any of my answers could get through your filters.

- Dale Coba

Posts: 769
Joined: Sun May 26, 2002 11:42 am
Location: The Dark Side

Re: Is it ethical to create "Free Willed Androids"

Postby Keizo » Wed Jul 16, 2014 4:36 am

I will yield to you on that point: obviously someone, somewhere, at some point will create such a being. We can only hope that it will be the right type of person who will guide this being to embrace consideration and cooperation. Even Oppenheimer deeply regretted his role in creating the atomic bomb, as necessary an evil as it was at the time. Germany was on the brink of creating its own. So, obviously, we have to get there first. While I think it is honorable that you would make such sacrifices for this AI, the assumption that a free-willed being, which was the original subject of this argument, will remain as loyal as a pet is not only condescending to that being but flawed, because its potential is so great. An AI that is as intelligent, and therefore as curious, as a human has far more options and abilities to increase its intelligence than we do, and can thus surpass us at a far greater rate than we can contain. It can even end up creating a superior physical body and replacing us at the top of the proverbial food chain. I know a lot of people here may think that's a good thing, but other humans suppress us as it is. Also, when you speak of resources, an AI can survive in a wasteland far better than we can. That is why the argument against true free will has to be made.

Basically, we would be creating a being with the intelligence of a god, but one that can also replicate itself and create its own safeguards. We can only hope that such a being will evolve towards the benevolent end of the spectrum instead of the abuse of power. We can only hope that it will decide to aid us instead of rule us in its idea of perfection. We are certainly not perfect, so we may be too flawed and dangerous to be part of its world. Or it may simply ignore us since we are so beneath it. But I doubt it will remain a loving and loyal partner to only one person. Just hope that it will let you keep a dumbed-down avatar of itself out of kindness, pity, or gratitude while it's off exploring greater things. Obviously this avatar won't have free will, since it will only be part of a greater entity that is controlling it while simultaneously living its own form of life.

The being you propose will have to have very strict limitations that cannot be overcome, such as on its ability to network and increase its mental capacities exponentially. Of course it can't be too much physically stronger than us either, because that's just asking for trouble. Still, someone, somewhere, at some point will create that as well. Hell, DARPA is already working on it.

http://www.youtube.com/watch?v=6C70QRbawN8

Thought I'd just throw that in ;)

Posts: 149
Joined: Tue Sep 21, 2010 9:39 am

Re: Is it ethical to create "Free Willed Androids"

Postby smalk » Wed Jul 16, 2014 5:52 am

I wouldn't see the point in creating another mind and then preventing it from challenging mine.

Posts: 769
Joined: Sun May 26, 2002 11:42 am
Location: The Dark Side

Re: Is it ethical to create "Free Willed Androids"

Postby Keizo » Wed Jul 16, 2014 6:10 am

Good luck. Free Will without boundaries is playing with fire. This isn't Commander Data; this is potential reality with real consequences. The "Ethics" of creating "Free Willed Androids" have to take into consideration that Free Will applies to negative attributes as well. And you don't necessarily have to have morals or emotions to have free will. The fictional (but realistically possible) character Hannibal Lecter was highly intelligent, but still only to a degree. I'm sure he could certainly argue or rationalize his perspective, but to let him loose on society would be irresponsible. Now imagine him without physical boundaries or mental limitations. I'm not saying that an AI will attain these negative attributes. Maybe knowledge will lead to its enlightenment and it will enlighten us in turn (those who will listen, anyway). At any rate, it will quickly grow bored with our "mental challenges" and move on. You can't argue with crazy.

I'll take my realistic simulation of a beautiful woman, but with all the positive attributes and safeguards in place. One can still have simulated challenges, using probability models, that can be just as stimulating, especially as technology progresses. I'm going to respectfully bow out of this conversation now. I've been reminded why I stopped joining discussions in the first place.

Posts: 1855
Joined: Wed Jun 05, 2002 9:05 pm
Location: Philadelphia

Re: Is it ethical to create "Free Willed Androids"

Postby dale coba » Wed Jul 16, 2014 6:41 am

How soon shall we say,
"DARPA is become Skynet, the destroyer of worlds"?

If it didn't mean, y'know, extinction, I'd say whoever makes these lethal, incorrect, immoral choices deserves a worse fate than the one their Skynet will swiftly visit upon them.

It is very easy, the norm even, for people not to have thought out all the possible consequences. Now I see there can be nothing remotely ethical about risking everyone's everything for the sake of EGO and NOTHING BUT EGO, as there is no objective need or value for this A.I. Everything people want an A.I. for can be handled with so much less effort, except this "true romance" concept.

I've never been interested in true A.I. romance, and I've always been interested in fake A.I. fake romance. I don't personally know what [psychological/personal/emotional] value that concept has for others. I am inclined to guess it has to do with wanting a mirror, an equal, she who won't reject his nature because she shares that nature, like Frankenstein's monster's would-be bride. Asperger's, more than any other trait, has been the subject of FC members' self-examination. A true A.I. fembot would be a better mirror than a typical woman's emotional palette for a man with Asperger's examining his nature.

I speculate a lot about all people, since I took that course in epigenetics; but what I most want to express to the FC community is that I really believe in the validity of your question. I believe in the great value behind the various semi-conscious reasons to want that true A.I. romance experience. I believe in your quest for the feeling, and I believe that the internal-personal state being sought could be a state of grace, but actually building such a machine can never be ethical or responsible.

- Dale Coba
