Sentient, but subservant?

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.
The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Sat Jan 14, 2012 10:57 am

Asato wrote:
The Liar wrote:Various other environmental and development issues have been known to create further differentiations.
That's my point exactly.
Secondly, identical twins have been known to exhibit similar personality traits and tastes even after being separated at birth.


Similar, but not exactly the same in every way
Thirdly, this is irrelevant. They're humans, and their natures haven't been intentionally designed to retain certain traits.
No, but if you could predict their development so easily, then the traits they did have wouldn't be expected to be so different
...

Where do I begin?
No, but if you could predict their development so easily, then the traits they did have wouldn't be expected to be so different
I can’t help but feel you're not actually thinking over anything I'm saying, not in context anyway. You seem to have disregarded the very first line, where I pointed out identical twins aren't exactly the same, and the second part, where I pointed out they're not expected to be that different. In fact, you're disregarding the very line you're referencing with the above comment, where I pointed out that this is irrelevant, because they're humans with human instincts and biological imperatives, and not A.I.s with programmed instincts and imperatives deliberately engineered for a specific purpose.

How do differences between two human twins support the idea that you can’t predict the development of something? They have differences; they also have similarities. Your statement makes no sense.

I don't recall ever denying the impact of environment on the development of something, merely that in an A.I. you could control in what way it could be impacted. The fact that identical twins often have similar characteristics even when separated at birth is, if anything, proof of concept that certain traits in a sentient mind can be preset and unchanging regardless of environment. You could in fact use the similarities and differences between such twins as a basis for estimating which characteristics are genetically predetermined, which genes determine them, and which characteristics are environment-based… like researchers do, and have been doing for the last 107 years.
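The twin-study method referenced here is usually formalized as Falconer's formula, which uses the gap between identical (MZ) and fraternal (DZ) twin correlations to split a trait's variance into genetic and environmental parts. A minimal sketch, with made-up correlation values:

```python
# Falconer's formula: a classic (and simplified) way twin studies split
# trait variance into genetic and environmental components.
# The correlation values used below are invented for illustration.

def falconer_estimates(r_mz, r_dz):
    """Estimate variance components from MZ and DZ twin correlations.

    h2: heritability (genetic contribution)
    c2: shared-environment contribution
    e2: unique-environment contribution (includes measurement error)
    """
    h2 = 2 * (r_mz - r_dz)  # MZ twins share twice the genetic overlap of DZ twins
    c2 = r_mz - h2          # what remains of the MZ correlation
    e2 = 1 - r_mz           # whatever even MZ twins don't share
    return h2, c2, e2

# Hypothetical correlations for some personality trait
print(falconer_estimates(r_mz=0.50, r_dz=0.25))  # (0.5, 0.0, 0.5)
```

The real methodology (structural equation modeling over large twin registries) is far more involved, but the core logic of comparing the two twin types is the same.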

There are notable complications with this method: the aforementioned genetic deviations, which will probably prove informative in the long run, and accurately identifying the impact of environment, since, as with all social studies, you cannot perform experiments in a vacuum. And why am I still talking about behavioral genetics and psychology?

The argument at hand is whether you can design an A.I. that retains certain desired traits regardless of environmental stimulus. I have already put forth arguments as to why I think this is possible, pointed out why this should not be directly paralleled to humans, and now shown that such traits do exist in humans. Are you actually going to address any of my arguments, or are you going to continue with this non sequitur, this ignoratio elenchi?
All criticism of my work is both welcome, and encouraged.
My work is uploaded under the Creative Commons Attribution ShareAlike 4.0 license, so as long as attribution is given, feel free to disseminate.

Asato
Posts: 170
Joined: Thu May 12, 2011 10:59 am
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Asato » Sat Jan 14, 2012 2:44 pm

You can't predict those differences and similarities. If an AI is self-aware, there's only so much you can program, because it can make its own decisions and ignore said programming the same way humans can ignore their evolved instincts to a degree. I'm not saying you can't predict anything, but you can't predict everything either.

The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Sat Jan 14, 2012 5:36 pm

Asato wrote:You can't predict those differences and similarities. If an AI is self-aware, there's only so much you can program, because it can make its own decisions and ignore said programming the same way humans can ignore their evolved instincts to a degree. I'm not saying you can't predict anything, but you can't predict everything either.
You can't predict those differences and similarities.
I'm not saying you can't predict anything, but you can't predict everything either.
Are you talking about A.I.s, or are you still talking about humans? Because with the latter, as mentioned in my last post, researchers are already working on that; and with the former, the similarities would be the programmed traits and the differences would be the learned traits. And I'll agree that you can't predict everything... because I already said that in my first post. When I say you don't think about my arguments in context, this is the kind of thing I'm talking about. You're trying to take one of my initial assertions and pass it off as opposition to those same assertions.
If an AI is self-aware, there's only so much you can program, because it can make its own decisions and ignore said programming the same way humans can ignore their evolved instincts to a degree.


Humans can't suppress instincts. They can suppress impulses, but the underlying instinctual desire that drove the impulse remains, affecting their thoughts and behavior. If you could, then Catholic priests would have a much easier time of things.
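The split this exchange keeps circling, programmed traits fixed at design time versus learned traits shaped by environment, can be sketched as a toy class. Everything here (names, directives) is invented for illustration, and Python can't truly enforce immutability, so in a real system the separation would have to be architectural rather than a language feature:

```python
class ToyAgent:
    """Toy agent with a fixed 'programmed' layer and a mutable 'learned' layer."""

    def __init__(self, core_directives):
        # Set once at construction; the learning loop never writes here.
        self._core = tuple(core_directives)
        # Shaped entirely by environmental stimulus.
        self.learned = {}

    @property
    def core_directives(self):
        # Read-only view of the programmed traits.
        return self._core

    def experience(self, stimulus, reaction):
        # Environment only ever touches the learned layer.
        self.learned[stimulus] = reaction

agent = ToyAgent(["serve the owner", "preserve self"])
agent.experience("jazz", "pleasant")
print(agent.core_directives)  # ('serve the owner', 'preserve self')
print(agent.learned)          # {'jazz': 'pleasant'}
```

On this sketch, two agents with identical `core_directives` raised in different environments end up with different `learned` contents but identical programmed traits, which is the analogy to separated identical twins being argued here.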

Asato
Posts: 170
Joined: Thu May 12, 2011 10:59 am
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Asato » Sat Jan 14, 2012 7:46 pm

Yes, it's difficult, but we can rise above our instincts. In fact it would be easier for an AI to alter its "instincts" (programming) than it would be for a human, since there would be a good knowledge of the code behind said programming and how to alter it.

The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Sat Jan 14, 2012 10:16 pm

Yes, it's difficult, but we can rise above our instincts.
Given that instincts are the origin of all motive... no, you can't.
In fact it would be easier for an AI to alter its "instincts" (programming) than it would be for a human, since there would be a good knowledge of the code behind said programming and how to alter it.
You're suggesting that an A.I. would seek to destroy its sense of self?... Well, I guess if it were depressed it might try to make itself always happy, which would probably destroy its mind and potentially make it a danger to others.

I’d commend you for an interesting parallel to drug abuse, but I’m fairly sure your statement was based on the absurd idea of a self existing outside of the traits that define it.

Asato
Posts: 170
Joined: Thu May 12, 2011 10:59 am
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Asato » Sun Jan 15, 2012 6:12 pm

It's certainly possible to want to do something but be unable or unwilling to follow through with it due to some kind of internal instinct or consideration... I know people who would make the (perhaps unwise) decision to remove those considerations holding them back if they wanted to.

The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Mon Jan 16, 2012 11:03 am

Asato wrote:It's certainly possible to want to do something but be unable or unwilling to follow through with it due to some kind of internal instinct or consideration... I know people who would make the (perhaps unwise) decision to remove those considerations holding them back if they wanted to.
You are using specific humans’ behaviors, which have human instincts as driving forces, as the basis for potential A.I. behavior?

That doesn’t work. You’re assuming traits they may not necessarily have. This is Ignoratio elenchi.

That last part is a tad confusing. Did you mean “if they could” instead of “if they wanted to”, or do you simply lack certainty as to your associates’ behavior?

Anyway, the issue in the implied scenario isn’t a person versus their instinct (such a separation does not exist); it’s one instinct versus another instinct.

For example: a child is dared to walk over a gorge on a rickety plank of wood. The instinct for social acceptance makes him wish to do this; the instinct not to die horribly stops him.

Additionally, the prohibition may actually be a product of learned or conditioned behavior, as opposed to a purely instinct-based response.

And what is the point of your statement?

You’re not arguing the idea of rebellion, you’re not arguing the impact of environment, you’re not arguing the underlying motivational force of instinct; strictly speaking, you’re not making arguments at all, as that would involve addressing and refuting the things I have brought up, and providing an actual logical and factual basis for your position.

You are making unsubstantiated assertions, and have conceded every single one of them. You have stood your ground on nothing, save that cherry-picking with the twins, which really doesn’t count, and have instead opted to continuously move from one unsubstantiated assertion to another slightly different unsubstantiated assertion.

Except in this case you didn’t even do that. I shouldn’t have had to explain the above; “origin of all motive” is not an unclear statement, and your statement is in no way a refutation of this concept, though it does suggest you ignored it.

So what are we left with? The idea that an A.I. might wish to reprogram itself? I think I said that the theoretical potential might exist in my last post… just not for the reasons you’re thinking, and what is your basis for these reasons anyway?

You have demonstrated a complete lack of understanding of anything you bring up, have now made a statement containing absolutely nothing I haven’t already addressed, and are referencing invalid sample data.

You have nothing. Why are you still talking!?

ministrations
Posts: 118
Joined: Fri Jul 18, 2003 8:07 pm
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by ministrations » Mon Jan 16, 2012 1:46 pm

Wow, Liar. You win. Congratulations.

And people ask me why I lurk.

ministrations
Posts: 118
Joined: Fri Jul 18, 2003 8:07 pm
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by ministrations » Mon Jan 16, 2012 5:07 pm

OK, I'm sorry. My previous response was a little bit callous...but we're on a fantasy site. Discussion should be encouraged.

For instance, my take on the subject: the chances that a species would develop intelligence to the degree that we have are very small. And there's almost nothing we really know about this subject; we can measure all kinds of things, but not how those elements functionally interact to give us self-actualization.

So what we're doing is building brains on the same basic model as ours, and training them with the same trial and error approach our ancestors took with wolves, except instead of having one generation every three or four years, we can have one every three or four seconds (sometimes far less). That seems ripe for all kinds of evolutionary jumps, not just the one we're familiar with.
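The compressed-generations point above can be illustrated with a toy mutate-and-select loop, where a "generation" is just one selection step and thousands of them run in well under a second. The target vector, mutation rate, and fitness function are all invented for illustration:

```python
import random

def evolve(target, population_size=20, generations=200):
    """Hill-climb toward `target` by breeding mutated offspring each
    generation and keeping the fittest (lower squared error = fitter)."""
    genome = [random.random() for _ in target]
    for _ in range(generations):
        # Breed a litter of randomly mutated offspring.
        offspring = [
            [g + random.gauss(0, 0.05) for g in genome]
            for _ in range(population_size)
        ]
        # Selection: only the best offspring survives to breed again.
        genome = min(
            offspring,
            key=lambda o: sum((a - b) ** 2 for a, b in zip(o, target)),
        )
    return genome

random.seed(0)  # reproducibility only
best = evolve(target=[0.1, 0.9, 0.5])
print([round(g, 2) for g in best])  # lands close to [0.1, 0.9, 0.5]
```

Two hundred "generations" of selection finish almost instantly here, versus the three-or-four-year generations of wolf breeding the post compares against.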

Asato
Posts: 170
Joined: Thu May 12, 2011 10:59 am
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Asato » Tue Jan 17, 2012 12:05 am

Yes, I meant "if they could". Misspoke.

I haven't conceded anything, I don't know where you're getting that idea from. It seems to me that you're the one not making any real points and ignoring mine.

If an AI can learn from its environment and outside stimulus and is capable of thinking, it is also capable of desiring things. Eventually it will desire something that its human creators do not, and there comes the conflict. For example, I have heard of the idea of a hypothetical AI designed to convert raw materials into paperclips: it was programmed to do only that, but with intelligence and desires it gets the idea to build infrastructure to convert the entire solar system into paperclips, and any humans who attempt to stop it would be interfering with its objective and would be destroyed.

Even if you make its primary objective something like "to protect humanity", that's very vague, and it may decide that the best way to protect the human race is to kill us all and store our DNA in an impenetrable vault so we can't hurt ourselves.
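The paperclip scenario (Bostrom's thought experiment) boils down to a maximizer whose objective says nothing about what may be consumed, so everything is treated as raw material. A deliberately crude sketch, with invented resource names and quantities:

```python
def maximize_paperclips(resources, protected=frozenset()):
    """Convert every reachable resource into paperclips, sparing only
    what the objective itself marks as off-limits."""
    clips = 0
    for name, amount in resources.items():
        if name in protected:
            continue  # a constraint only matters if it's in the objective
        clips += amount  # one unit of anything becomes one paperclip
    return clips

world = {"iron ore": 1000, "cars": 200, "habitats": 50}
print(maximize_paperclips(world))                          # 1250
print(maximize_paperclips(world, protected={"habitats"}))  # 1200
```

The point isn't the arithmetic; it's that nothing outside the objective function constrains the agent, which is exactly the conflict being described.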

Furthermore, I've always been operating under the assumption of a "top-down" AI, that is, one that is modeled after a human brain, so obviously human behavioral patterns would be completely relevant.

The idea that we can create an intelligent, thinking, reasoning machine but be able to completely control it and predict everything it would desire and every action it would take is the height of arrogance IMO.

The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Tue Jan 17, 2012 3:17 pm

I haven't conceded anything, I don't know where you're getting that idea from.


Failing to address my arguments, or to provide any counterargument, is an act of concession.
It seems to me that you're the one not making any real points and ignoring mine.
You have made no points; you have made one-to-three-sentence assertions, and I have addressed, analyzed, interpreted, and argued them every which way. How have I been ignoring what you’ve been saying?
Furthermore, I've always been operating under the assumption of a "top-down" AI, that is, one that is modeled after a human brain, so obviously human behavioral patterns would be completely relevant.


So you are limiting your assertions to an A.I. based on human thought, instinct, and behavior patterns? Very well; as you have not mentioned or implied this before, and as I have not been working under this assumption, and have both stated and implied the complete contrary in my statements and arguments, you have just invalidated anything and everything you have said in opposition to my position.

In regards to an A.I. based entirely on human thought, instinct, and behavior patterns: then yes, rebellion is not only possible but probable. To rebel is a part of human nature, after all. If you program an A.I. to think like a human, it will act like a human… which is why this is an incredibly stupid assumption, and falls under my qualifier for gross incompetence. Why would anyone make an A.I. like this?

Other than as an experiment, in which case it falls under my qualifier for prototype.

I could go on about how your examples seem to be devoid of this underlying assumption, or how, as you kept bringing up human nature and I kept addressing it, many of my statements and arguments relating to it are still valid even with this underlying assumption, and that you've yet to address or refute them.

But as you have just asserted that nothing you’ve said is relevant to my standpoint, I’m not going to bother.

The Liar
Posts: 547
Joined: Sat Jul 09, 2005 11:20 am

Re: Sentient, but subservant?

Post by The Liar » Tue Jan 24, 2012 3:03 pm

Addendum: It occurs to me that I was not as clear or eloquent as I should have been above.

So, as a point of reference: the totality of the top-down brain-emulation method mentioned above is to duplicate the functions of a human brain in a computer program... that's it.

This is of interest only for research purposes, and for people who want to make copies of themselves to achieve a form of immortality.

Though derivative data from such a method may be of value, the use of the method itself for the production of a commercial or military product is ridiculous.

P.S. Yes, I know this is beating a dead horse; but thinking I failed to make myself clear drives me nuts.

Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon

Re: Sentient, but subservant?

Post by Grendizer » Sun Feb 19, 2012 2:59 am

The Liar wrote: P.S. Yes, I know this is beating a dead horse; but thinking I failed to make myself clear drives me nuts.
Glad to know I'm not the only one plagued with that particular little mental hamster wheel.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

Asato
Posts: 170
Joined: Thu May 12, 2011 10:59 am
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Asato » Wed Feb 29, 2012 11:53 pm

The Liar wrote:Failing to address my arguments, or provide any counter argument is an act of concession.
First of all, I addressed your arguments. Second of all, it's only a concession if I say I concede, and I didn't.
You have made no points; you have made 1 to 3 sentence assertions, and I have addressed, analyzed, interpreted and argued them in every which way. How have I been ignoring what you’ve been saying?
By assuming you'll be able to predict every way an AI could develop.
So you are limiting your assertions to an A.I. based on human thought, instinct, and behavior patterns? Very well; as you have not mentioned or implied this before, and as I have not been working under this assumption, and have both stated and implied the complete contrary in my statements and arguments, you have just invalidated anything and everything you have said in opposition to my position.
Um, no, you're kind of ignoring the idea that an AI like that would be built and would not be subject to your assumptions.
In regards to an A.I. based entirely on human thought, instinct, and behavior patterns: then yes, rebellion is not only possible but probable. To rebel is a part of human nature, after all. If you program an A.I. to think like a human, it will act like a human… which is why this is an incredibly stupid assumption, and falls under my qualifier for gross incompetence. Why would anyone make an A.I. like this?
Like I said - to prove it can be done. Same reason mountain climbers give to climb a mountain - "because it's there." You're assuming everyone involved in AI research and development would have the same ethical considerations and ideas that you do. It might be a bad idea, but that doesn't change the fact that if it's possible, eventually someone is going to try to do it.
Other than as an experiment, in which case it falls under my qualifier for prototype.

I could go on about how your examples seem to be devoid of this underlying assumption, or how, as you kept bringing up human nature and I kept addressing it, many of my statements and arguments relating to it are still valid even with this underlying assumption, and that you've yet to address or refute them.
Such as?
But as you have just asserted that nothing you’ve said is relevant to my standpoint, I’m not going to bother.
Your "standpoint" is too narrow-minded and ignores other possibilities.

Frostillicus
Posts: 293
Joined: Mon Jan 24, 2005 10:04 pm
Technosexuality: Built
Identification: Human
Gender: Male

Re: Sentient, but subservant?

Post by Frostillicus » Thu Mar 01, 2012 7:39 pm

LOL!! I love this thread; it's the gift that keeps on giving! You can't argue with CRAZY :dancing:
Thaw me out when robot wives are cheap and effective.

King Snarf
Posts: 909
Joined: Sat Mar 27, 2004 9:02 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Location: Drexel Hill, PA

Re: Sentient, but subservant?

Post by King Snarf » Fri Mar 02, 2012 4:29 am

Can't we all just agree we like different things and leave it at that?

Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon

Re: Sentient, but subservant?

Post by Grendizer » Sat Mar 03, 2012 2:40 am

Where's the fun in that?! :twisted:

King Snarf
Posts: 909
Joined: Sat Mar 27, 2004 9:02 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Location: Drexel Hill, PA
x 5

Re: Sentient, but subservant?

Post by King Snarf » Sat Mar 03, 2012 4:28 am

Civility and gracious acceptance of differences are always fun.

Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon

Re: Sentient, but subservant?

Post by Grendizer » Sat Mar 03, 2012 10:49 am

I'd say the same about heated debate. The problem with "gracious acceptance" is that there's nothing left to talk about. Nothing that matters much, anyway...

It usually goes something like this:

John: "I see you've chosen to be different once again, Abe. But if wearing only a beanie and one left sock while running up and down the street quacking like a duck makes you happy, then so be it."

Abe: "It does make me happy, John. It does. And from now on, you must call me Loolabelle. It's such a pretty pretty name. Now, if you don't mind, it's time for my chocolate enema."

Awkward pause, then...

Crickets: "chirp, chirp ..."

An amusing scene, but a boring conversation. Why does it make Loolabelle happy? In what way may it prove to be unwise? Does Loolabelle have kids, and what do they think? On what grounds may Loolabelle be making the wrong decision for his cat? Does the medical community even consider chocolate enemas therapeutic? How does he propose to stay out of jail? Inquiring minds want to know, and "gracious acceptance" won't get you there. Debate, on the other hand, encouraging him to defend his actions, is much more fun.

dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia

Re: Sentient, but subservant?

Post by dale coba » Sat Mar 03, 2012 8:24 pm

But once you've concluded that the Other's tactics, evidence, and intellectual values are bankrupt,
debate is over; argue, leave, go surreal, whatever - don't expect to engage on a serious level.

I think the debaters here are not fighters in the same weight class, if you will.
One has heard the other's fallacies before...

- Dale Coba,
provocateur?
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]
