Sentience, Malfunctions, and ASFR

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.
User avatar
D.Olivaw
Posts: 265
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 103
x 64
Contact:

Sentience, Malfunctions, and ASFR

Post by D.Olivaw » Sat Mar 22, 2008 12:36 pm

I've known I was into ASFR since I was in my early teens (I'm a college student now), and I've been wondering about it from an intellectual standpoint for roughly the same amount of time. I am especially interested in the interplay between the level of sentience in a gynoid and its effect on what types of malfunctions (if any) turn us on in that context.

I don't think that anyone who comes to the discussion without deeply irrational preconceptions will deny that a fully sentient AI is a person, deserving of all the rights we reserve for organic sentients (like us). Equally so, I don't think that anyone would deny that a completely nonsentient gynoid, basically just a preprogrammed machine, is not a person, but property (though their owners might lavish them with lots of care and love, rather like some do with their cars today).

That said, I think that the main ethical problem in real life (IMHO) is going to be that we won't go from nonsentient appliance to sentient person in one step, but will (hopefully) get there in a series of steps as AIs gain more sentience (rather like following the path of evolution from insect levels of self-awareness to human levels, though very compressed). This will raise a number of issues for the AIs on the steps in between. Are they worthy of rights? If so, which ones? Can they be bought and sold as property? I feel that in a generation or so, these may become very important issues, as contentious then as, say, abortion or assisted suicide are now. The answers we decide upon for these questions will help shape the response of society and the judiciary to truly sentient AIs, when (fingers crossed) they appear on the scene. Then again, we may make a fully sentient AI all at once by complete accident once we have the computing power, but that's a different discussion.

What I really want to talk about is how the level of sentience ties into the way we view malfunctions in ASFR literature. I know that, for me, if a gynoid is sentient then I greatly enjoy some accidental or even intentional malfunctions as long as the droid in question is treated as a person, with respect. If the gynoid in question is not sentient, though, then things are completely different. In this case, I prefer more severe, even violent malfunctions or damage, and I tend to lean towards intentional rather than accidental malfunctions (although it's pretty close). I also begin to lean towards less advanced droids (visible seams in the skin, more synthetic sounding, more robotic movement, etc). Think of this new spectrum for non-sentients as stretching from Fection's stories at its less extreme end to, say, the late Heinrich Brueckmann's writings at its more extreme end.

A good example of this distinction comes from the short animated movie entitled "The Second Renaissance," part of the Animatrix short-film anthology. The first half of the story (which is pretty unrealistic, but very stylistically and emotionally relevant to the questions asked above) deals with the AIs of the world demanding some of the rights of personhood. Some people side with them, but the vast majority react violently to this. There is a series of scenes detailing violence against AIs in this part of the movie, mostly utilitarian, barely humanoid models. The scenes are taken from various violent events in 20th-century history. For example, there is a scene similar to one from Vietnam of a soldier shooting a kneeling robot in the head with his sidearm.

There is one scene which stands out, though, and that is of a group of men violently beating a gynoid to death with a metal pipe. She is (at first) almost indistinguishable from a human, and she is very obviously terrified out of her mind. To this day, I refuse to rewatch that scene. It makes me physically ill.

The gynoid in "The Second Renaissance" is sentient, as set up by the story, and she obviously knows she is going to die. Even as I write this, my stomach is tying itself into knots, just as if I was writing about a human woman being beaten to death by a gang of thugs. Herein lies the duality, though. If she were a preprogrammed automaton; if instead of being terrified she simply started stuttering and breaking down, this scene would probably turn me on instead of repelling me.

So, is this similar to how you feel? If not, how so? Anyone care to share similar examples?

I suppose I'm mainly asking this: why? If anyone has any thoughts or insights on the matter, please do share.

Sorry for how long this post was, but this stuff has been floating around inside my head for years and I've never been able to ask anyone about it until now. :D
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

Borias
Posts: 254
Joined: Sat Feb 09, 2008 8:01 pm
Contact:

Post by Borias » Sat Mar 22, 2008 5:20 pm

.
Last edited by Borias on Sat Nov 17, 2012 8:42 am, edited 1 time in total.

User avatar
fection
Posts: 490
Joined: Fri Jun 07, 2002 11:50 pm
Location: London, UK
x 2
x 90
Contact:

Post by fection » Sat Mar 22, 2008 7:45 pm

Wow. Great post. I'll pop out of 'discussion lurk mode' for this. The malfunctions that happen in my stories, I imagine, happen (you correctly deduce) to non-self-aware machines. That said, I'm still not a fan of violent malfunction, in a disfiguring or injury-equivalent manner, sentient or not. The big 'kick' for me is (I've recently realised) a kind of vicarious humiliation of the BUILDERS of the android. The fact that they've been so arrogant in the design of this purportedly 'perfect' woman that they've made some pivotal oversight that causes her to fail. A nice dose of irony never goes amiss as well.
That doesn't go toward answering your questions, I suppose...
But from your post, you seem to have a similar view to mine that at some point AIs will require rights similar to humans' (though the nature of their sentience might be quite different to ours). This would seem to be the source of the repulsion in viewing acts of violence acted upon them. They are (or WILL be) people.
I imagine that a sentient AI experiencing a malfunction of the mind would endure an experience not dissimilar to someone with some form of rapid-developing dementia - something I expect we'd not wish on anyone. This involves too much fear and a loss of identity for me to find attractive in any way.
A non-sentient machine is just built to perform a certain task. The contrast between the builders' confidence in their work and the glaring flaw in their design, I find hugely attractive.

User avatar
GZ02
Posts: 249
Joined: Sun Sep 22, 2002 12:43 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Toronto, Ontario
Contact:

Post by GZ02 » Sat Mar 22, 2008 11:16 pm

I agree with all the above as well. Once an AI develops sentience, it's too late to turn back. Even though it would be possible to reprogram it, doing so would be similar to performing a cruel lobotomy - removing the 'affected' parts while leaving only the basic parts intact. It would be wrong to take advantage of technology that way.

Non-sentient machines - my personal fave - can suffer through anything as far as I'm concerned, as they are technically no different from a fridge or a car. Still, a damaged or severely malfunctioning fembot would be a waste of an expensive investment!

The thing is, where is the line drawn? If our machines become sentient to a point where they can decide for themselves how to share the planet with us, who's to say they won't perceive us as a threat - the whole Terminator premise notwithstanding?

For now, I find fascinating the idea of a machine that can learn in ways similar to ours as it develops its own sense of self-awareness completely outside of our own. They would be our cousins. But we will have to be very careful in determining what we actually want our technology to do for us, and how it should help us, as our machines get smarter and smarter.

User avatar
xodar
Posts: 532
Joined: Thu Nov 24, 2005 1:53 pm
Location: South Texas
x 1
Contact:

Post by xodar » Sun Mar 23, 2008 7:52 am

I still don't see how you can tell if a machine is sentient. It could be a supreme imitation.
Besides, you can store each day's "mental" state separately from the machine and use it to reset after any repairs.
In fact, you could have the robot's "brain" in a safe and let it communicate by radio with the body. There might be slight pauses in the action, though, and signals weakened by metal or stone barriers....
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

User avatar
D.Olivaw
Posts: 265
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 103
x 64
Contact:

Post by D.Olivaw » Sun Mar 23, 2008 10:54 am

Practically, I usually prefer Copeland's "augmented" Turing test. It takes the normal Turing test and adds an analysis of the sentience-candidate's brain/CPU/whatever. If it can both pass the Turing test, and its brain is actually processing the information (not just a response bank or series of preprogrammed responses), then it passes the augmented Turing test, and it is considered to be sentient. As far as knowing whether it is just a "supreme imitation," I think we should apply the same standards to AI as we do to each other; that is, we don't normally look at each other queerly, questioning whether the person we're looking at is really a person or just a good facsimile. Epistemologically speaking, you can never really know for certain that anyone else is really a person, only that you are. However, we give each other the benefit of the doubt, and I feel that we should do the same for, say, extraterrestrials or AIs that pass the augmented Turing test.
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

User avatar
xodar
Posts: 532
Joined: Thu Nov 24, 2005 1:53 pm
Location: South Texas
x 1
Contact:

Post by xodar » Sun Mar 23, 2008 1:17 pm

That does make sense, though somehow there's a gap between someone or something that is descended biologically from a chain of creatures going back to what was probably the same entity and something I know has been built by one of those biological beings. At least in my awareness.

I certainly don't intend to go about with a baseball bat or sword clobbering or slashing people to see if they bleed or if wires burst out. That's the best way to tell, though.

I don't know what to do about aliens. You just don't know what they'll do in any situation -- or even if you'll be aware of them. Indeed, they might not even realize we are living or machine entities.
I suppose, just in case, we ought to be sure we can kill or incapacitate them. That experiment is a job for a robot we can then pretend to protect them from.
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

User avatar
fection
Posts: 490
Joined: Fri Jun 07, 2002 11:50 pm
Location: London, UK
x 2
x 90
Contact:

Post by fection » Mon Mar 24, 2008 11:05 am

Olivaw, that begs the question - what's to say our own sentience isn't itself a series of (admittedly incomprehensibly complex) automated responses? So complex as to be thoroughly convincing, even from within? This is not to devalue our own sentience (if it's convincing, what difference does it make?), but to suggest that artificial sentience will similarly emerge when the machine is complex enough to operate at the near-random quantum level.
(And I'm aware that this suggestion might seem to raise the paradox of what it is that's BEING convinced, but I think that question confuses the things being perceived with the perceiver. I'm talking about the perceiver itself being a kind of illusory emergence from the complexity of the brain's activity. That's how I see AI developing.)

User avatar
xodar
Posts: 532
Joined: Thu Nov 24, 2005 1:53 pm
Location: South Texas
x 1
Contact:

Post by xodar » Mon Mar 24, 2008 11:25 am

fection wrote:Olivaw, that begs the question - what's to say our own sentience isn't itself a series of (admittedly incomprehensibly complex) automated responses? So complex as to be thoroughly convincing, even from within? This is not to devalue our own sentience (if it's convincing, what difference does it make?), but to suggest that artificial sentience will similarly emerge when the machine is complex enough to operate at the near-random quantum level.
(And I'm aware that this suggestion might seem to raise the paradox of what it is that's BEING convinced, but I think that question confuses the things being perceived with the perceiver. I'm talking about the perceiver itself being illusory.)

There is a problem with that in that an "appearance" or an "illusion" must have an observer -- which is sentience.

I'm guessing you here equate sentience with some level of organization, as an "emergent" property.
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

User avatar
fection
Posts: 490
Joined: Fri Jun 07, 2002 11:50 pm
Location: London, UK
x 2
x 90
Contact:

Post by fection » Mon Mar 24, 2008 12:01 pm

I'm suggesting that the idea of a perceiver IS the illusion. When I think about my 'self', that idea seems ill-formed. Maybe it's just me - I'm probably heading toward getting dementia anyway.
And sorry if I'm taking this off-topic. I just think human sentience is a little over-rated sometimes. I think some sort of machine equivalent will definitely happen at some point.

User avatar
Korby
Posts: 627
Joined: Wed Jun 12, 2002 1:13 am
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Location: Exo III
x 48
x 8
Contact:

Post by Korby » Mon Mar 24, 2008 1:40 pm

Coming in late to the discussion, and jerking the wheel madly back to the sentience/malfunction question....

I'm not one who enjoys seeing a fembot come to any permanent harm, either, be it sentient or otherwise. Particularly not in any violent or malicious way. There is something I enjoy about malfunctions, though, especially in that they can be repaired--whatever happens to the poor fembot can be rectified completely and with relative ease.

Some kind of damage or malfunction that cannot be thus repaired would be pretty unappealing though. Consider the sentient fembot who is completely destroyed, not just physically but 'mentally'--her memory files, the data that make her who she is, irretrievably lost. That fembot is 'dead', and I find that no more pleasant to contemplate than the death of an organic human being.

(If the memory/personality data survives, though, well, an entirely new body can be constructed for the fembot, and she's as good as new, then, isn't she?)

For my part, what I really enjoy is the 'pleasurable malfunction' (as in D. Olivaw's story here, for instance). If you posit a fembot of reasonable sentience, who is aware of her robotic nature and finds that nature to be a source of erotic excitement (that being my favorite kind of fembot :D ), then you have an ideal subject for the pleasurable malfunction. She enjoys being a machine, and a malfunction is one of the most intense expressions of her machinehood. The situation becomes even more enjoyable for her when you consider that a malfunction must inevitably be corrected, leading to repairs and servicing--another intensely erotic experience for a woman who enjoys the fact that she is a machine.

And perhaps my favorite type of malfunction, along these lines, has to be the 'sexual overload'... the orgasm so intense her robotic systems can't handle it. That's hard to beat in my book.

The thing is, these types of malfunction require a certain amount of self-awareness to really 'work' for me. A simpler low-level AI would not be able to appreciate such a malfunction. It just stops functioning, and there's an end of the matter. It's the sentient fembot who knows and understands what's happening to her that makes the idea click for me... but usually only when she a) enjoys the experience and b) can take comfort in the fact that any harm done can be easily repaired.

(There are certainly other scenarios I can appreciate, of course; there's something to be said for the sentient fembot who doesn't know she's a machine, for whom any memories of the experience she finds frightening or traumatic can be simply deleted... and the non-sentient fembot who malfunctions has her appeal too, much as Fection outlines above.)
"Oh shut up Ray don't talk about gettin' with a robot
That is a ill idea"
--Roast Beef
http://achewood.com

Borias
Posts: 254
Joined: Sat Feb 09, 2008 8:01 pm
Contact:

Post by Borias » Mon Mar 24, 2008 2:20 pm

.
Last edited by Borias on Sat Nov 17, 2012 8:42 am, edited 1 time in total.

User avatar
D.Olivaw
Posts: 265
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 103
x 64
Contact:

Post by D.Olivaw » Mon Mar 24, 2008 2:31 pm

Thanks Korby, we were sort of getting caught up on a secondary point, there, when the real question I wanted to ask was about ASFR (not that I don't like philosophical discussions). I guess I'll take this post to try and finish up that part of the discussion.

It's important that we define the terms we are using, and here sentience means "self awareness," or simply that the being in question has a first person perspective. It is recognized that, if we use the certainty model of knowledge, I can know that I have a first person perspective, that I am sentient, because I am capable of phrasing the question. As Descartes saw, if I am mistaken when I introspect upon some aspect of my "self" then there still has to be someone there to be mistaken! (Cogito Ergo Sum, or just the "Cogito" in philosophical circles).

Part of the problem is that I cannot know for certain that anyone else has a first person perspective. They might, as fection put it, be acting out a series of intricate preprogrammed responses to stimuli. There is a way out of this, though, and that is understanding how their mind works (like seeing that it doesn't consist of an answer bank :D). We don't come close to knowing everything about the human mind, for instance, but neuroscience has discovered if not the way it works then what goes on as it works, and we are reasonably sure that there isn't simply a set of preprogrammed steps or responses. Beyond that, things get fuzzy. Presumably, though, we would know how an AI's mind works (having created it), and thus be able to verify whether it was likely or not that they were sentient (though as with each other we could never be certain), perhaps to a greater extent than we could with a human! (since we would, in this scenario, understand how they worked better than we understand how we work).

Philosophy of AI and cognitive science are huge and rapidly growing fields, and I'd like to go on to talk about qualia, the (not very convincing) Chinese Room argument, qualia invariance for functional isomorphs, etc., but I feel it would be better for me to start another thread if we really want to get into that. I mainly started this one to find out how you guys (and gals) feel about malfunctioning gynoids, although I'm glad for the response to my secondary concern too :D
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

User avatar
Korby
Posts: 627
Joined: Wed Jun 12, 2002 1:13 am
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Location: Exo III
x 48
x 8
Contact:

Post by Korby » Mon Mar 24, 2008 3:29 pm

Borias wrote:There's little room for objection in a scenario where the robot is enjoying herself, "orgasmic overload" and such. It's just very uncomfortable if A: the poor thing is killed, permanently destroyed, or B: is frightened, in pain, etc.
Just so. I think we are in perfect accord there, pretty much.

(Oh, and I didn't really mean to derail the philosophical end of the discussion, there, D.; just felt an urge to chime in on the original question after turning it over in my head for a while. :) )
"Oh shut up Ray don't talk about gettin' with a robot
That is a ill idea"
--Roast Beef
http://achewood.com

User avatar
D.Olivaw
Posts: 265
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 103
x 64
Contact:

Post by D.Olivaw » Mon Mar 24, 2008 4:06 pm

lol, you didn't derail the conversation, Korby, you brought it back on topic :D
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

User avatar
xodar
Posts: 532
Joined: Thu Nov 24, 2005 1:53 pm
Location: South Texas
x 1
Contact:

Post by xodar » Mon Mar 24, 2008 4:41 pm

fection wrote:I'm suggesting that the idea of a perceiver IS the illusion. When I think about my 'self', that idea seems ill-formed. Maybe it's just me - I'm probably heading toward getting dementia anyway.
And sorry if I'm taking this off-topic. I just think human sentience is a little over-rated sometimes. I think some sort of machine equivalent will definitely happen at some point.
Possibly a machine equivalent will one day exist.

I've come across this idea before, that the perceiver is the illusion, but that seems to me either a logical fallacy or a negative way of expressing whatever does happen.
Is 3-d vision an illusion because it depends on the integration of slightly different angles of vision or is there really extension in space? If not, why would the perception exist?

But I digress. I actually regard philosophy as nonsense.
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

User avatar
fection
Posts: 490
Joined: Fri Jun 07, 2002 11:50 pm
Location: London, UK
x 2
x 90
Contact:

Post by fection » Tue Mar 25, 2008 3:28 am

Oh, I'm not a fan of philosophy either. Seems to be mostly impractical speculation to me. I was trying to talk about how the brain actually might work, admittedly in a way that can't be known, I suppose.
And sorry if I seemed to take it off topic. I guess what I FAILED to say was that I will value artificial sentience as much as human and (as has now been said) any harm that comes to an AI would be equivalent to harming a human. Olivaw, I meant to suggest that THAT is where the 'uncomfortability' comes from.
That's precisely why, in my stories, the inevitably doomed 'fembot' does not have self-awareness. The appeal of the malfunction for me is that loss of PERCEIVED identity. The moment where the uncanny valley kicks in and the observer realises they're dealing with an entirely vacuous, purportedly perfect copy of a human is hugely appealing to me.

User avatar
xodar
Posts: 532
Joined: Thu Nov 24, 2005 1:53 pm
Location: South Texas
x 1
Contact:

Post by xodar » Tue Mar 25, 2008 5:57 am

fection wrote:Oh, I'm not a fan of philosophy either. Seems to be mostly impractical speculation to me. I was trying to talk about how the brain actually might work, admittedly in a way that can't be known, I suppose.
And sorry if I seemed to take it off topic. I guess what I FAILED to say was that I will value artificial sentience as much as human and (as has now been said) any harm that comes to an AI would be equivalent to harming a human. Olivaw, I meant to suggest that THAT is where the 'uncomfortability' comes from.
That's precisely why, in my stories, the inevitably doomed 'fembot' does not have self-awareness. The appeal of the malfunction for me is that loss of PERCEIVED identity. The moment where the uncanny valley kicks in and the observer realises they're dealing with an entirely vacuous, purportedly perfect copy of a human is hugely appealing to me.
Makes sense. I don't think it's exactly off topic.
Probably building bots and artificial minds will reveal more about how awareness emerges than studying living creatures.
I don't know that intuition will help because it's likely based on perceptions below the level of awareness, but electrical patterns likely will.
I've found this a vexing problem for which I always try to gather new ideas -- the other guy may have the answer!
Most philosophy is just covering old ground with word games: the item in my signature about "If a tree falls..." refers to it. There's always the person who cites some such conundrum or another in an effort to sound smart.
Last edited by xodar on Wed Mar 26, 2008 5:02 am, edited 1 time in total.
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9

User avatar
dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia
x 12
x 13

Post by dale coba » Tue Mar 25, 2008 6:00 am

I haven't much use for sentience in fembots.

There are already enough people and animals with first person perspectives, who perceive the joys and assaults of the material world in analogous ways to my own perceptions.

But I do have use for the inorganic non-person.

She is a very powerful tool, allowing the examination of the consequences of many alternate approaches to a problem. She has all the data in hand, and she can display over wi-fi any sound or image she can imagine on common display devices. She's a secretary, an answering machine, a telephone which can vocalize the distant caller either in their own vocal tones or in the fembot's default voice.

She's pleasing to look at. Positively inspiring. She can mimic so many appearances which stimulate the arousal centers of my brain. She is not the source of any conflict in ideology or agenda. She is free of any ego whatsoever, and her superego is Me.

With the fembot, I can benefit from and rely on female, human-like interactions without having to provide any benefits or reliances in return, and without the possibility of my disappointing a person.

I'm not one to believe in machine sentience; nor in a hurry to solve the debate. There is still time. But I love objects with the eye of an avid Antiques Roadshow fan. Any crafted material object bears the memory of its wear, bears witness to the times, people, places it has attended. Any electrical, technical, engineered product also foretells of the eventual disposal of its chemical constituents. I wouldn't want to see fembots damaged, the same way Charlie Rose took a header into the concrete last week when he opted to protect his new MacBook Air over his face. I don't want my heartstrings tugged at, but if the destruction had some valid end, then I wouldn't object to damage to an object. It's an aesthetic judgment.

The malfunctions I want to see are in behavior, in operation; no need to have to replace anything expensive or non-ecofriendly. Her behavior emulation fails to prevent her from realizing that her true nature is a robot who is in denial of her libidinous technosexuality and her illusion of Self-hood. Her primary emulation can always be rebooted with more ease than replacing components.

- Dale Coba
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]

FembotsInCharge3
Posts: 23
Joined: Sun Feb 24, 2008 7:43 pm
Contact:

Post by FembotsInCharge3 » Tue Mar 25, 2008 12:31 pm

xodar wrote:I still don't see how you can tell if a machine is sentient. It could be a supreme imitation.
So could, theoretically, all the other humans other than oneself. I can only know from first-hand awareness that I myself exist as a sapient being; everybody else could be clever bioautomatons, and I can't prove it either way.

I take it on faith that beings who act sufficiently sapient are so.

Robotussin
Posts: 5
Joined: Thu Aug 05, 2004 2:36 pm
x 1
Contact:

Sentience, Malfunctions and ASFR

Post by Robotussin » Tue Mar 25, 2008 10:59 pm

This has been a wonderful discussion. For those with the interest, a short story which touches on the themes of perception explored here is Philip K. Dick's "The Electric Ant."

Personally, although I have always found malfunctions of various kinds to be highly stimulating, I have (like many here) also found the thought of it happening to a sentient being for my personal pleasure horrifying. I've always thought that a great solution would be to have a gynoid which could simulate such malfunctions without coming to physical or emotional harm. Then again, if they are truly sentient, there's the obstacle of convincing them to do it for you.

I also have to agree with Fection. Having the smug designer of an android receive the comeuppance for their hubris in the form of their ruined "perfect" creation is both stimulating and a lot of fun.

User avatar
dean86
Posts: 21
Joined: Fri Oct 19, 2007 9:39 pm
Location: Toronto
Contact:

Post by dean86 » Sun Mar 30, 2008 9:01 pm

I am also in the same camp as Fection.

A Fembot as a depiction of the perfect woman, especially in accordance with the arrogance of the inventor, is VERY appealing to me. An added bonus is if the fembot should state confidently in a monotone voice starting with her own name: "Candy is the perfect girl ... perfect face ... perfect body ... etc" (as seen in one of Fection's cartoons).

As others have stated, I enjoy her downfall being brought about by an unforeseen, often comical, glitch or defensive tactic, like the unexpected line of questioning in Fection's Candybot animation or the sex-charged striptease in Austin Powers. A great follow-up would be seeing the inventor carry his short-circuited beauties out in a wheelbarrow and casually toss them into a dumpster in a "back to the old drawing board" sort of manner.

The comical element is of extreme importance for me, so that no malicious intent is suggested on the part of the second party. Sound effects like wires popping or silly facial expressions on the part of the fembot add to the enjoyment. If the second party displays any sort of malicious intent, it begins to feel like a snuff film. It kills any sort of pleasure.

This is why I think comedy films execute this fetish in the best way because it is always depicted as innocent fun, like in Stepford Wives (2004), Austin Powers, and on shows like Buffy, Kim Possible, Wicked Science, and Beverly Hills Teens.

dieur
Posts: 199
Joined: Thu Jul 17, 2003 10:40 pm
x 5
x 8
Contact:

Malfunctions

Post by dieur » Sun Apr 06, 2008 9:19 pm

In me, I think the thing that appeals about robots (androids/gynoids) and the thing that appeals about malfunctions come from separate sources.

I find stories where the robot girl isn't intelligent (sentient) to be mostly unappealing (no slight intended!). Of course, like it seems most everyone else here, I find the notion of an intelligent being being assaulted repulsive.

So, why do I like malfunctions at all? I've asked myself that question a lot, and have come up with two, probably unrelated reasons. First, one of the things I think I find appealing is the vulnerability inherent in a robot girl. It enables interpersonal interactions between characters that allow them, or even force them, to be close in situations where it might be impossible between humans (robot girl A may be fiercely independent and powerful, but bound to person B due to programming/love/just a need to trust someone for maintenance, etc). I am distant to other people by nature, and this appeals to the part of me that wants that closeness. In this context, a malfunction can, at best, cast that vulnerability into its sharpest contrast.

But at worst, and more often, it destroys the framework the above rests on. The gynoid is not sentient, or the trust is broken, or the universe is just too cruel, etc.

Then there's the other reason. I think the explanation for why lesbians are appealing applies very well here.

1) In the words of the internet, hot girls are hot
2) Hot girls in sexual situations are even hotter
3) (Gynoids) in a sexual situation involve #2 above, without a male being in the picture to mess it up.

A gynoid suffering a sexually related malfunction still fits #3.

I think, for me, that's a good first approximation. ASFR being what it is, I tend to target #2 here in stories/pictures.

User avatar
A.N.N.
Posts: 356
Joined: Wed Jun 30, 2004 4:24 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Location: USA
Contact:

Post by A.N.N. » Sun Apr 27, 2008 7:30 pm

Wow, well spoken words by all here. I have to say Dale Coba was able to state my general opinion better than I ever could. Well done, Dale.

Interestingly, I have strong instinctive empathy towards other mammals, but absolutely none at all towards any other kind of animal. I rationalize that this is due to the neocortex (sometimes called the mammal brain) that allows us all a significantly higher degree of abstract thinking, and probably familiar expressions of emotions. If you've ever heard a rabbit or deer in pain or near death, you know it's a bone-chilling sound, and just as painful to me as a human in similar circumstances. And yet, I don't feel a thing for a bug, fish, or lizard.

I bring this up because I don't "feel" a thing for machines. The rational part of my mind understands that a machine may someday become sentient, feel pain, etc. But my instinctive feeling (ironic, since instinct comes predominantly from the brain stem, or reptile brain) can't agree. And so I can rationally understand it, but I can't feel empathy for a machine. I suppose this is why I don't have any turn-off to machine violence. It doesn't actually turn me on either, but I don't consider myself the violent type to begin with.

Again, I have no rational argument against any other point of view here, but I know I can't really change how I feel about this any time soon. Perhaps, like so many other things we have prejudices against, I would have to be exposed to a sentient machine, or several, for an extended period of time to develop a sense of empathy.

Sorry I'm not as eloquent as everyone else, but I thought I should state my feelings, thoughts, and speculations. I was really inspired by all the great thoughts going into this thread.
A.N.N.
