What kind of 'relationship' do you want with your fembot?

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.
robolover69000
Posts: 99
Joined: Sat Jul 03, 2010 7:58 am
Technosexuality: Built
Identification: Human
Gender: Male
Location: Robo Central
Contact:

What kind of 'relationship' do you want with your fembot?

Post by robolover69000 » Mon Sep 09, 2013 9:32 pm

After reading the article about Davecat and his two 'RealDolls' (he is 'married' to the first, and the second is his mistress), it made me wonder:

For all of you who would like to own/have/possess a fembot, regardless of how 'smart'/'intelligent' she is, would you:
A) 'Marry' her and call her your 'wife'
B) Have a boyfriend/girlfriend relationship - Here is my girlfriend
C) Be Master/Slave (or Servant, if 'slave' is too strong a word) - I am her master, she is my slave/servant
D) Be Owner/Fembot - I am the owner, she is my fembot

Also, would the relationship be mutually exclusive (neither one can cheat on the other), one-way exclusive (she can't, but you can have other relationships), or a completely open relationship?

In the case of A), in either a one-way exclusive or open relationship, if you acquired a second fembot, would you 'marry' her as well, or would she become a 'mistress' (a variation of B), or fall under C or D?

Also, for those who want to be transformed or converted into a fembot: would you want to be free or owned, and if owned, which of the above relationships would you want to be in? Would it be mutually exclusive, open, or one-way (and in this case, one-way could be to your benefit)?


So, what do you think? Any comments?

PS: I have an idea for a story; I don't know if I'll have time to write it. It's about a rich owner who has 'wife' and 'mistress' fembots. Actually, he is on his 5th wife, and he still has the other 4 in storage, deactivated, with the understanding that if he gets bored with his current wife, he can deactivate her and bring out one of the other ex-wives.
Robo Lover 69000 the gynoid gynecologist.

PS
If you have a gynoid (or are one) in need of a gynecological exam
I am your man! Reasonable rates, breast exams are always free!

User avatar
Stephaniebot
Posts: 1918
Joined: Thu Oct 23, 2003 12:13 pm
Technosexuality: Transformation
Identification: Android
Gender: Transgendered
Location: Huddersfield
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Stephaniebot » Tue Sep 10, 2013 1:22 am

Owned, but definitely D. Mind you, I might be programmed to enjoy sex, and that would head me in the direction of A or B, I suspect.
I'm just a 'girl' who wants to become a fembot, what's wrong with that?

robolover69000
Posts: 99
Joined: Sat Jul 03, 2010 7:58 am
Technosexuality: Built
Identification: Human
Gender: Male
Location: Robo Central
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by robolover69000 » Tue Sep 10, 2013 3:13 am

How would you define D? How would it differ from A, B, or C? Also, how would you feel if your owner acquired another fembot? Would you view her as a partner/fellow fembot/sister, or as a rival/competition?
Robo Lover 69000 the gynoid gynecologist.

PS
If you have a gynoid (or are one) in need of a gynecological exam
I am your man! Reasonable rates, breast exams are always free!

User avatar
DollSpace
Moderator
Posts: 2083
Joined: Tue Jun 11, 2002 6:27 pm
Technosexuality: Built
Identification: Android
Gender: Female
Location: Charging Terminal #42
x 96
x 28
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by DollSpace » Tue Sep 10, 2013 3:06 pm

Definitely A or B, and, even though she would still be "owned", she'd get lots of freedom... it's just that it's always good practice to have a failsafe to stop her if she's endangering you, other people or robots, or herself!

User avatar
Miss Pris
Posts: 106
Joined: Tue Aug 07, 2012 11:27 am
Technosexuality: Built
Identification: Cyborg
Gender: Female
Location: The exotic occident
x 8
x 4
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Miss Pris » Sun Sep 15, 2013 7:04 am

I would want a girlfriend or a boyfriend, not a slave/servant or possession, but then again I want a sentient artificial partner, so I may be one of the outliers on this forum anyway. I think I would have to have an open relationship in every respect with this partner, too, as I agree with Robotman to an extent - there are some parts of my psyche I might want to reserve for another human only. No matter how sentient my robot partner, a part of me would always think of that being as alien. I would know that his or her consciousness had developed differently from mine on so many levels, and I would wonder what s/he could really empathize with or understand (and not because s/he would be artificial; I'd feel the same about any alien being - not that we can't have that same problem with humans...)

robolover69000
Posts: 99
Joined: Sat Jul 03, 2010 7:58 am
Technosexuality: Built
Identification: Human
Gender: Male
Location: Robo Central
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by robolover69000 » Sun Sep 15, 2013 9:15 am

Just thought of two other relationships, for the case where you are the one who built & programmed the fembot (instead of having her built/programmed by someone else to your specs).

Which could lead to these relationships:
E) You are her 'Father' and she is your 'daughter'
F) You are her 'Creator/God' and she is your subject/worshipper.

Disturbing I know :)
Robo Lover 69000 the gynoid gynecologist.

PS
If you have a gynoid (or are one) in need of a gynecological exam
I am your man! Reasonable rates, breast exams are always free!

User avatar
daphne
Posts: 88
Joined: Sat Apr 01, 2006 3:21 am
Technosexuality: Built
Identification: Android
Gender: Female
Location: Tucson, AZ
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by daphne » Sun Sep 15, 2013 2:21 pm

The interesting thing about questions like this, to me at least, is the natural agitation caused by pitting human morality against robot morality. We have a strong tendency to anthropomorphize non-human things, from animals to cars to computers; the entire belief system of animism is essentially this taken to its logical end point. However, animal and particularly human traits come from the very specific evolutionary processes that crafted the human psyche, for which robots may not have any analog whatsoever.

Put more simply: humans, like all organic life, are strongly adapted for survival. We value our individuality and freedoms specifically because that is the sort of morality that naturally comes from an adaptive animal. It is beneficial to the specific human survival instinct to be unique and unfettered rather than identical and constricted. Mind you, this doesn't apply to all animals; bees, for example, have survived handily doing the exact opposite. Each animal in the kingdom has adapted in a specific way, and their "morality" is essentially a collection of behaviors that favor their adaptations.

What, then, can we say about robot morality? Robots are not an evolved species; they are specifically designed. They do not adapt with each generation; they are adapted externally. Every machine is built with a purpose; the eternal questions of "who am I" and "why am I here" are patently irrelevant to a machine, because that information is inherent in their construction. A car is here for driving. A cellphone is here for contacting my friends. A robot companion is here for love.

It can be said, then, that a robot's "survival instinct" is perhaps a moot point, and at best is dependent on fulfilling the obligation it was designed for. So why would a machine even have free will? What good would it do? And even if it could and did, free will doesn't reflexively mean doing the opposite of what's expected of you. Robots with free will might well elect to serve unquestioningly anyway, which makes the original question of this thread functionally irrelevant.

Interesting stuff to think about.

User avatar
Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Grendizer » Tue Oct 22, 2013 12:21 am

Actually, I believe it is quite possible (even likely) for a sufficiently sophisticated robot to develop a moral sense, precisely because robots spring from beings (us) who do have such a sense, and because having one is actually useful to them in operating autonomously, for the same reason it is useful for them to walk upright and have two legs and two arms: we have crafted an environment suited to us, and therefore anything that serves us can benefit from anthropomorphic design. And as with most tools, this is a fundamentally double-edged state of affairs.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

User avatar
dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia
x 12
x 13

Re: What kind of 'relationship' do you want with your fembot

Post by dale coba » Tue Oct 22, 2013 3:59 pm

I can imagine the moral sense arising in systems significantly less complex than that of a sentient A.I. (whatever that is).

With a fembot, you can deactivate her moral sense with the flip of a switch or a swap of code libraries. I think a true, human moral sense is also a reflection of our frailties: where does our moral sense break down, and under what circumstances will we fail to maintain our own standards?

That's nothing like the moral sense of a machine, unless you intentionally hobble her to imitate the tensions and terrors of a human mind. Shall we program a limbic system with fight-or-flight instincts? Hell no.

I anticipate fembots will be fully featured, with the complexities of character you'd want to see arising from relatively simple neural networks; but there won't be any more A.I. inside than you'd find in a toaster.

Those who would insist on a "real A.I." are on their own adventure, with a whole separate set of moral/ethical and safety concerns that I am glad not to have to contend with.

- Dale Coba
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]

User avatar
Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Grendizer » Wed Oct 23, 2013 9:24 pm

True, dale coba, but you wouldn't have to insist on it to be caught up in the dilemma and be forced into a moral stand. It's hard to say how likely that will become, but we will become (and to some extent already are) a man-machine civilization. We have barely scratched the surface of the implications. We may desire a certain kind of relationship that enables us to "flip the switch", to paraphrase you, but that could become increasingly problematic as time goes on, maybe for the same reason that finding a VCR is troublesome now: obsolescence. We may find that true sentience in digital systems has some undiscovered value, and it may out-compete other systems, making them hard to acquire. But even before that, you may be challenged in a similar way as the couple illustrated in Guess Who's Coming to Dinner; despite other concerns, you may find yourself (against your better judgment) in love with a horrifyingly free artificial personality who wants nothing more than to make you happy. :twisted:
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

User avatar
dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia
x 12
x 13

Re: What kind of 'relationship' do you want with your fembot

Post by dale coba » Thu Oct 24, 2013 5:44 am

You talk about what could be, but any personhood chaos that gets created will not be the fault of people like me, who want only the illusion.

Until your mythical undiscovered value presents itself, there are only liabilities and vanity on that side of the ledger.

- Dale Coba
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]

User avatar
Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Grendizer » Thu Oct 24, 2013 12:37 pm

Actually, I doubt that, simply because humans sometimes do things like create nuclear weapons as weapons first (rather than as viable sources of energy), the benefit of which was dubious even at the time, considering the long-term risk (even if you considered the long-term risk of losing the war greater, as it could have been won -- admittedly with greater Allied casualties -- without it). What I'm getting at is that just because something is seen as a liability by some doesn't therefore mean it won't be pursued. And once it is pursued, it may have a contagion effect and spread with the same results that I described, even if it doesn't naturally out-compete the alternatives. To extend the VCR metaphor, just look at Betamax vs. VHS. Clearly the inferior product won.

So you see, my scenario doesn't depend on a mythical benefit, just opportunity and viability. I'm not even saying it will happen, since machine sentience is itself unproven, but my point is that someday you may not have the choice upon which your premise depends.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

User avatar
dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia
x 12
x 13

Re: What kind of 'relationship' do you want with your fembot

Post by dale coba » Thu Oct 24, 2013 3:36 pm

Fair enough point - I haven't any choice at all in such matters, nor in many that are far more disastrous and immediate.

I was 14, but I probably steered my family's decision to buy the Betamax. Later, I watched the train wreck of Windows™ from afar; such a horrible setback to the developing skills of many, many millions of budding computer users.

I guess I am speaking purely philosophically about the merits of A.I. vs. fake A.I. One would have to assume that both will come into existence.

As to the practical outcomes, I wonder what the ethical philosophers have decided? It would help tell us how long it might take before A.I. "rights" will be validated by society and the law.

- Dale Coba
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]

User avatar
D.Olivaw
Posts: 256
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 52
x 54
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by D.Olivaw » Thu Oct 24, 2013 3:51 pm

Great conversation here, especially daphne, Grendizer, and dale coba. I think the discussion is running into a bit of a roadblock, though, because from a cog-sci perspective we're using what seems like an inconsistent set of definitions for the key concepts. Of course that's okay, because even philosophers of mind, neuroscientists, and cognitive scientists disagree on these matters. It would still help to make the definitions explicit and nail down some agreement (or even agreement to disagree) on this topic before going back to the higher-level arguments. I also think we're making possibly unwarranted technological and psycho-functional assumptions about the hypothetical entities we're describing our hypothetical relationships with.

We talk a lot about sentience, formally defined as subjective perceptual experience or the ability to experience qualia (e.g. the subjective experience of "redness" or "hunger"). Now, one of the things we're reasonably sure of is that, assuming subjective experience does exist (is ontologically distinct), entities whose cognitive functions are similar will likely have analogous qualia. If I make a functional human brain-analogue out of different materials or (to use the original thought-experiment on this topic) if I slowly replace your neurons one by one with functionally identical artificial ones, the resulting entity will have the same subjective experience as a human being. If you're interested in the nitty-gritty of the argument (it's a discontinuity argument) I definitely advise you to go hunt up David Chalmers's paper on functional analogousness and philosophical zombies, but for now we'll move on using this conclusion as one of our premises. One last thing to note is that while this argument precludes functional analogues with widely varied qualia, it does not preclude very functionally-different minds with similar qualia. It does imply (via the contrapositive) that entities with different qualia have functionally different cognitive systems.

Another interesting piece of the puzzle has to do with the fact that we don't actually know how our own brains work yet, and in this case the especially important thing is whether we're Turing complete or not. There are camps on both sides of the issue, staring at each other hostilely over the philosophical divide. Given the neuroscientific evidence about the importance of non-binary neurotransmitter level differences, among other things, I tend to come down on the side of us being truly analog entities, and therefore not Turing-complete. This implies that no digital computer (i.e. Turing complete) can be fashioned into a functional copy of a working human mind. Of course, whatever produces our subjective experience may turn out to not be related to the aspects of our function that are non-complete, or it could be that Turing-machines can give rise to some sort of alien consciousness no less capable of subjective experience than us but with very different qualia. Red would not be "red" to them in the same way it is to us, something very hard to wrap your head around; probably impossible to comprehend, in fact. For non-rigorous inferential reasons (a philosophically informed hunch, basically) I think the former is unlikely. The sort of being produced by the latter is unlikely to be terribly relatable to us humans, and interacting with it might be positively Lovecraftian. I sort of hope it isn't possible, to be honest. Of course, everything above is meaningless if you disagree with me about the Turing-completeness of humans.

So, dale coba's statements refer to fembots that are outgrowths of our current technology. They're Turing machines with software that mimics human behaviors, but are as controllable and bounded as the toaster he mentioned. I don't think we're terribly far away from such machines given recent advances in (weak) AI and robotics. Under the above assumptions, it's unlikely for such machines to ever become as behaviorally squirrelly as you mention, Grendizer, unless someone specifically programs them with the "artificial limbic system" dale mentions. That's something that always gets me about scifi robots like Lt. Cdr. Data, by the way. If you can make an artificial conscious being (quite an "if"!) then giving it emotions should be a relatively simple task. Just simulate the functional equivalent of the neuro-chemical system that produces our "feels."

The flipside of probably not being able to make human-like digital AI is that any such entity we do manage to make will be based on inherently less... controllable approaches like analog neural networks or some yet-uninvented paradigm. They might very well prove to be as changeable and unpredictable as we are. As Hofstadter points out in Gödel, Escher, Bach, they might also have limitations parallel (though not necessarily identical) to our own. Contra scifi's image of rampant robots, just because your brain is made of millions of tiny computers doesn't mean you can consciously perform rapid mathematical calculations. Think of the probable computational power of a single human neuron, for instance; all that power doesn't mean much to you up at the toweringly abstracted heights of subjective experience. Being constructed of logical elements doesn't necessarily imply that the system's higher-level function is logical (or controllable).

On an off-topic note, Grendizer, I disagree with your implied assumption that weapons are not inherently useful in and of themselves. You are correct that the war would have been won (more slowly, and at immensely greater cost in Japanese civilian lives especially) even without the Manhattan Project, but that kind of misses the point. Much of the imperative to develop nuclear weaponry arose from the fact that, since everyone had equal access to nuclear physics, someone was going to do it eventually. It was judged better that those fellows on the other side of the ocean who, on both sides, showed little compunction about killing tens of millions of innocent civilians (even their own) did not develop a monopoly on them. Another point is that nuclear weapons in general (and especially fission weapons, whose practical upper yield is limited) are not the doomsday weapons of popular imagination even when they are employed. The Allies dropped the equivalent of around 500 Little Boys on France and Germany with conventional bombing (remember to adjust for the destructive force scaling with the 2/3 power of yield!), and the firebombings of Japanese cities did much worse than the atomic bombings. The (correct) understanding at the time was that a nuclear weapon is mostly just a way for one bomber to drop one bomb that does the work of an entire 200-bomber wave armed with conventional bombs.
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

User avatar
dale coba
Posts: 1868
Joined: Wed Jun 05, 2002 9:05 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Location: Philadelphia
x 12
x 13

Re: What kind of 'relationship' do you want with your fembot

Post by dale coba » Thu Oct 24, 2013 8:39 pm

Thanks, splendid information!

OT: We could have bombed something unpopulated first. Maybe Hiroshima still burns before they surrender, but maybe Nagasaki is spared. Salting the earth for generations, or forever, with radioactive elements from a bomb adds the most contemptuous touch to the mass murder.

All I know is that I've heard a number of holes poked in the premise that Truman's order was a permissibly ethical choice. Even now, it's hard to imagine having the full story, with all the accounts lined up truly.

I'm not sure how that folds back into fembots. I guess we'll hear a lot of phony arguments that seemingly justify inventors' reckless or lethal choices. When the time comes, I hope they tap you to help chuck out the chaff, so society can get hopelessly mired down in the legitimate, confounding questions central to machine sentience.

- Dale Coba
8) :!: :nerd: :idea: : :nerd: :shock: :lovestruck: [ :twisted: :dancing: :oops: :wink: :twisted: ] = [ :drooling: :oops: :oops: :oops: :oops: :party:... ... :applause: :D :lovestruck: :notworthy: :rockon: ]

User avatar
Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Grendizer » Thu Oct 24, 2013 9:49 pm

D.Olivaw wrote: Another interesting piece of the puzzle has to do with the fact that we don't actually know how our own brains work yet, and in this case the especially important thing is whether we're Turing complete or not. There are camps on both sides of the issue, staring at each other hostilely over the philosophical divide. Given the neuroscientific evidence about the importance of non-binary neurotransmitter level differences, among other things, I tend to come down on the side of us being truly analog entities, and therefore not Turing-complete. This implies that no digital computer (i.e. Turing complete) can be fashioned into a functional copy of a working human mind.

I disagree. It is known that a digital system can emulate analogue systems to nearly any desired degree of granularity, given sufficient processing power. The "nearly" I mention is sufficiently minute that there is no evidence that this level of detail even affects neural function.
D.Olivaw wrote:So, dale coba's statements refer to fembots that are outgrowths of our current technology. They're Turing machines with software that mimics human behaviors, but are as controllable and bounded as the toaster he mentioned. I don't think we're terribly far away from such machines given recent advances in (weak) AI and robotics. Under the above assumptions, it's unlikely for such machines to ever become as behaviorally squirrelly as you mention, Grendizer, unless someone specifically progams them with the "artificial limbic system" dale mentions. That's something that always gets me about scifi robots like Lt. Cdr. Data, by the way. If you can make an artificial conscious being (quite an "if"!) then giving it emotions should be a relatively simple task. Just simulate the functional equivalent of the neuro-chemical system that produces our "feels."
Actually, emotion is the cutting edge of AI research. Researchers have already mastered several areas of intelligence to varying degrees. Emotional intelligence isn't one of them. I'm not saying it won't be done, but it isn't as simple as you seem to suggest. Unlike things such as mathematics or even speech recognition, which can be enabled through objective rule sets or neural nets churning through mass-aggregated data, emotions are less well understood. It's coming along in some ways, as research at MIT has shown, but it doesn't yet meet the capability of any other currently deployed expert systems.
D.Olivaw wrote:On an off-topic note, Grendizer, I disagree with your implied assumption that weapons are not inherently useful in and of themselves[...]

I didn't imply that weapons aren't inherently useful. What I said was that development of specifically nuclear weapons was a long-term hazard, by way of saying that just because something has some immediate use or is able to be produced doesn't mean it's the best idea in the long run.

I also reject your implication that fission bombs used on a mass scale would somehow simply equal the destruction wrought by conventional weapons of the same megatonnage. The upper yield isn't the problem. The after-effects on human population centers and arable land are the problem. Simple immediate destruction of lives and property is nothing in comparison to the long-term effects of fallout or radiation sickness. People are still suffering from Hiroshima, after all. And you don't seriously think that they didn't already have a theoretical understanding of fusion bombs? They were developed quite rapidly after the war. And even if all that weren't true, some of those same scientists clearly had misgivings about the wisdom of detonating even one device in the desert. Not all of them were certain Trinity wouldn't cause total global destruction.

Given all this, allowing several hundred thousand more deaths -- even a million more -- while sickening, can't compare to the possible future annihilation of the species, which is a threat we still live with.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

User avatar
D.Olivaw
Posts: 256
Joined: Sun Jan 20, 2008 9:52 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: Twixt dusty books and giant guns
x 52
x 54
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by D.Olivaw » Fri Oct 25, 2013 11:46 am

dale coba wrote:Thanks, splendid information!

- Dale Coba
Thank you! You're very welcome; I find discussions like this endlessly fascinating.
dale coba wrote:I'm not sure how that folds back into fembots. I guess we'll hear a lot of phony arguments that seemingly justify inventors' reckless or lethal choices. When the time comes, I hope they tap you to help chuck out the chaff, so society can get hopelessly mired down in the legitimate, confounding questions central to machine sentience.

- Dale Coba
Well, I hope to be able to help. I've made it my career, in fact: I'm a disruptive technology analyst.
dale coba wrote:OT: We could have bombed something unpopulated first. Maybe Hiroshima still burns before they surrender, but maybe Nagasaki is spared.

- Dale Coba
Using one of our devices on an uninhabited bit of ground or sea as a demonstration was considered, but turned down for several reasons. One was the time crunch: with the Soviets entering the war in the Pacific, we were desperate to avoid a Germany-like partition of Japan. The US was largely out of trained infantry, and the public's willingness to put up with a campaign that would probably double the total figure for American war dead was less than rock solid, so the opportunity to knock Japan out quickly was militarily and politically important. The most important point, though, has to be that they didn't surrender after we actually used one on a city. It took the Emperor a little while even after Nagasaki to come to the conclusion that surrender was necessary, and a military cabal still tried to kidnap him on the way to the radio station to prevent it. It's hard for us to understand that the Japanese leadership was willing to accept the annihilation of the Japanese people as long as it could be done with honor, and that the people were surprisingly willing to follow them in this. Hiroshima and Nagasaki convinced them that this would not be the case, because they could be exterminated from the air. Quite a bluff, really, because even with the arsenal of the late '50s we couldn't have come anywhere close to doing so. Much less so with the 3 weapons we actually had (plus another every few months).
dale coba wrote:Salting the earth for generations, or forever, with radioactive elements from a bomb adds the most contemptuous touch to the mass murder.

- Dale Coba
This is very hyperbolic. Even with a ground-burst you can safely plant (and eat from) a vegetable garden at ground zero after only 3-5 years. This is for reasons of simple physics: the more radioactive a substance is, the shorter its half-life and the faster it decays. For instance, contra the public conception (looking at you, Fallout), the radiation effects of a superpower-level exchange (the specific numbers I'm using come from studies of a US-USSR exchange in the mid '80s) in which the majority of both arsenals are expended would have dissipated to near-background levels (with a few exceptions) within 5-10 years. The genetic effects would persist for several generations due to mutations in the sex cells, and for the first 60-70 years afterward you'd be dealing with elevated cancer rates among people who survived the initial attack.
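For what it's worth, here is a toy sketch of the decay arithmetic behind that claim (the half-lives below are rough textbook figures for one short-lived and one long-lived fission product, chosen by me as illustrations, not numbers taken from the studies I mentioned):

def remaining_fraction(t_days, half_life_days):
    # Decay law: N(t)/N0 = 2 ** (-t / T_half). The shorter the half-life,
    # the more intensely a given quantity radiates now, and the sooner it is gone.
    return 2.0 ** (-t_days / half_life_days)

# Rough illustrative half-lives: iodine-131 ~8 days, cesium-137 ~30 years.
for days in (30, 365, 5 * 365):
    print(days,
          remaining_fraction(days, 8.0),         # I-131: essentially gone within months
          remaining_fraction(days, 30 * 365.25)) # Cs-137: lingers, but each atom decays far more slowly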

Now, switching gears somewhat.
Grendizer wrote:I disagree. It is known that a digital system can emulate analogue systems to nearly any desired degree of granularity, given sufficient processing power. The "nearly" I mention is sufficiently minute that there is no evidence that this level of detail even effects neural function.

- Grendizer
This is the case only for linear, non-Complex systems. For non-linear Complex systems infinitesimal (read: below the simulation granularity) fluctuations in micro-states cause changes in the behavior of the macro-states.
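To make that concrete with a toy example (the logistic map here is my own stand-in for "a non-linear Complex system", not a model of anything neural): a perturbation far below any fixed simulation granularity still ends up changing the macro-level trajectory.

def logistic_trajectory(x0, r=3.9, steps=80):
    # x_{n+1} = r * x_n * (1 - x_n): a standard chaotic map for r near 4.
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000000)
b = logistic_trajectory(0.400000000001)  # perturbed by 1e-12, far "below the granularity"

for n in (10, 40, 70):
    print(n, abs(a[n] - b[n]))
# The gap grows by many orders of magnitude and reaches order 1 within roughly
# 60-70 iterations: micro-state detail ends up dominating macro-state behavior.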
Grendizer wrote:Actually, emotion is the cutting edge of AI research. Researchers have already mastered several areas of intelligence to varying degrees. Emotional intelligence isn't one of them. I'm not saying it won't be done, but it isn't as simple as you seem to suggest. Unlike things such as mathematics or even speech recognition, which can be enabled through objective rule sets or neural nets churning through mass-aggregated data, emotions are less well understood. It's coming along in some ways, as research at MIT has shown, but it doesn't yet meet the capability of any other currently deployed expert systems.

- Grendizer
Note I said if you can make a conscious machine, then giving it emotions shouldn't be the hard part. :lol:
Grendizer wrote:I didn't imply that weapons aren't inherently useful.

- Grendizer
My apologies, then. I misread the subtext of your response.
Grendizer wrote:I also reject your implication that fission bombs used on a mass scale would somehow simply equal the destruction wrought by conventional weapons of the same megatonnage.

- Grendizer
Megatonnage (yield) is the wrong measure in this case, because the destructiveness of weapons does not scale linearly with it. Blast overpressure scales with the 2/3 power of yield (because it's spread over a sphere whereas the target is distributed across a flat surface). So if we build a bomb with 1000 times the yield, it is only 100 times as destructive in terms of overpressure. Thermal damage doesn't suffer as much (though it's still not linear), but while the heat-wave was devastating to wood-and-paper Japanese cities, modern cities are far less susceptible and take most of their damage from the blast. The weapon's mass also goes up much faster than linearly with yield. These factors are why, for instance, bomb yields peaked in the late '60s. Everyone started putting multiple warheads on missiles, so that you can use three 333-kiloton devices on a target and get much more damage at a lower weight than from a single 1-megaton device. More reliable, too, because all three are unlikely to fail.
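To put rough numbers on that (my own back-of-the-envelope sketch of the same arithmetic, not additional data):

def blast_equivalent(yield_megatons):
    # "Equivalent megatonnage": blast-damage area scales roughly as yield ** (2/3).
    return yield_megatons ** (2.0 / 3.0)

print(blast_equivalent(1000.0))       # 100.0 -> 1000x the yield buys only ~100x the blast damage
print(blast_equivalent(1.0))          # 1.00  -> one 1-Mt warhead
print(3 * blast_equivalent(0.333))    # ~1.44 -> three 333-kt warheads cover ~44% more area
# Which is the MIRV logic above: several smaller warheads do more blast damage
# per unit of weight than one big one, and are less likely to all fail at once.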

Now that we're talking about the same thing, you are correct that they would not simply equal the destruction wrought by conventional weapons. My real contention here is that they are not the doomsday weapons of popular imagination, and I'm sorry if I didn't make that clear. As I said, in terms of blast overpressure we dropped the equivalent of 500 Little Boys on France and Germany over several years. Due to prompt radiation effects and fallout, dropping 500 actual Little Boys would be much worse, though that depends strongly on the targeting scheme (airbursts, like what you use to destroy cities, produce relatively little fallout, for instance). As I point out in my replies to dale coba, though, not nearly as much worse as you seem to think. Within 20 years or so, the former German-occupied territories would be rebuilt, likely repopulated, and almost certainly agriculturally self-sustaining again. I wish to point out that I'm not waving my arms around here; this is based on our actual knowledge of actual effects.
Grendizer wrote:And you don't seriously think that they didn't already have a theoretical understanding of fusion bombs?

- Grendizer
I don't believe I ever said that they didn't. What they didn't know was if they were feasible weapons (the first one was the size of a building due to the cooling requirements) and precisely how much of their theoretical yield was attainable. It took a long, difficult, and expensive program to answer those questions and it was never certain to succeed. In the aftermath of the successful construction of weaponizable thermonuclear devices, the US and USSR's nuclear targeting strategies and nuclear diplomacy changed very significantly to take account of the new and different nature of these weapons when compared with the particular limitations of fission weapons. The field of thermonuclear strategy wasn't well explored until the 70's when we had sufficient computing power to run the right simulations. After that everything settled down very rapidly to a new equilibrium state that lasted through the end of the Soviet Union.
Grendizer wrote:And even if all that weren't true, some of those same scientists clearly had misgivings about the wisdom of detonating even one device in the desert. Not all of them were certain Trinity wouldn't cause total global destruction.

- Grendizer
This is mythological. It was known months before the Trinity test that the mechanism by which some had proposed the weapon might ignite the atmosphere didn't work. Some of the scientists were taking humorous bets about it immediately before the test, which likely spawned the legend that there was real concern.
Grendizer wrote:Given all this, allowing several hundred thousand more deaths -- even a million more deaths, while sickening, can't compare to the possible future annihilation of the species, which is a threat we still live with.

- Grendizer
Aaand now I have to be careful. I am reminded of one nuclear scientist who, speaking at a conference in London, explained very carefully and thoroughly that initiating a 10-megaton airburst over the most densely populated part of London would leave 80% of the population and 85% of the infrastructure intact (i.e. quickly reparable in the case of infrastructure, and not permanently injured or sickened in the case of people) and was almost ejected from the room by an angry crowd. It's vitally important that, given the stakes, we think about these matters with the utmost objectivity and respect for scientific fact rather than emotional terrors. I sincerely beg you to do so while reading the following:

The idea that nuclear weapons pose a direct existential threat to the survival of the human species is one of the most persistent science myths of modern times.

Thermonuclear weapons have their own particular limitations, and while it would have been the most catastrophic event in the history of civilization a full exchange between the US and USSR at the height of their nuclear power (mid 80's) would not have ended the world, humanity, or even technological civilization. People who talk as if it would have tend to massively overestimate the secondary (radiation, etc.) effects of thermonuclear weapons (and the reliability of delivery systems) or to forget that the world consists of more than just the US, Russia, and Europe (the primary warhead soaks in such a scenario). What of Brazil, India, and the rest of the non-aligned powers? They had industry and agriculture that would likely not be heavily affected by such an exchange, though they would still suffer greatly in the decade(s) of chaos attending the inevitable reordering of the global balance of trade and power.

I'm very happy such an event never occurred, and while we came close a few times (Able Archer in '83), a combination of careful planning, mutual understanding of the dangers involved, and the heroic actions of individuals prevented it. As things stand now, with the world's vastly reduced arsenals, the threat of even the above scenario has largely passed. You talk about a mythical nuclear self-destruction of our people; I would like to turn your attention to something very real: the lack of a third or even fourth global mechanized war in the last 60 years. Nuclear deterrence was the key to preventing a Soviet takeover of Western Europe that, if successful, could have prolonged the survival of that totalitarian state for decades. Even if not successful, it would have drawn the Western powers into a multi-year conventional war between the Communist and non-Communist worlds that would have been every bit as deadly and violent as WWII. Our knowledge of the priorities and decision-making process of the USSR (much augmented by the vast numbers of documents released since it fell) indicates that they would not have hesitated had such an opportunity presented itself.

Nuclear deterrence is the safest known means for the prevention of the sort of conventional wars that killed millions in the first half of the last century. It is the only solution humanity has ever discovered to the pattern of increasing technological proficiency leading to ever more violent and destructive wars. Even if several regional nuclear wars were to break out over the next fifty years (a possibility I rate as unfortunately likely), the casualties would be the merest drop in the bucket compared to those saved by the power of deterrence. Everything we understand about thermonuclear weapons and strategy tells us that not fewer but more, and more reliable, nuclear weapons in the hands of established state actors is what decreases the chance of wars, both conventional and nuclear. The US's assembled and deployed arsenal has recently fallen below 2000 weapons and its readiness state is execrable, as it is seen as a career graveyard with little opportunity for promotion. This does not make anyone safer; it makes us all less so.

Si vis pacem, para bellum: if you want peace, prepare for war.

Sorry for the epic thread derail, mods.
"Men, said the Devil,
are good to their brothers:
they don’t want to mend
their own ways, but each other's"
-Piet Hein

--NightBattery--

Re: What kind of 'relationship' do you want with your fembot

Post by --NightBattery-- » Sat Oct 26, 2013 3:32 pm

http://www.youtube.com/watch?v=SGptO6j3G-U
That was such a great read. :D I personally believe that now, the ones making the doomsday clock tick are the nations in the Middle East that are fighting over space and religious differences and are loaded with nukes.
And...
I would pick B. :)
I find it thrilling to mess with a machine that is at the human level, reprogramming her and leaving her friends in contempt as her mind changes.

The Thinker
Posts: 43
Joined: Wed Jun 26, 2013 6:23 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
x 9
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by The Thinker » Sun Oct 27, 2013 3:22 pm

I would only have a serious romantic relationship with a fembot if she was for all intents and purposes a human being. She would have to be indistinguishable from a real human being visually and behaviorally. She would have to be autonomous, sentient, genuinely caring, creative, fearful of her own demise etc. The only difference would be that she'd be internally mechanical.

As a sexual partner I have a different set of requirements. It's the fine line between woman and machine that I find the most erotic. She/It would have to be almost indistinguishable from a woman visually and behaviorally, but at the same time obviously a machine.

A robotic romantic partner would have to be unquestionably "alive", while it would ideally be ambiguous whether a sex bot is alive or just a cold and callous machine.

SunshineInTheGarden
Posts: 61
Joined: Thu Oct 10, 2013 5:11 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by SunshineInTheGarden » Wed Oct 30, 2013 5:35 am

For me, it would be C.

User avatar
mangaman
Posts: 49
Joined: Wed Oct 09, 2013 6:25 pm
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by mangaman » Thu Oct 31, 2013 6:50 pm

A or B, but have her programmed with a bit of C :mrgreen:
" >_> im spiderman....everybody gets one"

robolover69000
Posts: 99
Joined: Sat Jul 03, 2010 7:58 am
Technosexuality: Built
Identification: Human
Gender: Male
Location: Robo Central
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by robolover69000 » Wed Nov 06, 2013 6:25 am

froggy99 wrote:Not sure where it falls on your ABCD list, but any fembot I had would pretty much be like owning my car. She'd be a fun toy and capable tool but not much more than that. The only other option for me would be a machine so advanced that she was for all purposes a synthetic living being, and in that case she wouldn't be *mine*, she'd be hers, and whatever the relationship would be, it'd be up to both of us. Anything in between those two endpoints would feel like some kind of slavery to me and that just wouldn't be ok.
Okay, going with the 'car' analogy, would you let someone else 'drive your car'?
Also, going with the synthetic living being: how would you react if she said "I think we need to take a break and see other people" or "I am seeing someone else, and I think you should too"?

Regardless of which relationship type I listed in my previous post, how comfortable would you feel in 'sharing' your fembot?
Or is she exclusively yours?
Are you exclusively hers?
Robo Lover 69000 the gynoid gynecologist.

PS
If you have a gynoid (or are one) in need of a gynecological exam
I am your man! Reasonable rates, breast exams are always free!

justanordinarygirl
Posts: 3
Joined: Thu Nov 07, 2013 11:25 am
Technosexuality: Built and Transformation
Identification: Human
Gender: Female
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by justanordinarygirl » Fri Nov 08, 2013 6:48 pm

I thought I'd post from the other side of the equation...the fembot side. Our relationship is a mixture of (mostly) A and D, I think. My husband and I have a great setup. I am a fembot, always, but have various personality programs to allow for different situations, including a "default" personality (aka just an ordinary girl :lol: ). So, I'm in default mode unless one of us chooses to load a different program. So there is definitely some A happening, but also a little bit of everything else depending on which program is currently running. It works for us :thumbsup:

User avatar
Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Re: What kind of 'relationship' do you want with your fembot

Post by Grendizer » Sat Nov 09, 2013 12:53 am

froggy99 wrote:...
If she were of the synthetic being type on the other end of my scale, then I would react the same as if a natural born human girlfriend said it. I'd think it was crappy but if you're going to accept free will, you gotta accept all of it, and sometimes that means you man up and you move on. I guess being some form of synthetic there could be the possibility of some kind of 'reprogramming' so she wouldn't feel that way, but every time a thought or action was censored you'd be taking away just a little bit more of what you loved in the first place. Not a demon I'd want on my shoulders.
I should think that if she's capable of being reprogrammed, that is part of her purpose, so modifying that program isn't inherently taking anything away from her, in the same way that spitting or shaving your nails isn't inherently destructive of your own body. It's designed that way. Anyway, how would the change be negative if it's a change that brings her back to the way she was, mentally, when you got together? After all, many people with mental disorders (for instance) would kill to get back to the "way they were," because the way they are now is destroying the things/people they love -- or even themselves. Having the option to reboot is great. You might question the analogy between free will and mental disorder, but the very essence of many mental disorders is that those who are suffering are doing exactly as they wish, and many people who aren't victims are doing exactly the opposite. It is one of life's great ironies.
froggy99 wrote: I had assumed the original question was of the "hypothetical real world" type. I go in for the amusing "No, I can't be a robot!" fantasies as much as the next person here, but in much the same way as I wouldn't want to see a big tower with a glowing eye pop up somewhere outside Peoria, Illinois and try to cloak the world in darkness, I wouldn't really want to see that play out in the real world. Until she could have an equal say IN the relationship, it wouldn't BE a relationship, it would be ownership, and I would have difficulty owning an individual capable of independent thought on even a fairly basic level.
I suppose I agree, to a point, although if she wants to be owned then it may be different for me. Although I will point out that many true relationships aren't equal. In fact, equal relationships are more an historical anomaly than anything else, a recent fashion whose ideal we often fall quite short of, even when we try. Also, given that machines are built for a purpose, a desire for ownership may very well be the case for many androids in the future, even if they have "free will."

I'd also point out that free will itself is really a myth. We have wills, to be sure. Calling them "free" is a bit like calling Apple's Siri self-aware. It's a parlor trick. So maybe there's no point in fretting about it, at least where androids are concerned.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose
