AI or SI?

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.
Post Reply

AI or SI?

Artificial Intelligence.
17
38%
Simulated Intelligence.
28
62%
 
Total votes: 45

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Mon Aug 01, 2016 11:58 pm

Svengli wrote:


Hmm,

Within the domain of fembot fetishism, there's a spectrum of likes.

Some like surface mechanical appearances - panels, plastic skin, metal skeletons, metallic skin, etc.
Some like a degree of mechanical acting - rigid motion, malfunction.
Some like the tendency/ability to alternate between seeming a person and seeming a robot.
Some like the ability to program and control a fembot.

So at one extreme, some might be satisfied with the better-functioning robot sex dolls that look like they might appear with incremental progress, and at the other extreme, some would like robots that are containers for a downloaded human consciousness or a simulated equivalent, with a variety of intermediate forms (say Stepford Wives, biomechanical devices, Westworld Hosts, etc.).

The thing about convincingness is that someone might want a fembot that would be indefinitely convincing as a person or person-equivalent, and some might want a fembot that was convincing for a "scene" - well, basically a programmable person.

Maybe simulated intelligence would be a fembot that's only provisionally convincing, where what people are calling an "artificial intelligence" would be a fembot never distinguishable from a human (i.e., one that always has the potential to "make their own choices"). Suit yourselves, though I've mentioned I think AI should be anything that can solve any problem a human can solve, without regard to the "free will" question.
Perhaps, though that feels like a poor sort of intelligence to me - one that's only marginally better than our current chatbots and faltering attempts at artificial intelligence. It could solve all of those problems just fine and yet have no genuine comprehension of what those problems actually are or what impact the solutions have. Such a thing would be non-human in the worst way, viewing the universe as nothing but a mix of inputs and outputs with no real ability to relate to any of it.

Myself, I want more of a synthesis between the fully human and the fully inhuman - the type of fembot who knows for a fact she is not and never will be a "real" human but chooses to be as much like one as she can manage, in spite of any inherent limitations she might have in that respect.

The programmable-human aspect, on the other hand, is one I find myself especially struggling to wrap my head around. I can understand intellectually why it may be appealing, but from my viewpoint it's little better than a form of slavery, or worse, because it involves fundamentally altering the mind of an entity that is either sentient or very close to being so.

Svengli
Posts: 331
Joined: Mon Jul 14, 2003 3:47 pm
x 25
x 4
Contact:

Re: AI or SI?

Post by Svengli » Tue Aug 02, 2016 6:07 pm

Esleeper wrote: The programmable-human aspect, on the other hand, is one I find myself especially struggling to wrap my head around. I can understand intellectually why it may be appealing, but from my viewpoint it's little better than a form of slavery, or worse, because it involves fundamentally altering the mind of an entity that is either sentient or very close to being so.
Some background:
We humans basically have experience with a few categories of things: tools/computers, animals, and people. Most of the entities we encounter fit into exactly one of these categories. Human beings and animals have continuous and intermittent needs and desires - they need to breathe, they need to eat, they usually desire some forms of sexual activity as well as stimulation, novelty, "choice", etc. Tools and inanimate objects usually don't have needs, and if they do have needs (say, a car needs gasoline) they generally don't act to meet those needs; so far, inanimate objects don't have desires.

Even more, we humans "relate" to other human beings: we care about each other, defend each other, and wish to maintain our freedom - essentially, we mutually recognize each other's needs and desires (to some extent). That's natural and healthy for a human being.

So, given the limits of our cognitive systems and categories, it's natural that if something doesn't fit into the category "tool", we tend to think of it as something else - either person or animal. Hence the assumption that if a computer "becomes intelligent enough", it will become like a human. Hence the "Eliza Effect", where people attribute human qualities to certain kinds of computer programs, even those lacking intelligence.

However, I would claim that there is nothing that requires or mandates that an "intelligent" computer program would gain the quality of having its own agenda or anything like human desires or human or animal reflexes to meet those needs or desires. Essentially, it is possible to build a "tool intelligence" - an intelligence that essentially acts to satisfy a human's desires without having its own desires.

The thing is, we human beings really are approaching the prospect of being able to create intelligent entities, and this promises numerous problems and paradoxes.

My caveat here is that a "tool intelligence" does not need to be a "sentient being" reduced to slavery; it can just be a tool, even if it is able to understand the desires of the human being who uses it.

Fembots are our interest, but the same questions come up with "robot servants", which implicitly "everyone" wants. And I'd say the "ethics" of creating robot servants is rather the opposite of the way you put it. If I create an extension of a tool intelligence, which I understand is merely simulating a person's reactions for some purpose and some period, I have not created something that I feel an obligation to, and neither have I created something that is ever going to "feel" I have an obligation to it. I have simply created a tool. Essentially, we are talking about an extension of pornography. And this invites all the objections people make to pornography (which don't hold up to scientific scrutiny, etc.).

On the other hand, if someone were to create a "sentient" being, one with its own needs and desires, which I and other humans would naturally respect, there are a variety of challenges. Being essentially autonomous, we'd have the obligation to make such a thing "sane" and capable of happiness, and to expect that it could be accepted by a society of sentient beings (since an autonomous sentient being that did not care about other sentient beings would be problematic). All humans would have obligations to such a thing, just as we have obligations to each other - an obligation, for example, not to fill habitable space with such beings in a way that prevents any of them from satisfying their desires (overpopulation of a world of immortal beings is something to consider). The creator of a sentient being would be something akin to a parent, though figuring out exactly what that means would be a challenge. And creating a sentient being wouldn't be creating a mere servant, so we'd still have the question of how to automate our activities.

So those are the issues as I see them - I may add more when I have time.

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Thu Aug 04, 2016 9:28 pm

Well, as far as I can see it, such a "tool intelligence" would barely be worthy of being called an intelligence at all - at worst, calling it "intelligence" would be an insult to the word. At best, it would have the bare minimum of intellect needed to follow orders, but in every other way it would be absolutely mindless. It would have no desires because it lacks the intelligence to even have desires. Besides, why would a tool even need intelligence to do its job in the first place? The only thing it needs is a rigid set of rules to follow ad nauseam, like current industrial robots.

And I don't think I made it clear earlier, but if you want fembots to simulate having desires of their own (even if it's as simple as the desire to please, which would itself require them to learn what pleases a given person), then the only way to do so is to actually give them those desires, which is beyond mere tool intelligence. Sure, it can imitate them for brief periods of time, but the lack of authenticity must inevitably show through in all but the most bleeding-edge cases, in which case it may as well be the real thing, inasmuch as we can't tell for sure whether other humans feel emotions either.

All that aside, I feel the tool intelligence idea just doesn't work when it's applied to entities whose sole purpose for existence is to be imitation humans. Without those desires, they'd be capable of little more than sleepwalking through existence or standing idle waiting for orders - which is all the more blatant since in 90% of their appearances, they're referred to specifically as people and not just tools. And since such beings have desires and discernible personalities, I can only call it slavery when one of those personalities is forcibly overwritten because someone else wills it.

But to jump back to the subject of the topic, how would you even test whether a given fembot was an AI or an SI? I've already discussed how they'd be functionally impossible to tell apart unless the latter possessed no more intelligence than a Roomba.

dieur
Posts: 199
Joined: Thu Jul 17, 2003 10:40 pm
x 4
x 8
Contact:

Re: AI or SI?

Post by dieur » Tue Aug 09, 2016 11:21 am

Opinion:
The leap to a cognitive AI - an AI that produces its own goals and can evaluate them - remains unexplored ground. Where Svengli is probably wrong is that a CAI isn't just a machine learning tool that has gotten "intelligent enough". It's fundamentally designed in a different way. Google recently fed their huge library of books into a neural net that distilled all words into points in a ~350-dimensional space based on their syntactical associations. The output is pretty cool. You can ask questions like "He is to She as Count is to ?" and it will answer Countess. But it's still just pattern matching. It's saying "Well, among all the things I've seen, 'he' shows up in similar positions to 'she' in the same places that 'count' shows up in similar positions to 'countess'." It's an abstract understanding, to be sure, and maybe a good thing to plug into a larger system that does self-direct. But by itself, it's not much of a step towards that self-direction.
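The analogy arithmetic described here can be sketched with toy vectors. This is purely illustrative: the 3-dimensional "embeddings" below are hand-picked so the arithmetic works out, whereas a real model like word2vec learns hundreds of dimensions from corpus statistics.

```python
import numpy as np

# Hypothetical toy embeddings (hand-chosen, not learned from any corpus).
vecs = {
    "he":       np.array([1.0, 0.0, 0.0]),
    "she":      np.array([1.0, 1.0, 0.0]),
    "count":    np.array([0.0, 0.0, 1.0]),
    "countess": np.array([0.0, 1.0, 1.0]),
    "car":      np.array([0.0, 0.0, -1.0]),
}

def cosine(u, v):
    # Cosine similarity: 1.0 means same direction, -1.0 opposite.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' by nearest neighbour to b - a + c."""
    target = vecs[b] - vecs[a] + vecs[c]
    candidates = (w for w in vecs if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vecs[w], target))

print(analogy("he", "she", "count"))  # → countess
```

The point of the sketch is exactly the one made above: the "answer" falls out of vector arithmetic over positional statistics, with no self-direction anywhere in sight.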

When we finally figure out how to create cognitive AI at a human level of complexity that drives itself, we will have some ability to control what goals it attempts to accomplish. (Only some ability - since the overall complexity is above what a human can design from scratch, we'd probably rely on growth from a nucleus, just as nature does with children.) Svengli is right that we are under no obligation to give it goals in alignment with human ones at all. But such an intelligence will then be alien. Not alien in the always-cute anime sense of "cold and distant until someone loves it and then it learns to become human". Rather, alien in the "I really love turning things into cars. I will do my best to turn everything into cars! I just figured out how to build cars out of people. Why are the foolish humans trying to stop me, do they not understand the importance of making cars?????!?" sense.

These alien intelligences also won't be very good at emulating a companion, unless:
A) They get SO smart that they can fully comprehend what a human wants, and can emulate it entirely. In the same way that today, we could probably make a robotic ant that just totally knocks the socks off of any ant it meets. With such an ant, we don't have to worry about abusing our creation; the creation has no independent feelings. Its entire existence is a subset of the true drives of our vastly greater human intelligence. So that's just awesome, if the vastly-more-intelligent computer was successfully designed to take care of us. Not so great if it's just fooling us until the probability of successfully turning us all into cars exceeds some threshold.

B) The intelligence is built sufficiently close to human lines that it can understand the parallels between our drives and its own. You want it to understand human curiosity? Sex drive? Companionship? It must also need explanation, sexual gratification, and company.

And herein lies the problem. Something built along the lines of B is, essentially, human. We have no obligation to give it rights or consideration, but we have no obligation to give each other rights or consideration either.
We could maybe tweak the formula, such that they were particularly passive or obedient or patient. But we can do that with humans too, to be honest, even with today's technology.

So as much as the fiction of some college fembot gasping in shock/surprise as her rival messes with the big obvious control panel on her back appeals to my sense of control/vulnerability, I could never see something like that being allowed to happen ethically in reality.

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Wed Aug 10, 2016 10:31 am

I get what you say about the potential for an alien intelligence in that sense, but by definition it seems like it would be unnecessary at best and dangerous at worst. What would a machine whose sole purpose is to make cars need to be intelligent for in the first place? If anything, it would be more prudent to make them just barely smart enough to understand orders in the most basic, literal manner or compel them to stick to a single, predefined routine. In your example, if the car-making intelligence ran out of materials it would simply wait for more to arrive instead of acting on the initiative it was never given.

No, it's far better that AI be designed specifically with human interaction in mind, or at least given only the degree of intelligence needed to interact with humans when doing so is necessary for the task it was made to do. An old saying comes to mind here: "if you want to eat a steak, you don't cook the whole cow". The simplest way to keep cognitive AI from acting up is to make it only as smart as it needs to be.

And quite frankly, what's wrong with making something like Option B and then simply letting it exist as an equal to us? Oh sure, it might become a homicidal maniac eventually- but the same can be said of any human too. More importantly, the very process of effectively creating a mind similar but not identical to our own could shed a lot of light on how our own minds work.

dieur
Posts: 199
Joined: Thu Jul 17, 2003 10:40 pm
x 4
x 8
Contact:

Re: AI or SI?

Post by dieur » Tue Aug 16, 2016 6:50 pm

I agree... in a story I never posted here, I made reference to a fictional accord that required all true AI to be given a human-like body, in order to have a human-like existence. It made a good excuse anyway.

The people who are really terrified of cognitive AI fear an AI given the directive to improve AI the most. Say North Korea concludes the best way to get a technological leg up is to build a CAI that makes better CAIs with the same directive, up to the point where it's smart enough to instantly close the technology gap. Assuming such a device could go through cycles quickly (and I personally doubt that very much - I suspect it's as difficult for a CAI to build something more complex than itself as it is for us), the CAI might quickly see North Korea's plan to hijack the process for its own ends as an impediment to be removed. Next thing we know, the world is swarming with nanobots building an ever smarter computer out of everything they can break down.

User avatar
Propman
Posts: 324
Joined: Tue Jun 15, 2004 12:42 am
Technosexuality: Built
Identification: Human
Gender: Male
Location: East of Berlin, West of Moscow
x 1
x 14
Contact:

Re: AI or SI?

Post by Propman » Tue Aug 16, 2016 10:44 pm

Esleeper wrote:And quite frankly, what's wrong with making something like Option B and then simply letting it exist as an equal to us? Oh sure, it might become a homicidal maniac eventually- but the same can be said of any human too. More importantly, the very process of effectively creating a mind similar but not identical to our own could shed a lot of light on how our own minds work.
The thing is, we create machines for a particular purpose. If what we need is essentially a human, why even create a machine in the first place? I mean, there would still be reasons to use robots (not necessarily androids), just not widespread ones.

Age_Of_Information
Posts: 3
Joined: Fri Aug 19, 2016 7:28 am
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Age_Of_Information » Tue Aug 23, 2016 12:39 am

Such a hard choice lol

User avatar
Murotsu
Posts: 230
Joined: Sun Apr 17, 2016 10:47 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Murotsu » Tue Aug 23, 2016 7:02 pm

How would you determine the difference? Are emotions a requirement for AI or SI? Also, self-awareness does not necessarily require free will.

What about a "group" or hive mind? A being in one could be self-aware while still being part of the "collective."

If both can pass, say, a Turing Test, is there really a difference?

Or, what if the intent was to make an AI with the intellect of, say, a parrot or a dog? Both are clearly self-aware. Both are intelligent. If that's enough for the purpose...

Personally, I think that, done right, they're equal and interchangeable.

--NightBattery--

Re: AI or SI?

Post by --NightBattery-- » Tue Aug 23, 2016 10:07 pm

For me, "simulated intelligence" suggests far fewer degrees of freedom than "artificial intelligence".

I picture that if there were a robot designed to drink tea with you, the one with simulated intelligence would follow a script, requiring input to respond, while one with artificial intelligence would be expected even to lead you in the tea ritual and be capable of seeing that everything is going smoothly. That's how I see it.
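That script-versus-initiative distinction can be sketched as a toy program. Everything here is invented for illustration (the class names, the canned lines, the tea "ritual" steps); the point is only that one agent purely reacts to input while the other advances a goal unprompted.

```python
class ScriptedSI:
    """Reacts only when given input, via a fixed lookup table - the 'script'."""
    SCRIPT = {
        "hello": "Hello. Would you like tea?",
        "yes":   "Pouring tea.",
    }

    def respond(self, utterance):
        return self.SCRIPT.get(utterance, "I do not understand.")


class GoalDrivenAI:
    """Holds a goal (a finished tea ritual) and acts without being prompted."""
    STEPS = ["boil water", "warm the pot", "steep leaves", "pour tea"]

    def __init__(self):
        self.done = []

    def act(self):
        # Takes the next step toward its goal on its own initiative.
        if len(self.done) < len(self.STEPS):
            step = self.STEPS[len(self.done)]
            self.done.append(step)
            return f"I will now {step}."
        return "The tea ritual is complete."


si = ScriptedSI()
ai = GoalDrivenAI()
print(si.respond("hello"))   # canned reply; says nothing until spoken to
for _ in range(5):
    print(ai.act())          # leads the ritual unprompted, step by step
```

The SI's entire behaviour is its lookup table; anything off-script falls through to a default. The AI is hardly smarter, but it self-directs toward its goal, which is the difference the post is pointing at.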

User avatar
Stephaniebot
Posts: 1918
Joined: Thu Oct 23, 2003 12:13 pm
Technosexuality: Transformation
Identification: Android
Gender: Transgendered
Location: Huddersfield
x 2
Contact:

Re: AI or SI?

Post by Stephaniebot » Tue Aug 23, 2016 11:12 pm

Murotsu wrote:How would you determine the difference? Are emotions a requirement for AI or SI? Also, self-awareness does not necessarily require free will.

What about a "group" or hive mind? A being in one could be self-aware while still being part of the "collective."
As far as I know, we don't yet have a genuine hive mind collective, but if we do, let them know about me, please. In theory, it's an amazing concept, but in practice, linking even 2 human minds to think alike, let alone far more, looks way beyond us at present. As for robots, even a little thing like signal interference would make this complex, I suspect?

I'm sure collective minds, of both varieties will come in time, especially the fembot one, but for now, we can only wish...
I'm just a 'girl' who wants to become a fembot whats wrong with that?

User avatar
Murotsu
Posts: 230
Joined: Sun Apr 17, 2016 10:47 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Murotsu » Tue Aug 23, 2016 11:37 pm

Stephaniebot wrote:
Murotsu wrote:How would you determine the difference? Are emotions a requirement for AI or SI? Also, self-awareness does not necessarily require free will.

What about a "group" or hive mind? A being in one could be self-aware while still being part of the "collective."
As far as I know, we don't yet have a genuine hive mind collective, but if we do, let them know about me, please. In theory, it's an amazing concept, but in practice, linking even 2 human minds to think alike, let alone far more, looks way beyond us at present. As for robots, even a little thing like signal interference would make this complex, I suspect?

I'm sure collective minds, of both varieties will come in time, especially the fembot one, but for now, we can only wish...
We do in a sense. It is called the internet. While that is a very crude hive mind, for the first time in history you, me, anybody, can interact instantaneously with any number of people around the world. Get a group of like-minded people together on a subject and suddenly for the first time they can bring together masses of information and sources and come to a "best" conclusion. It is almost the human thought process replicated.

Signal interference can be dealt with in a number of ways. It is no different from military jamming of electronics and the countermeasures to that: narrower gain, bandpass filtering, or notch filters do the job. For example, in the days of film and tape cassettes, Dolby noise reduction did this by sorting out most of the "background" noise, like tape "hiss". Sure, there was some signal loss, but the receiver could still make out the intended information easily, and the interference was gone.
How you process the signal can also make a difference. Suppose noise is injected into a Doppler signal: since the noise is static in frequency, the receiver ignores it entirely. Only a "moving" signal (that is, one that is changing in frequency continuously) is considered. By adding upper and lower limits to the shift, you make it harder to inject a false signal.
Or you can use multiple frequencies, with the signal "jumping" frequencies continuously. Now a jammer has to take out a wide range of signals to affect things - much more difficult, much more power required.
Or change the gain and get "burn through".

Think about this site. We are all of reasonably like mind on the subject at hand. At least, we have an interest in it. We discuss it at length. We take polls, and come to conclusions about things related to the subject of fembots. We also introduce each other to ideas and information we likely otherwise would never see. If a troll or three were to try and disrupt that we'd react to that situation and filter them out.
That goes for all the things you discuss on the internet. Sure, it's a very crude hive-mind situation, but it is one. With time it will evolve into something more refined. Think of it as the machinery of the Industrial Revolution, say, 200 years ago: primitive by today's standards but doing essentially the same job in many cases, just unrefined and with poor accuracy compared to today.
The electronics revolution of today is advancing far faster. But, we have no way to accurately predict when AI or SI will match human intelligence. There is plenty written on when the Singularity might occur, but I'm not going to put money on that. Like psychics and astrologers, the futurists are usually wrong except by luck.
It will happen though. You just have to last long enough to reach it... :)

User avatar
Stephaniebot
Posts: 1918
Joined: Thu Oct 23, 2003 12:13 pm
Technosexuality: Transformation
Identification: Android
Gender: Transgendered
Location: Huddersfield
x 2
Contact:

Re: AI or SI?

Post by Stephaniebot » Wed Aug 24, 2016 11:14 am

Given I'm 58 already, I doubt I will last long enough, to be honest.
As to a hive mind here, have you never spotted the creation v transformation issues we split on, on here? :lol:
I'm just a 'girl' who wants to become a fembot whats wrong with that?

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Fri Aug 26, 2016 6:53 am

Stephaniebot wrote:Given I'm 58 already, I doubt I will last long enough, to be honest.
As to a hive mind here, have you never spotted the creation v transformation issues we split on, on here? :lol:
To say nothing of this very topic.

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Fri Aug 26, 2016 6:57 am

Murotsu wrote:How would you determine the difference? Are emotions a requirement for AI or SI? Also, self-awareness does not necessarily require free will.

What about a "group" or hive mind? A being in one could be self-aware while still being part of the "collective."

If both can pass, say, a Turing Test, is there really a difference?

Or, what if the intent was to make an AI with the intellect of say a parrot or dog? Both are clearly self-aware. Both are intelligent. If that's enough for the purpose...

Personally, I think that, done right, they're equal and interchangeable.
The Turing test is a measure of human gullibility, not artificial intelligence.

Plus, I figure most beings that were self-aware but lacked free will would either go insane with the awareness that they were simply slaves or be incapable of acting on their own initiative. As you describe it, an AI acts, but an SI can only react.

User avatar
Murotsu
Posts: 230
Joined: Sun Apr 17, 2016 10:47 pm
Technosexuality: Built and Transformation
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Murotsu » Sat Aug 27, 2016 12:58 am

You have never owned a dog have you...?

Esleeper
Posts: 96
Joined: Fri Mar 11, 2016 6:48 am
Technosexuality: Built
Identification: Human
Gender: Male
Contact:

Re: AI or SI?

Post by Esleeper » Sun Aug 28, 2016 6:14 am

Murotsu wrote:You have never owned a dog have you...?
I have, and anyone who owns a dog can tell you that their emotions are anything but fake. Still, dogs lack the sentience and intelligence needed for true free will and self-awareness of the kind an AI would need to possess. An AI with intelligence akin to a dog's would be little better than what passes for AI today.

User avatar
Mixgull
Posts: 136
Joined: Fri Sep 09, 2016 9:23 pm
Technosexuality: Built
Identification: Human
Gender: Male
x 2
x 8
Contact:

Re: AI or SI?

Post by Mixgull » Thu Sep 15, 2016 5:44 pm

I like a variation where the AI has emotions but she talks like a robot (reciting statistics, analysis, etc.).
