How would she learn?
-
- Posts: 369
- Joined: Fri Feb 27, 2004 5:38 pm
- Technosexuality: Built and Transformation
- Identification: Human
- Gender: Male
- x 1
- Contact:
How would she learn?
This question formed in my mind while rereading several of BA's stories. We seem to take for granted that the female robot mind would work in a manner akin to a human's, synthesizing fact and experience to make its next move. However, BA's stories often feature the evolution of the fembot's consciousness through brute-force application of code and reprogramming, often over the strenuous objections of the unit herself. This depiction got me thinking about how an AI truly would evolve, and whether it is possible to make a positronic brain that is, at its core, similar to or exactly like its organic counterpart.
Can a robot mind evolve like a human's, or does each leap in comprehension need to be accompanied by reprogramming? Is it possible, or even desirable, to have a robot who thinks and learns like her creators? And how does the robot view these intrusions that advance her manner of thinking?
I'd love to hear from the rest of the group, because I know that I'm baffled by this question. One could say that I'm as interested in her brain as I am in her body...
Re: How would she learn?
The human mind is also a bunch of 0s and 1s made with ions, a mind-boggling number of them, with countless electrical routes dashing around inside. Our feelings are a bunch of hormones splashing around in the hypothalamus. All mammals have a degree of sentience, because they have a part of the brain dedicated to the self.
Perhaps sentience is less special than we think, and emerges simply as a function of the number of neural connections, so many that some are always firing.
-
- Posts: 336
- Joined: Mon Jul 14, 2003 3:47 pm
- x 30
- x 8
- Contact:
Re: How would she learn?
"General Artificial Intelligence" is currently beyond the understanding of computer science or neurology.
I happen to have some ideas about it.
I've been a bit reticent about talking about this, given that instructions on how to create an intelligent being from a common computer seem about as dangerous as instructions on how to construct an atomic bomb from ingredients available in your kitchen. Still, it might not be any more likely.
So anyway, Jeff Hawkins' book "On Intelligence" is one good starting point for some ideas about intelligence. The key point he makes is that the neocortex, the mammalian brain, essentially involves a single structure, implementing a single algorithm, repeated trillions of times over. This holds for both humans and higher mammals. One would tend to deduce that this is a priority-balancing, information-storing, sequence-predicting, social-game-theoretically-interacting, etc. system. If you can get a handle on this system or an artificial equivalent, you could build something artificially intelligent.
But the thing about this line of reasoning is that it doesn't separate the intelligence of a cat from the intelligence of a person. Not that I don't like cats, but any human being can "feel" that there's a difference between human intelligence and cat intelligence. Cats can create new behaviors, can learn things, can pursue goals, and can be generally sensible; they'll "do something" even in relation to things they don't understand. But the "mental hardware" of a human being differs from a cat's only in that we have a larger neocortex arranged in a particular fashion.
OK, so given this, what I'd claim creates "uniquely human intelligence" is essentially language. Where a cat deals with a single reality, balancing priorities and seeking goals, a human can create an overlay of multiple realities through the manipulation of symbols (a process which is also highly social without us realizing it). It's not just using language as such but "considering multiple possibilities", an approach that uses the manipulation of symbols whether or not we experience it as language.
But the thing is that this manipulation of symbols is not the deterministic symbolic manipulation of our computers but rather a process controlled by the same priority-balancing, goal-seeking, etc. algorithm of the neocortex. We can see this in the ability of a person to "know what you mean" when you make a statement. In contrast, a computer only reacts to "what you say".
So, with all that, if we look at a human being, the baby is born with biological needs, but these are fairly quickly meshed together with social/symbolic constructs, which our person balances according to the complex neocortical algorithm.
So, jumping ahead to assume we've created an artificial equivalent of this neocortical algorithm, the programming of our artificial intelligence would involve setting the priorities of the system and giving orders whose priority we could control. Essentially, we'd program our gynoids by telling them what to do.
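To make that "set priorities, give orders" idea a bit more concrete, here's a very rough Python sketch. Everything in it is hypothetical (the class name, the goals, the numbers are all made up); the only point is that an order can simply be another goal whose priority the owner, not the robot, gets to set.

```python
# A minimal sketch, all names and numbers hypothetical: goals carry priorities,
# an owner's order is just another goal whose priority the owner chooses, and
# the control loop always acts on the most urgent goal left on the heap.
import heapq

class PriorityMind:
    def __init__(self):
        self._goals = []      # min-heap of (negated priority, arrival order, description)
        self._counter = 0     # tie-breaker so equal priorities keep arrival order

    def add_goal(self, description, priority):
        heapq.heappush(self._goals, (-priority, self._counter, description))
        self._counter += 1

    def obey(self, order, priority=10.0):
        # Internally an order is nothing special: one more goal, priority set by the owner.
        self.add_goal("ORDER: " + order, priority)

    def next_action(self):
        if not self._goals:
            return "idle"
        _, _, description = heapq.heappop(self._goals)
        return description

mind = PriorityMind()
mind.add_goal("recharge batteries", priority=3.0)
mind.add_goal("tidy the living room", priority=5.0)
mind.obey("make coffee", priority=8.0)
print(mind.next_action())  # ORDER: make coffee
print(mind.next_action())  # tidy the living room
```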
But anyway, this is just my exploration of the possibilities. If someone wants to program gynoids like computers in fiction, by all means, go ahead. It will be a few years before the real-seeming ones appear, to say the least.
Those are my thoughts on the subject...
- darkbutflashy
- Posts: 783
- Joined: Mon Dec 12, 2005 6:52 am
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Out of my mind
- x 1
- Contact:
Re: How would she learn?
I'd like to go back a step: why does the robot need artificial intelligence at all? Animals evolved intelligence because they have to adapt to environmental conditions that change faster than their reproductive cycle. If we have a robot in a non-changing environment (say, a welding robot on a car-manufacturing line), proper programming will do the trick. So the fembot would only need AI to adapt to her environment; everything else could be pre-programmed. But the result would be not a human-resembling machine but an animal-resembling one (the all-purpose catgirl type). Sure, that fembot would be clearly limited to its domain.
As for reprogramming against strenuous objections, well, that's that kind of story...
Do you like or dislike my ongoing story Battlemachine Ayako? Leave a comment on the story's discussion pages on the wiki or in that thread. Thank you!
Re: How would she learn?
Yeah, well, we are human, we are selfish, we want to create artificial intelligence, as far from us as it might be, to entertain us... just because we want to challenge nature (since she has been kind of a bitch these last millennia), and, darker still, because thinking is also work...
And maybe because flesh might eventually falter... and we have so many dreams yet to fulfill.
-
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: How would she learn?
The idea of a "positronic brain" is actually pure nonsense; it was invented by science fiction writer Isaac Asimov simply because "electronic" didn't sound science-fictiony enough. In real life, attempting to shoot positrons through a computer circuit would be a very, very bad idea, namely because positrons are antimatter, and anyone with a bit of knowledge knows what happens when antimatter comes in contact with normal matter...
- dale coba
- Posts: 1868
- Joined: Wed Jun 05, 2002 9:05 pm
- Technosexuality: Transformation
- Identification: Human
- Gender: Male
- Location: Philadelphia
- x 12
- x 13
Re: How would she learn?
Two distinct problems:
1) Some want a mirror they can dance through, and with.
2) Some want to ask their twins, "Now that you're here, tell us what's the point to life?"
- Dale Coba
-
- Posts: 14
- Joined: Mon Sep 14, 2009 5:49 pm
- Contact:
Re: How would she learn?
Why would we want to make an artificial brain that works just like a natural human brain, when natural human brains are so plentiful?
In my view, the whole point of an artificial brain is that it can be programmed to work exactly the way its creator wants it to work. We wouldn't *want* it to evolve and learn on its own, except in specific ways appropriate to its purpose (e.g. a sexbot figuring out on its own what kind of sex its user wants is good; realizing it doesn't like having sex and would actually prefer to be an actuary is not so good).
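If it helps, here's a toy sketch of the kind of purpose-bounded learning I have in mind. All of the names and numbers are invented for illustration: the unit tunes its preferences from feedback, but it can only ever choose from the menu its maker gave it.

```python
# A toy sketch, everything here is made up: the unit adapts its preferences from
# user feedback, but the menu of options it can ever pick from is fixed at build
# time, so the learning can never wander outside its purpose.
import random

class ConstrainedLearner:
    def __init__(self, allowed_options):
        self.scores = {option: 0.0 for option in allowed_options}  # closed repertoire
        self.counts = {option: 0 for option in allowed_options}

    def choose(self, explore=0.2):
        if random.random() < explore:
            return random.choice(list(self.scores))    # occasionally try something else
        return max(self.scores, key=self.scores.get)   # otherwise do what rates best

    def feedback(self, option, rating):
        # Running average of the user's ratings; preferences shift, goals never do.
        self.counts[option] += 1
        self.scores[option] += (rating - self.scores[option]) / self.counts[option]

learner = ConstrainedLearner(["option_a", "option_b", "option_c"])
for _ in range(200):
    pick = learner.choose()
    learner.feedback(pick, rating=1.0 if pick == "option_b" else 0.2)
print(learner.choose(explore=0.0))  # almost always settles on option_b
```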
-
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: How would she learn?
I completely disagree.
Enchanter wrote: Why would we want to make an artificial brain that works just like a natural human brain, when natural human brains are so plentiful?
In my view, the whole point of an artificial brain is that it can be programmed to work exactly the way its creator wants it to work. We wouldn't *want* it to evolve and learn on its own, except in specific ways appropriate to its purpose (e.g. a sexbot figuring out on its own what kind of sex its user wants is good; realizing it doesn't like having sex and would actually prefer to be an actuary is not so good).
-
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: How would she learn?
Well, I would obviously prefer a sapient partner I could have a real relationship with.
- Saya
- Fembot Central Staff
- Posts: 421
- Joined: Sat May 07, 2011 5:04 pm
- Technosexuality: Built
- Identification: Android
- Gender: Female
- Location: Right here, silly.
- x 14
- x 12
- Contact:
Re: How would she learn?
Some learning capacity will be necessary in order to perform the tasks that even the least sentient gynoid is typically depicted as performing, because even that is leaps ahead of standard AI technology. Considering the numerous tasks within the home that a domestic gynoid would be called upon to perform, as well as the fact that homes are non-standard in the equipment they use, a gynoid would likely have to have some form of adaptive programming capable of "learning" at some level. And that is the most basic of reasons. I don't think an owner or a manufacturer would want to program each possible task individually into each unit, because they would probably die of old age before they got it done, or exceed the memory limit of the CPU trying to program in all those parameters (I hate to use this, but if you want an example of what I am getting at, think of RoboCop 2 and the over-programming he gets). Thus, you would need a robot that would "learn" from "experience". In fact, it might even be possible to "teach" said robot. Remember, teaching an organism a task to repeat does not mean that the organism will suddenly become sentient, or we would already have dogs quoting Shakespeare.
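To illustrate the difference between "teaching" a unit and programming in every task by hand, here is a toy sketch (all names invented): the owner demonstrates a routine once, the unit records the steps, and replaying them later is "learning from experience" with no sentience required.

```python
# A toy sketch with invented names: the owner demonstrates a routine once, the
# unit records the step sequence, and replaying it later counts as "learning
# from experience" without any self-awareness being involved.
class DomesticUnit:
    def __init__(self):
        self.routines = {}        # task name -> list of recorded steps
        self._recording = None

    def start_teaching(self, task):
        self._recording = (task, [])

    def demonstrate(self, step):
        if self._recording is None:
            raise RuntimeError("not in teaching mode")
        self._recording[1].append(step)

    def finish_teaching(self):
        task, steps = self._recording
        self.routines[task] = steps   # new behaviour acquired without reprogramming
        self._recording = None

    def perform(self, task):
        for step in self.routines.get(task, ["report: I have not been taught this task"]):
            print(step)

unit = DomesticUnit()
unit.start_teaching("start the washing machine")
unit.demonstrate("open lid")
unit.demonstrate("add detergent")
unit.demonstrate("select 40C cotton cycle")
unit.finish_teaching()
unit.perform("start the washing machine")
```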
So at this point, the question is not, in my opinion, whether the robot can learn. It would have to. But the question we now have to ask is whether it can develop its own self-awareness and whether it should, which I am sure has been the focus of three or four topics already. And to answer that question, one needs to answer numerous other questions. Chief among them:
1. Would it impede the robot's function? (An emotional companion model would probably benefit, especially if used as a teaching aide for people with social developmental disabilities. But a combat robot that questions the morality of its orders?)
2. Is it moral to do so? (Is it moral to make a pleasure model emotional? How might it perceive its function? Would the outcome even be predictable?)
3. And the most important question. Can the hardware even replicate sentience or create true sentience?
And a good chunk of the question is a personal one, which can only be answered on the individual level. It's basically an opinion, and thus there is no "right" answer. And, er, sorry if I have gone off on a tangent.
"If the time should ever come when what is now called science, thus familiarized to men, shall be ready to put on, as it were, a form of flesh and blood, the Poet will lend his divine spirit to aid the transfiguration, and will welcome the Being thus produced, as a dear and genuine inmate of the household of man."
- William Wordsworth
Re: How would she learn?
Hi Saya,
I don't have answers either, or I would be winning the Nobel Prize.
But...
Animal sentience... well... elephants mourn and bury their dead, and so on...
http://www.youtube.com/watch?v=joUactmW ... re=related
- Saya
- Fembot Central Staff
- Posts: 421
- Joined: Sat May 07, 2011 5:04 pm
- Technosexuality: Built
- Identification: Android
- Gender: Female
- Location: Right here, silly.
- x 14
- x 12
- Contact:
Re: How would she learn?
Well, what I mean to say is that people tend to assume that if a robot is capable of learning, it will develop sentience without a doubt. But plenty of animals learn, and none are what I or any biologist would call self-aware, with the POSSIBLE exception of the great apes, specifically chimpanzees. Any robot intelligence would be limited by the hardware that contains the software. The software itself could be limitless in its development, but it is still confined within the parameters of the hardware. It is the same with animals, really. Humans tend to view intelligence in terms of how it relates to themselves, because a human being can just pick up a book and learn. But a human being is unique in that it has the logical capabilities to understand the words in the book and the meaning behind them. No other animal is capable of this.
"If the time should ever come when what is now called science, thus familiarized to men, shall be ready to put on, as it were, a form of flesh and blood, the Poet will lend his divine spirit to aid the transfiguration, and will welcome the Being thus produced, as a dear and genuine inmate of the household of man."
- William Wordsworth
-
- Posts: 14
- Joined: Mon Sep 14, 2009 5:49 pm
- Contact:
Re: How would she learn?
Asato wrote: Well I would obviously prefer a sapient partner who I could have a real relationship with
Sure, me too. But why an android? Don't get me wrong; I love the idea of an android/gynoid/fembot/what-have-you, but to me, that's not the sapient partner to have a real relationship with; that's the toy to enjoy playing with or the tool to make my life easier. For a real relationship, I'll choose a human being.
-
- Posts: 170
- Joined: Thu May 12, 2011 10:59 am
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Contact:
Re: How would she learn?
That's just what I'm into. Part of it could be the element of forbidden love.
- gynoneko
- Posts: 918
- Joined: Thu Oct 21, 2010 1:42 pm
- Technosexuality: Built
- Identification: Cyborg
- Gender: Male
- Location: In the not too distant future
- x 2
- x 61
- Contact:
Re: How would she learn?
I like the idea that various sci-fi stories have come up with: essentially, life will find a way. As Malcolm said in Jurassic Park, you cannot ultimately control life, something that has a chaotic nature; it will adapt and evolve in order to succeed. In Tron, the idea was that if the conditions are right, life will find a way: just by creating the environment, a new digital life form presented itself to them. While this is a lot of sci-fi mumbo jumbo, it does have a basis in science. Scientists have discovered living beings in places where it was thought nothing could survive; somehow they found a way to live, adapt, and evolve in that environment. Even after the extinction of the dinosaurs, mammals, which were already around, adapted and evolved and avoided the same fate, and those dinosaurs that did survive turned into other things (like crocodiles and birds). Regardless, the idea that appeals to me is that if you can create an artificial mind that is complex enough, whether it uses traditional electronics or something more advanced, it will find a way to become sentient, or at least to adapt to its environment and strive to survive on its own.
I think that if we can make something that advanced, we really will find some interesting new developments, and it may be possible to create an artificial being that is self-aware. Give us 30 years... Technology will catch up to fantasy, as it always does.
My heart and soul locked up in a cold steel frame
- Brian May
- xodar
- Posts: 532
- Joined: Thu Nov 24, 2005 1:53 pm
- Location: South Texas
- x 1
- Contact:
Re: How would she learn?
Enchanter wrote:Why would we want to make an artificial brain that works just like a natural human brain, when natural human brains are so plentiful?
In my view, the whole point of an artificial brain is that it can be programmed to work exactly the way its creator wants it to work. We wouldn't *want* it to evolve and learn on its own, except in specific ways appropriate to its purpose (e.g. a sexbot figuring out on its own what kind of sex its user wants is good; realizing it doesn't like having sex and would actually prefer to be an actuary is not so good).
Good idea.
I'd guess that once a type of brain is built and can be copied and improved you might want to experiment with various changes just to see what happens.
Could you build one that was nuts, that appeared to be undergoing internal subjective pain and trying to deal with it? Maybe you could create some with contradictory programs that might struggle for predominance. Maybe you could add some stimulus that causes it to self-destruct, or a program that picks such a stimulus without your knowing it and then destroys the bot if the stimulus recurs. Or a different software program that responds to some situation, giving it a different personality entirely. Maybe one that causes it to act like a fish.
In some of those cases you could have the original program/personality somehow overcome and then you try to figure out how to behave to restore it....
With such experiments you can gradually create an alien intelligent mind or try to duplicate one such as an octopus or raven.
"You can believe me, because I never lie and I'm always right." -- George Leroy Tirebiter.
If a tree falls in the forest and there's nobody there to hear it I don't give a rat's ass.
http://www.bbotw.com/product.aspx?ISBN=0-7414-4384-8
http://www.bbotw.com/description.asp?ISBN=0-7414-2058-9