Computing speed will be a necessary leap, but we've also got to tackle a lot of programming issues that don't generally come up for your average software engineer.
Imagine telling your fembot to "do the laundry." Your average software engineering student would get as far as a language parser and an AI system to determine what you mean, but would probably end his answer with "which then launches the laundry subroutine."
That laundry subroutine has got to be pretty sophisticated, or require additional hardware which you would be responsible for installing - RFID tags in your clothes and a reader/receiver in your laundry hamper, for instance. The fembot has to know where the dirty clothes are, be able to identify them visually, navigate to them, collect them, notice if any have been dropped, take them to the laundry room, add the necessary amount of detergent, open a new package of detergent if necessary (and if none is available, remind you to buy more at the earliest opportunity and/or the next time you go to the store), put the clothes in the washer without overloading it, etc. For a human, most of these tasks are trivial. For a computer, these are the kinds of things that make programmers violently ill.
I foresee that the earliest things that we would recognize as commercial fembots will probably do things like try to wash a roll of paper towels, get trapped behind baby gates, trip over things that they dropped, and then get stuck in the laundry room because they aren't programmed with an exception case for "out of detergent" or "can't properly fill/close washer door."
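A toy sketch of what that "laundry subroutine" might look like once the exception cases are spelled out - every class, method, and failure mode here is hypothetical, invented purely to illustrate the point:

```python
# Illustrative only: a minimal laundry pipeline that surfaces failures
# instead of stalling in the laundry room like the early fembots above.

class OutOfDetergent(Exception):
    pass

class DoorJammed(Exception):
    pass

def do_laundry(world):
    """Run the laundry steps, collecting reminders for anything it can't handle."""
    reminders = []
    clothes = world.locate_dirty_clothes()   # vision + navigation
    world.collect(clothes)                   # grasping, re-checking for drops
    world.navigate_to("laundry_room")
    try:
        world.add_detergent()
    except OutOfDetergent:
        # The case the naive subroutine forgets: note it and stop cleanly.
        reminders.append("buy detergent")
        return reminders
    try:
        world.load_washer(clothes, max_load=0.8)  # don't overload the drum
        world.close_door()
    except DoorJammed:
        reminders.append("washer door won't close; needs human help")
    return reminders

class StubWorld:
    """Stand-in environment where the detergent has run out."""
    def locate_dirty_clothes(self): return ["socks", "shirt"]
    def collect(self, clothes): pass
    def navigate_to(self, room): pass
    def add_detergent(self): raise OutOfDetergent
    def load_washer(self, clothes, max_load): pass
    def close_door(self): pass
```

The interesting part isn't the happy path - it's that every step needs an explicit answer to "and what if that fails?"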
Memristors - the future of fembot technology?
- Grendizer
- Posts: 175
- Joined: Thu Feb 25, 2010 9:24 pm
- Technosexuality: Built
- Identification: Human
- Gender: Male
- Location: The Darkside of the Moon
- x 2
- Contact:
20 gigs in 1 square cm of logic
Hardware is moving apace, I think. The critical mass will be reached when learning algorithms are perfected. That would be the ability to identify percepts and filter and categorize them into concepts. I still think Ray Kurzweil is more right than wrong about this: it's going to be easier to design such a system through reverse-engineering the brain than by just trying to trial-and-error code the thing. I've said it before -- strong AI won't get here for some time. But perhaps it will be good enough before then, hopefully in the next fifteen to twenty years, to suit most people here. If Ben Goertzel gets his way, we'll have strong AI in ten years, but I don't think the economic pressures are acute enough for that to happen.
As in my stories, I think the chief pressure for androids is a desire for cheap domestic labor. That pressure is higher in Japan than it is in North America, because it is an island nation surrounded by enemies and/or wealthy nations, making migrant labor harder to come by and more costly. Americans don't need robots to do our dirty work, because we have a large border with a populous and poorer nation that is a major trading partner.
Watch the money. That will tell you everything you need to know. When the pressure is high enough, funding will shift. The human brain is existence-proof that intelligence is designable; the only real question is whether the first successful design will be brain-based.
If freedom is outlawed, only outlaws will be free.
My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose
- Sthurmovik
- Posts: 324
- Joined: Mon Jul 14, 2003 10:11 pm
- Technosexuality: Built and Transformation
- Identification: Android
- Gender: Male
- Contact:
We don't need the learning algorithms; we need the command and control systems and emotions that turn the raw processing into what you know as a person. Once we have that, then we'll have to try to code up an AI that isn't either insane or autistic. Training AIs will work a lot like training animals or children. You will have just about as much ability to "programme" a fembot as you do programming your kids. Just about the only advantage will be the ability to snapshot your AI so you can reset it back to a certain point and try again... of course, that's terribly unethical.
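The snapshot idea is easy to sketch if the AI's state is just ordinary data - the structure of the state dict here is made up for illustration, not from any real system:

```python
# Hedged sketch of "snapshot and reset": checkpoint the state before a
# risky training run, then roll back if it goes badly.
import copy

ai_state = {"memories": ["first boot"], "mood": 0.5}
snapshot = copy.deepcopy(ai_state)        # take a checkpoint

ai_state["memories"].append("bad training run")
ai_state["mood"] = -0.9

ai_state = copy.deepcopy(snapshot)        # roll back to the checkpoint
```

Whether doing that to something sentient is ethical is exactly the question raised above; technically it's trivial, which is what makes it tempting.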

Sthurmovik wrote: We don't need the learning algorithms, we need the command and control systems and emotions that turn the raw processing into what you know as a person. Once we have that then we'll have to try to code up an AI that isn't either insane or autistic. Training AI's will work a lot like training animals or children. You will have just about as much ability to "programme" a fembot as you do as programming your kids. Just about the only advantage will be the ability to snapshot your AI so you can reset it back to a certain point and try again...of course that's terribly unethical.

We have AI capable of learning with the same plasticity as humans? I don't mean just mimicry, but the ability to take a concept, put it in context, and then extend it to a different context (for instance, realizing that if falling hurts you, it might hurt some other person who falls, not just you). Also, emotional response (which matters more than the experiencing of emotions, which can't be proven in any case) can be learned by machines, and if you want to replicate the subtlety of human emotional response it probably should be learned - which will require very good learning algorithms. Hard-coding human-level emotional response would be prohibitively tedious at best, and probably incomplete without the ability to extend and revise from experience (learning). That's not to say that command and control is somehow small stuff, but it's not the only hard problem.
As far as ethics goes, I don't have much trouble with that one, considering the possible downside of an "insane" AI; things change when the mind in question represents an existential threat to humanity. However, if you are just editing a sentient personality on a whim, that's a different matter.
But from what I know so far of this community, there are many who would be happy with a companion who couldn't reliably pass the Turing Test, so personality editing would be less of an issue in that case.
If freedom is outlawed, only outlaws will be free.
My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose
Grendizer wrote: We have AI capable of learning with the same plasticity as humans? I don't mean just mimicry, but the ability to take a concept, put it in context, and then extend that to a different context (for instance, realizing that if falling hurts you, it might hurt some other person who falls, not just you).

The pattern matching algorithms that your brain uses to recognize things, have intuition, respond to stimulus, etc., are probably the best understood part of AI, and you can download applications that can be trained to do a pretty good job on limited-case image recognition. What you think of as "learning" doesn't break down the way you assume. Emotion is critical to the way you and all other animals learn. Feelings like pleasure, pain, fear and joy are critical to how the brain learns. Emotion is not some bolt-on option to any biologically modeled AI, but a critical component of the operating system.
What you experience as emotion is the manifestation of what your mind needs to function autonomously. If you fuck up someone's emotional system they'll become insane. The human brain is basically an emotion based kernel with a general purpose super computer bolted on. The reason you can relate to other mammals is because we all share the same core programming. The ability to learn quickly is what is bolted on, the emotion is what makes a person a person.
Our ability to study and map the organic brain is advancing at an amazing rate, and copying that is probably our best bet to develop an AI. Millions of years of evolution have produced an amazingly elegant system, and designing a completely new form of intelligence would not only be exceedingly difficult, but would probably result in something so alien that we might not be able to relate to it.
I'm speaking from conversations I have had with people involved in AI research, so don't take what I say as 100% gospel, but what I can say for sure is that learning isn't the key issue for AI. It's the control mechanisms that strengthen connections and focus your awareness.
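As a loose analogy only (my own illustration, not anything from that research), the role being described for emotion is roughly the reward signal in textbook reinforcement learning: a scalar "felt" outcome that strengthens or weakens an association each time it fires.

```python
# Toy value-learning update: an action's stored value drifts toward the
# reward ("emotion") that following it produced. Purely illustrative.

def update_value(value, reward, lr=0.1):
    """Move the stored value of an action toward the felt reward."""
    return value + lr * (reward - value)

# An agent that repeatedly tries "touch_stove" and feels pain (-1.0)
# learns to devalue it; "eat_food" with pleasure (+1.0) gets reinforced.
values = {"touch_stove": 0.0, "eat_food": 0.0}
for _ in range(50):
    values["touch_stove"] = update_value(values["touch_stove"], -1.0)
    values["eat_food"] = update_value(values["eat_food"], +1.0)
```

Strip out the reward signal and the loop has nothing to steer by - which is the sense in which "emotion is the kernel, not a bolt-on."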
Here are some links that might be helpful.
http://www.numenta.com/for-developers/e ... nd-htm.php
http://www.numenta.com/for-developers/e ... tation.pdf
Grendizer wrote: But from what I know so far of this community, there are many who would be happy with a companion who couldn't really pass the Turing Test reliably, therefore personality editing would be less of an issue in that case.

I have no doubt that we will probably develop some sort of conventionally programmed bots, but don't expect them to be very engaging.