Memristors - the future of fembot technology?

General chat about fembots, technosexual culture or any other ASFR related topics that do not fit into the other categories below.
WilloWisp
Posts: 666
Joined: Tue Aug 26, 2003 8:25 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: An infinite distance away in a direction which can't be described in 3-dimensions.
x 3
Contact:

Post by WilloWisp » Mon Apr 12, 2010 8:24 am

Computing speed will be a necessary leap, but we've also got to tackle a lot of programming issues which don't generally come up for your average software engineer.

Imagine telling your fembot to "do the laundry." Your average software engineering student would get as far as a language parser and AI system to determine what you mean, but would probably end his answer with "which then launches the laundry subroutine."

That laundry subroutine has got to be pretty sophisticated, or require additional hardware which you would be responsible for installing - RFID tags in your clothes, and a reader/receiver in your laundry hamper, for instance. The fembot has to know where the dirty clothes are, be able to identify them visually, navigate to them, collect them, notice if any have been dropped, take them to the laundry room, add the necessary amount of detergent, open a new package of detergent if necessary (and if none is available, remind you to buy more detergent at the earliest possible moment and/or the next time you go to the store), put the clothes in the washer without overloading it, etc. For a human, most of these tasks are trivial. For a computer, these are the kinds of things which make programmers violently ill.
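
To make that concrete, here's a rough Python sketch of what a "laundry subroutine" actually has to cope with. Every name in it is made up, and each one-line stub stands in for what would really be a whole research project (vision, navigation, grasping):

from enum import Enum, auto

class StepResult(Enum):
    OK = auto()
    NEED_HUMAN = auto()   # e.g. out of detergent, washer door jammed

class FembotStub:
    """Stand-in for the real robot; every method here hides enormous complexity."""
    def locate_dirty_clothes(self): return StepResult.OK          # vision + RFID hamper, if installed
    def collect_clothes(self): return StepResult.OK               # navigate, grasp, notice drops
    def carry_to_laundry_room(self): return StepResult.OK
    def load_washer(self): return StepResult.OK                   # don't overload, close the door
    def add_detergent(self): return StepResult.NEED_HUMAN         # out of detergent!
    def start_washer(self): return StepResult.OK
    def notify_owner(self, msg): print("FEMBOT:", msg)

def do_laundry(bot):
    steps = [
        bot.locate_dirty_clothes, bot.collect_clothes, bot.carry_to_laundry_room,
        bot.load_washer, bot.add_detergent, bot.start_washer,
    ]
    for step in steps:
        if step() is StepResult.NEED_HUMAN:
            bot.notify_owner(f"Stuck at '{step.__name__}' - please help")
            return False
    return True

do_laundry(FembotStub())

The point is that "which then launches the laundry subroutine" is where the real work starts, not where it ends: every step needs its own exception cases.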

I foresee that the earliest machines we would recognize as commercial fembots will probably do things like try to wash a roll of paper towels, get trapped behind baby gates, trip over things they dropped, and then get stuck in the laundry room because they aren't programmed with an exception case for "out of detergent" or "can't properly fill/close washer door."

Stephaniebot
Posts: 1918
Joined: Thu Oct 23, 2003 12:13 pm
Technosexuality: Transformation
Identification: Android
Gender: Transgendered
Location: Huddersfield
x 2
Contact:

Post by Stephaniebot » Mon Apr 12, 2010 1:27 pm

Not really appealing to my 'desires', but anything that improves fembot technology has to be a good thing. As for the technological side of it, I haven't a clue.

If I find a robotising scientist, I'll ask his opinion lol!
I'm just a 'girl' who wants to become a fembot, what's wrong with that?

Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Post by Grendizer » Mon Apr 12, 2010 2:06 pm

20 gigs in 1 square cm of logic

Hardware is moving apace, I think. The critical mass will be reached when learning algorithms are perfected - that is, the ability to identify percepts and to filter and categorize them into concepts. I still think Ray Kurzweil is more right than wrong about this: it's going to be easier to design such a system by reverse-engineering the brain than by trying to trial-and-error code the thing. I've said it before -- strong AI won't get here for some time. But perhaps AI will be good enough before then, hopefully in the next fifteen to twenty years, to suit most people here. If Ben Goertzel gets his way, we'll have strong AI in ten years, but I don't think the economic pressures are acute enough for that to happen.

As in my stories, I think the chief pressure for androids is a desire for cheap domestic labor. That pressure is higher in Japan than it is in North America, because Japan is an island nation surrounded by hostile and/or wealthy nations, making migrant labor harder to come by and more costly. We Americans don't need robots to do our dirty work, because we have a long border with a populous, poorer nation that is a major trading partner.

Watch the money. That will tell you everything you need to know. When the pressure is high enough, funding will shift. The human brain is an existence proof that intelligence is designable; the only real question is whether the first successful design will be brain-based.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

Sthurmovik
Posts: 324
Joined: Mon Jul 14, 2003 10:11 pm
Technosexuality: Built and Transformation
Identification: Android
Gender: Male
Contact:

Post by Sthurmovik » Mon Apr 12, 2010 4:52 pm

We don't need the learning algorithms; we need the command-and-control systems and emotions that turn the raw processing into what you know as a person. Once we have that, we'll have to try to code up an AI that isn't either insane or autistic. Training AIs will work a lot like training animals or children. You will have just about as much ability to "programme" a fembot as you do programming your kids. Just about the only advantage will be the ability to snapshot your AI so you can reset it back to a certain point and try again... of course, that's terribly unethical. :(
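
The snapshot idea is easy enough to sketch in toy Python (the class and its methods are purely hypothetical, not any real training framework): checkpoint whatever the AI has learned so a bad stretch of training can be undone and tried again.

import copy

class TrainableAgent:
    def __init__(self):
        self.memory = {}                      # stand-in for learned state/weights

    def learn(self, experience):
        self.memory[experience] = self.memory.get(experience, 0) + 1

    def snapshot(self):
        return copy.deepcopy(self.memory)     # freeze the current state

    def restore(self, checkpoint):
        self.memory = copy.deepcopy(checkpoint)

agent = TrainableAgent()
agent.learn("polite greeting")
good_state = agent.snapshot()                 # known-good point
agent.learn("bad habit")                      # training went wrong here
agent.restore(good_state)                     # reset and try again
print(agent.memory)                           # {'polite greeting': 1}

Whether you're allowed to do that to something with a personality is the ethical question.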

Grendizer
Posts: 175
Joined: Thu Feb 25, 2010 9:24 pm
Technosexuality: Built
Identification: Human
Gender: Male
Location: The Darkside of the Moon
x 2
Contact:

Post by Grendizer » Mon Apr 12, 2010 6:01 pm

Sthurmovik wrote: We don't need the learning algorithms; we need the command-and-control systems and emotions that turn the raw processing into what you know as a person. Once we have that, we'll have to try to code up an AI that isn't either insane or autistic. Training AIs will work a lot like training animals or children. You will have just about as much ability to "programme" a fembot as you do programming your kids. Just about the only advantage will be the ability to snapshot your AI so you can reset it back to a certain point and try again... of course, that's terribly unethical. :(
Do we have AI capable of learning with the same plasticity as humans? I don't mean just mimicry, but the ability to take a concept, put it in context, and then extend it to a different context (for instance, realizing that if falling hurts you, it might hurt some other person who falls, not just you). Also, emotional response (which matters more than the experiencing of emotions, which can't be proven in any case) can be learned by machines, and if you want to replicate the subtlety of human emotional response it probably should be learned -- which will require very good learning algorithms. Hard-coding human-level emotional response would be prohibitively tedious at best, and probably incomplete without the ability to extend and revise from experience (learning). That's not to say that command and control is somehow small stuff, but it's not the only hard problem.

As far as ethics goes, I don't have much trouble with that one, considering the possible downside of an "insane" AI; things change if your mind represents an existential threat to humanity. However, if you are just editing a sentient personality on a whim, that's a different matter.

But from what I know so far of this community, there are many who would be happy with a companion who couldn't really pass the Turing Test reliably, so personality editing would be less of an issue in that case.
If freedom is outlawed, only outlaws will be free.

My Stories: Teacher: Lesson 1, Teacher: Lesson 2, Quick Corruptions, A New Purpose

Steamboy
Posts: 85
Joined: Tue Aug 01, 2006 12:25 pm
Technosexuality: Transformation
Identification: Human
Gender: Male
Contact:

Post by Steamboy » Tue Apr 13, 2010 9:43 pm

Hmmm... it could lead to hardware-based reconfigurable AI - in other words, it could form new connections at the circuit level, like synapses in the brain.

This means programmers wouldn't have to code all the behavior of the computer, since it could learn from the environment...
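
To illustrate what's different about that, here's a toy simulation using the simple linear-drift memristor model (all of the device numbers are just illustrative, not taken from any real part): the device's resistance depends on how much charge has flowed through it, so a "connection" gets stronger simply by being used, with no explicit reprogramming step.

R_ON, R_OFF = 100.0, 16_000.0   # ohms: fully doped / fully undoped
D = 10e-9                       # device thickness in metres
MU = 1e-14                      # dopant mobility, m^2/(V*s)
DT = 1e-3                       # timestep in seconds

def resistance(w):
    return R_ON * (w / D) + R_OFF * (1 - w / D)

w = D / 5                       # internal state: width of the doped region
print(f"before use: {resistance(w):.0f} ohms")

for _ in range(500):            # keep pushing current through the device
    i = 1.0 / resistance(w)     # 1 V applied across it
    w += MU * (R_ON / D) * i * DT
    w = min(max(w, 0.0), D)     # state is physically bounded

print(f"after use:  {resistance(w):.0f} ohms (connection 'strengthened')")

That memory-of-use behaviour is what makes people compare memristors to synapses.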

Sthurmovik
Posts: 324
Joined: Mon Jul 14, 2003 10:11 pm
Technosexuality: Built and Transformation
Identification: Android
Gender: Male
Contact:

Post by Sthurmovik » Wed Apr 14, 2010 5:13 pm

Grendizer wrote: Do we have AI capable of learning with the same plasticity as humans? I don't mean just mimicry, but the ability to take a concept, put it in context, and then extend it to a different context (for instance, realizing that if falling hurts you, it might hurt some other person who falls, not just you).
The pattern-matching algorithms that your brain uses to recognize things, have intuition, respond to stimuli, etc., are probably the best-understood part of AI, and you can already download applications that can be trained to do a pretty good job at limited-case image recognition. What you think of as "learning" doesn't break down the way you assume. Emotion is critical to the way you and all other animals learn. Feelings like pleasure, pain, fear and joy are critical to how the brain learns. Emotion is not some bolt-on option to a biologically modeled AI, but a critical component of the operating system.

What you experience as emotion is the manifestation of what your mind needs to function autonomously. If you fuck up someone's emotional system, they'll become insane. The human brain is basically an emotion-based kernel with a general-purpose supercomputer bolted on. The reason you can relate to other mammals is that we all share the same core programming. The ability to learn quickly is what is bolted on; the emotion is what makes a person a person.

Our ability to study and map the organic brain is advancing at an amazing rate, and copying that is probably our best bet for developing an AI. Millions of years of evolution have produced an amazingly elegant system, and designing a completely new form of intelligence would not only be exceedingly difficult but would probably result in something so alien that we might not be able to relate to it.

I'm speaking from conversations I have had with people involved in AI research, so don't take what I say as 100% gospel, but what I can say for sure is that learning isn't the key issue for AI. It's the control mechanisms that strengthen connections and focus your awareness.
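
To put that in concrete terms, here's a toy sketch (names and numbers are entirely made up) of a learning rule where an "emotional" signal gates how strongly an experience gets written into memory, instead of emotion being a bolt-on:

import numpy as np

weights = np.zeros((3, 3))      # associative connections between units
BASE_RATE = 0.01

def learn(pre, post, salience):
    """Three-factor rule: pre-activity x post-activity x emotional gate."""
    global weights
    weights += BASE_RATE * salience * np.outer(post, pre)

stimulus = np.array([1.0, 0.0, 0.0])
response = np.array([0.0, 1.0, 0.0])

learn(stimulus, response, salience=0.1)    # dull event: barely remembered
print("after dull event:       ", weights[1, 0])
learn(stimulus, response, salience=10.0)   # frightening event: burned in
print("after frightening event:", weights[1, 0])

Same association, same raw learning machinery; the emotional gate decides what actually sticks.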

Here are some links that might be helpful.

http://www.numenta.com/for-developers/e ... nd-htm.php

http://www.numenta.com/for-developers/e ... tation.pdf
Grendizer wrote: But from what I know so far of this community, there are many who would be happy with a companion who couldn't really pass the Turing Test reliably, so personality editing would be less of an issue in that case.
I have no doubt that we will develop some sort of conventionally programmed bots, but don't expect them to be very engaging.
