Nannybot1000A Part 6a

Share your fembot fiction and fantasies here or discuss the craft of writing by asking for or giving suggestions.
FembotsInCharge3
Posts: 23
Joined: Sun Feb 24, 2008 7:43 pm

Nannybot1000A Part 6a

Post by FembotsInCharge3 » Mon Jul 14, 2008 5:57 pm

"What do you mean, 'ourselves'?" Ted asked in puzzlement. I nodded in agreement; I didn't follow what Dan was getting at, either.

Dan sighed, sinking down into his chair and running a hand over his face tiredly. Then he looked up, sighed again, and said, "It's like this, guys, and please keep in mind that I argued against this, it wasn't my idea, so don't shoot the messenger!"

He laughed a little ruefully, leaving me wondering what could be so dire that our usually calm and cheerful CEO was so nervous.

"The Board feels, or a majority of them do," Dan went on, as Ted and I perched ourselves on the arms of the little sofa in Dan's office, "that since the Nannybot is such a...'delicate'... project, it needs super-duper proof of concept and execution. It's supposed to be able to successfully supervise its charges without immediate oversight, and the Board and the RSO people both think it's not yet proven its ability to do so.

"Oh, they're impressed as all Hell by what they've seen so far," Dan went on, raising a hand to forestall our protests. "No question about it, you two have made a geometric leap in AI performance, robotic engineering, everything about the Nannybot is an order of magnitude advance, no question. In technical terms, the RSO people especially are almost in awe.

"But that's still not good enough, because it hasn't proven it can supervise, unsupervised, if you follow me."

"Not really," I said.

"How much more of a test can we give it?" Ted asked. "It's already been supervising our kids for months, very effectively I might add. It's had as much independence as we could give it."

"No," Dan said slowly, "it hasn't. Not quite. And that's where this idea comes in, and again I repeat it was not my idea."

"Just out with it, Dan!" I said, tiring of the game. I didn't know why he was so reluctant to say it, but this was getting ridiculous!

"Jan, Ted," Dan said slowly, "they want you to do another test run, with your family again, just like before. Except...except that they want INGA to be in complete charge."

"We've already done that!" I protested. "That was the whole point of the last few months, the only people in our family INGA didn't supervise were us!"

"Exactly," Dan replied. "They want another test run...and they want you and Ted to both add your own names to INGA's list of subordinate charges."

The room was silent for a moment, then I started to giggle.

"You can't be serious, Dan! You want us to turn over control of our family, including ourselves, to our own robot?!"

"No," he said, and I thought the joke was over, but then he went on to add, "I don't...but the Board and the RSO do. They aren't kidding, either."

I stopped laughing, as it sank in that Dan was completely serious.

"I don't see how that could possibly work," my husband was saying in disbelief. "It's crazy!"

"I tend to agree," Dan said. "But that's the problem. The Board and RSO are saying that it's this...or nothing. The project will be killed one way or another, either by the RSO or the Board, after they each get done trying to force the other to do the dirty work, and I have to tell you...we've sunk so much money and time into the project by now that it might just be enough to knock Consolidated out of business if that happens."

I sat there in stunned silence. I hadn't realized that the situation, business-wise, had become quite so tight. I knew the last year or so had seen the entire robotics industry take a hit due to the current economic downturn, but I hadn't realized CSR was that heavily invested in this one project.

"I still don't even see how we could physically do it, even if Jan and I were willing!" Ted protested.

"If you agree, it's up to you two to figure out a way to carry out the test that'll satisfy the RSO and the Board," Dan said. "But it's the only shot we have left, if we want to go forward with the project."

TO BE CONTINUED...


Nannybot1000A Part 6a

Post by FembotsInCharge3 » Mon Jul 14, 2008 6:19 pm

"I just don't know," Ted was saying to me, as he and I sat at a table in our favorite little hole-in-the-wall restaurant. The robotic waiter (a CSR model, I remembered designing the processor for that series) took our drink orders, and left us to chat. It was late evening on a rainy night, the soft drumming of the raindrops on the roof of the building adding a quiet counterpoint to our discussion. The restaurant was mostly empty, since it was so late and on a weekday.

It was about a week after Dan dropped his little bombshell on us, about the demands of the Board and the RSO representatives, and our first reaction had been a predictable "Hell No!"

But...we were more or less stuck, really. The RSO and the Board held the upper hand, the company needed this project to work now, and what's more, my pride wanted it to work. I'd put so much time and sweat and thought into the Nannybot that the idea of it going to waste maddened my competitive side. I knew, I knew, that the idea was too good: if CSR was kicked out of the race by the caution of our own Board and the RSO's nervousness, some other company would build on our work, perfect it, and eventually make it a success.

I've always been competitive, I hate to lose, and I knew what lay down that road. I knew Ted felt similarly.

At first we clung to the practical difficulty of the proposed test as our defense, arguing that there was no practical way to do such a test and so we should try something else, but nobody was moved. Also, Ted and I have minds that can't let a technical challenge alone; it's just the way we are, and it's what made us the successes we are in the business.

We couldn't help, once the idea had been proposed, but ponder ways to do it, solutions to the practical problems, and it soon became apparent to us that it could be done. We even thought out the details of how to implement the test; we couldn't help it, the challenge was too enticing.

Which tells you something about engineers, or at least our kind of engineer, because the details we couldn't resist solving were a little like being asked to design a better guillotine for use on ourselves. OK, not that bad, and a little melodramatic, but the same principle applied!

That very morning we'd made a pact to stop thinking about ways to do something we really didn't want to do, but I couldn't help myself.

"We could design the system," I said to Ted, almost reluctantly, guiltily, between drinks of my vodka, "with a one-way safety switch, one with only two states. If we set it up so that we could shut INGA down at any time, but doing so forfeits the test..."

"We'd have a motivation not to back out of this until it was over," Ted nodded. "That way... oh damn, we're doing it again!"

"I know," I laughed ruefully. "And I really meant my promise this morning, I kept it for almost an hour. It's just that..."

"Just that what, Hon?"

"I can't help but think, that they've got a point," I confessed. I paused, waiting for Ted to yell at me or summon the boys in white coats, but he just looked at me and blinked.

"I'm listening," he said. "Go on."

"We've been telling everybody that INGA is fully capable of supervising kids and teens of any age, unsupervised if need be. That's the point. We've told our potential customers that she can be used as a caregiver for invalids, or in nursing homes. We've been bragging that her AI can handle all this...but can you blame people for having their doubts?"

I took a deep, vodka-fueled breath, gathered my nerve, and said, "It would prove that we meant it, you know. Being willing to go to such an extreme length as this would be proof that we have full confidence in our own work."

He nodded thoughtfully. "True, nobody could argue with that if we went through with this."

"What better proof of confidence in our own robot," I went on, half nervous and half daring, "than to let her take charge of us? I hate to admit it, but the Board and the RSO have a point, they want us to put our money, so to speak, where our mouths have been.

"And I think we should do it!" I went on before I could lose my nerve.

Ted looked at me, and then grinned. "I'm not saying I agree with you, Babe, but I'm still listening."

TO BE CONTINUED...


Re: Nannybot1000A Part 6c

Post by FembotsInCharge3 » Fri Jul 18, 2008 7:56 pm

It had been over a month since I'd told Ted I thought we should do the test the Board and the RSO Inspection Agents called for, and it was surprisingly easy for me to talk him into it. Looking back, I realize it was not that I was so persuasive; heck, I was still very reluctant myself! It was simply that Ted, like me, knew that we really had little choice but to go through with this, or else let Consolidated Service Robotics go down the chute to join other once-famous names like Toyota that now lie forgotten in history's dustbin.

So Ted and I set out to design the test parameters for what had to be the weirdest robotics operations test ever performed. It was surprisingly easy to set up; it was mostly a matter of adding some special safety programming and hardware, and then making sure it wasn't too easy to use.

We settled on a set of emergency 'stop words' or codes that either Ted or I could trigger that would instantly shut the INGA unit down, override every system, basically stop the project. But to make sure we only used those if we needed to use them, it was agreed that use of the safety codes would constitute a test failure. Much as I hated it, I could see their point on that issue.

After all, the point of the test was to prove that the Nannybot could be trusted without supervision and would not glitch or malfunction in any serious way. Having to use the emergency stop codes would be proof that our claims of such reliability were, by definition, wrong. So those stop codes were only going to be used if Ted and I felt things had gone badly enough wrong to justify risking our whole company and professional careers. They were called the S.T.O.P. (Sudden Termination Of Process) codes, and they were the highest-priority codes in the Nannybot and all peripheral systems; they could not be overridden...or taken back if used. Ted had STOP authority, I did, and Dan did as well, as an additional safety precaution.

For lesser problems, if any arose, we would be in periodic contact with our staff, who could work on and adjust any minor glitches. If we had done as good a job on the Nannybot as I really believed we had, there would not be many glitches even of the minor sort; she should be able to self-correct and self-maintain that well.

That said, there was the other half of the experimental prep work, the part that again left me feeling like an engineer designing her own guillotine. In the last few days before we started the test, I installed programming in our household computer system to permit the Nannybot to override any instructions Ted or I gave it, and we would have no recourse short of the STOP codes. No lesser instruction would override the Nannybot's orders. I placed similar software in the vehicles and other systems as well.

Ted and I transferred control of the household security and surveillance network to the Nannybot; she already had partial access, and now we unlocked the privacy blocks that kept her from seeing what we were doing in our bedroom and other private areas. We left instructions with our bank that allowed INGA to control our personal finances and accounts, and took comparable steps in other areas.

Finally, the day before the formal start of the test (which was to last a minimum of three years!), I sat down at the Master Programming Terminal in the CSR building, which was linked to the Nannybot's operation core, and entered the necessary access codes to alter her priority lists. I called up one particular list, labeled "NANNYBOT OPERATIONAL PARAMETERS: SUBJECTS OF AUTHORITY".

A list of glowing green names appeared on the flat screen. Four names were listed there, under our family name: STEPHANIE, BRADLEY, STACI, and MARIA. These were the names of my children, then ranging in age from 10 to 17. With a sigh of deep reluctance mixed with a certain scientific curiosity about the outcome of the experiment, I typed two more names into the list of people the Nannybot was allowed and expected to take charge of. I typed in the names THEODORE and JANET and hit the return key, and a moment later, with a trembling finger, I tapped return again in response to the request for confirmation.

One last thing to do: I entered the command to scramble my own and Ted's access codes, so we couldn't give in to temptation and sneak back into the AI matrix and change the rules secretly. Dan and our techs had the new access codes; Ted and I did not. So when I hit that last key, I had intentionally locked us out of our creation's mind. I couldn't undo what I had just done even if I wanted to...and part of me did want to!

Immediately the softly glowing green eyes of the robot lit up, and she stood up from the programming table with inhuman grace and said, "Janet, it's time for us to be going home, it's past 10:00 pm!"

I blinked; the new programming had already started. Before, the robot had always, in accordance with its protocol (except during that one glitch!), addressed me and Ted as Mr. Andrews and Ms. Andrews. Now we were first names, since we were under her authority. A shiver went down my back as that first indicator of my new status was displayed.

As the two of us went home, I wondered just how it would feel to have my own robot, a machine I'd created, in control of me.

TO BE CONTINUED
