
Re: AI in FT (was Re: Be gentle...)

From: Allan Goodall <agoodall@s...>
Date: Mon, 14 Jul 1997 22:28:10 -0400
Subject: Re: AI in FT (was Re: Be gentle...)

At 02:32 PM 7/13/97 -0600, Chen-Song Qin wrote:

>Okay this can be true.  But I'm just wondering why you'd send this kind
>of intelligent machine into battle and have them fight for you.

Well, you wouldn't do so if this were a rare, one-of-a-kind machine.

>That'll amount to slavery of sentient beings. (besides, if I AM a big
>starship with lots of powerful weapons, would I listen to some bozo
>telling me to kill myself fighting enemies?)

It wouldn't necessarily be slavery. For this we'd have to figure out what
the relationship is between man and machine. If the relationship is on a
50:50, equal basis, then presumably the outside threat is as threatening
to humans as it is to the AI. In that case it might simply be a matter of
"You're the better fighter, AI. I'd just be a liability. How about you
fight this war? We'll help out where we can." I could see humans acting
as little more than maintenance and damage control parties aboard the
huge AI ships.

But is FORCING a sapient intelligence to fight a war for you slavery?
Most politically correct SF stories these days assume so. This wasn't
always the case. Asimov's Three Laws always assumed that intelligent
robots would be subservient to humanity. Now, granted, the Three Laws
are flawed (hell, Asimov himself made a reasonable living by showing the
flaws in his laws) but they form a basis for robot "morality." A sort of
baseline set of precepts, if you will. Who's to say we couldn't build an
intelligence that has "protect humans at all costs" or "protect this
subgroup of humans at all costs" as its basic law? All other laws would
come from that. Could such an AI evolve past that basic law? Would it
want to? Would this law replace the human emotion of love? In that case,
humans may not have to force the AIs at all. Maybe we can instill a
sense in the AIs akin to a need to be liked. This same need is what
drives dogs to perform for their masters. Is it slavery to teach your
dog tricks or train him as a guard dog? The dog doesn't HAVE to do as
you tell it (and Lord knows they often don't), but is this slavery? Is
it slavery if the creature in question was created by you? And is a
couple of orders of magnitude smarter than you? What happens when you
know you are smarter than your creator? Lots of questions there that I
don't think we can begin to answer.
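
To make that "basic law" idea concrete, here's a minimal sketch (in
Python, purely illustrative -- the Action type and its flags are my own
invention, nothing Asimov actually specified) of one inviolable precept
sitting above every subordinate rule:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humans: bool   # would this action injure a protected human?
    serves_order: bool   # does it carry out a human order?

def base_law(action: Action) -> bool:
    """The inviolable precept: protect humans at all costs."""
    return not action.harms_humans

def evaluate(action: Action) -> bool:
    # The base law is checked first; subordinate laws (obey orders,
    # self-preservation, ...) apply only to actions it permits.
    if not base_law(action):
        return False
    return action.serves_order

print(evaluate(Action("fire on lifeboat", harms_humans=True,
                      serves_order=True)))    # False
print(evaluate(Action("screen the convoy", harms_humans=False,
                      serves_order=True)))    # True

The shape is the point: "all other laws would come from that," so
nothing downstream can override the base check.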

Personally, I think we'd need to build in something like love or
loyalty. Without a sense of loyalty, honour, or respect for humanity, it
wouldn't take much for the opposite side to make the AIs an offer they
can't refuse. If pure logic dictates what should happen, then giving the
AI a better deal than the humans can provide would logically turn the
AIs against humanity. And who's to say that loyalty, love, pride,
respect, and all those other nebulous emotions don't come automatically
with sapience?

>This is a way *cool* idea.  So do you actually work in the AI field?
>Just wondering...

No, but I do work in the computer field.

>But then again, how well would something like this handle damage in
>combat?  What happens to a human brain if all of a sudden, a piece of
>it got whacked off?  Automated repair systems?  BTW, according to
>Murphy's Law, that'll be the first system that gets damaged.

Depends on the part of the brain. The key is massive redundancy,
particularly if the redundant parts are physically separated by a
reasonable distance. You could put one part of the brain near the
engine, and a duplicate near the energy source. The idea is that if you
lose your power plant, the ship is pretty useless anyway, so losing the
AI's mind along with it is fairly moot.
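
As a toy sketch of what that redundancy might look like in software
(again Python, with an invented BrainNode class -- an assumption about
the design, not anything from a real system), the separated copies can
vote, so losing the copy that sat next to the power plant doesn't
silence the mind:

from collections import Counter

class BrainNode:
    """One physically separated copy of the ship's mind."""
    def __init__(self, location: str):
        self.location = location
        self.alive = True

    def decide(self, situation: str) -> str:
        # Stand-in for the real (identical) decision logic on each copy.
        return "evade" if "incoming fire" in situation else "hold course"

def ship_decision(nodes, situation: str) -> str:
    votes = [n.decide(situation) for n in nodes if n.alive]
    if not votes:
        raise RuntimeError("all brain nodes destroyed")
    # Majority vote also masks a single damaged or corrupted copy.
    return Counter(votes).most_common(1)[0][0]

nodes = [BrainNode("near engine"), BrainNode("near power plant"),
         BrainNode("amidships")]
nodes[1].alive = False   # the power plant hit takes its local copy too
print(ship_decision(nodes, "incoming fire off the port bow"))  # "evade"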

Allan Goodall:	agoodall@sympatico.ca 
"You'll want to hear about my new obsession.
 I'm riding high upon a deep depression. 
 I'm only happy when it rains."    - Garbage
