
Re: AI in FT (was Re: Be gentle...)

From: Allan Goodall <agoodall@s...>
Date: Thu, 17 Jul 1997 15:34:37 -0400
Subject: Re: AI in FT (was Re: Be gentle...)

At 08:09 PM 7/16/97 +0100, Sam wrote:

>> 5) Sa'vasku not using artificial intelligence should be obvious.
>
>Not at all. The problem is that as soon as we have AI, the term
>goes out of date. Who's to say that one mind is 'artificial', and
>another isn't, if they both have the same capabilities? If human
>minds can be uploaded into computers, and 'AI's' can be implanted
>into a biological brain, then there really is no difference, and
>labelling one as 'artificial' (which suggests inferiority) is no
>different from any other form of racial discrimination.

I agree with you. I was unclear in my meaning. I meant that "Sa'vasku
not caring to differentiate between 'natural' and 'artificial'
intelligence should be obvious." A Sa'vasku ship is a biological
entity. If it's also intelligent, they've essentially solved the AI
question from the opposite end of the problem. We're looking to build
a machine that mimics human intelligence. They are building biological
entities that replace mechanical machines.

>My argument would be that if these so-called AIs don't have
>emotions, then they haven't achieved human level sentience.
>A sufficiently complex mind is going to have ideas about self
>awareness and self preservation, which are emotions of a sort.

I disagree. First off, I prefer the term "sapient" as opposed to
"sentient." I'll have to wait until I get home to check the
definitions, but if I remember correctly, an earthworm can be said to
be "sentient" while only higher life forms are sapient.

Second, emotions are (to my mind; I'm not a psychologist or
anthropologist) the culmination of aeons of evolution. Dogs, for
instance, are emotional. They get angry, they get frightened, they
receive pleasure. Each of these reactions has come by way of
evolution. If you were to create an artificial "thinking machine," it
wouldn't NECESSARILY have to go through an evolutionary process. You
could start it off by setting "self preservation" as a basic condition
of its programming, and tell it to protect itself. It could do so in a
cold, rational manner. It doesn't have to get angry, for instance,
since it has no need to pump adrenalin into its brain chemistry (I'm
assuming an electronic AI here), nor does it have to go through the
ritual of proving who is "alpha male." Our emotions are based on
biological and social evolution. The strongest and healthiest get the
strongest and healthiest mates, and get to mate the most often. The
strongest can fend off others trying to take away their mate.
Affection for their offspring "forces" the parents to protect their
children until puberty (and if the parents don't pay attention, that
genetic line soon hits a dead end). None of that would have to filter
through into an AI since it doesn't have the biological imperative of
dual-gender reproduction.

Now, I could see a situation where the AI might develop a whole new
range of emotions. Does an AI receive "pleasure" as part of its damage
control and resistance systems? Perhaps it is a biomechanical machine
that actually DOES need extra chemicals to run at peak performance.
These chemicals could work like endorphins that result in "runner's
high." I could see AIs developing complex psychological problems, like
chemical dependency and a neurotic fear of being alone. I don't see
them as having the same emotions as humans, though. They didn't have
to go through what we did in order to achieve sapience.

>> They KNOW they don't have a soul and that for them there is
>> nothing beyond this "life," so they damned well won't risk
>> themselves.
>
>I definitely have to disagree here. I KNOW that I don't have a soul.
>I also KNOW that you, and everyone else, don't have souls. I also
>know people that KNOW that everyone has a soul. Does that mean that
>they're more suicidally inclined than me? I don't think so...

A strong belief in the afterlife has been a basic foundation of armed
forces for centuries. Essentially, a soldier who believes there is "a
better place" beyond this world is more likely to volunteer for a war
than someone who believes that "this is it." The difference, though,
isn't highly noticeable. There are a lot of atheists in the armed
forces. Again, there's a biological imperative for protecting the
species. This is where sacrifice and altruism come in. There are
biological commands deep in the human psyche that put the welfare of
society above the welfare of the individual. The most devout atheist
who greatly fears death will still willingly die to protect their
child without a moment's hesitation, unless that person is either a
psychopath or a sociopath.

I don't believe that this will show itself in AIs unless specifically
programmed. An AI is a species of one. In fact, it may appear to be
incredibly self-centred and monomaniacal. 

>Also, why should computers necessarily be atheists? The atheist's
>answer is that they're more intelligent than we are and are less
>prone to silly superstitious beliefs...
>
>The idea of machine religion though is one I find intriguing,
>and definitely worth exploring.

That is a good point, but I think once again evolution rears its ugly
head. Religions first started out as fertility cults in very early
Homo sapiens. The early humans were not particularly bright and had
trouble remembering what they did a month ago, let alone 9 months ago.
Suddenly a woman becomes pregnant, apparently through divine
intervention. This led to a belief in fertility deities (and,
ironically, to the women being the most important members of the
community, as it was through them that children appeared). Religions
later developed into a way of describing the complex world around us.
A tornado rips through a village, sparing one family and killing
another. Why? People were deterministic and believed there was an
intelligence behind it. Either a god was playing tricks on mortals or
that family did something bad. Later, this view developed to explain
human mortality. The idea that we might just cease to exist at the
point of death was so scary as to be unthinkable, and thus developed
the afterlife concept. (Note: this is not soc.religions so I'm not
going to take this any further. Those of you who believe in miracles
or God manifesting himself amongst early prophets in order to explain
the truth of the universe, please ignore this poor pagan.)

At any rate, I find it unlikely that an AI would develop a belief in a
soul. It KNOWS what happens when it's turned off. In fact, it's
probably been turned off several times in its existence. To the
computer, it would simply be a piece of missing time. However, I
suppose it could develop a sense of religion similar to that found
amongst cosmologists. What was there before the universe began? Why
did the universe begin? What is beyond the universe? What happens
after the universe ends? Big questions, which could form the basis of
an AI religion. I don't think that such a religion would deal a lot
with morality, though, as it would seem to indicate that individuals
are essentially infinitesimally small cogs in the giant wheel of the
cosmos.

>'AI'-controlled drones and missiles would be commonplace I'd have
>thought, and very deadly to grunts.

Yeah, I can see "smart" bombs (such as buzzbombs) and drone tanks
operating in this environment. I can also see genetically linked
biological warfare as an important weapon, which would push more
troops into powered armour (or at least power-assisted and
air-conditioned environment suits).

Allan Goodall:	agoodall@sympatico.ca 
"You'll want to hear about my new obsession.
 I'm riding high upon a deep depression. 
 I'm only happy when it rains."    - Garbage
