Re: [FT] Unpredictable AI
From: Richard and Emily Bell <rlbell@s...>
Date: Thu, 21 Jun 2001 19:51:09 -0400
Subject: Re: [FT] Unpredictable AI
Allan Goodall wrote:
> On Wed, 20 Jun 2001 18:06:12 -0400, Richard and Emily Bell
> <rlbell@sympatico.ca> wrote:
>
> >Computers are really, really bad at recognizing things quickly, and
> >it will take an improvement of several orders of magnitude before
> >they are as good as humans.
>
> I think you're using conflicting arguments to dismantle the AI
> argument. At one point you talk about AIs having the same "learning
> curve" problem as humans because they would have to be built using
> "genetic algorithms". But then you say that AIs can't recognize
> things as well as humans. In other words, they will be designed so
> much like humans that they will have the same liabilities, but not
> the same benefits? I don't think that's likely.
The wetware computer is a massively parallel structure. Nerve impulses
travel at a mere 90 meters per second, compared to the computer's 0.6c,
yet we still do things a lot faster. The genetic algorithm for learning
to recognize objects has been running for several hundred million
years. It also runs on optimized hardware. Unfortunately, it is not
obvious how the hardware is optimized, so we are limited to
general-purpose computers attempting to emulate dedicated hardware.
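
A back-of-the-envelope sketch of that gap (the neuron count below is a
rough order-of-magnitude figure, not a measurement):

    nerve_speed = 90.0           # m/s, as above
    chip_speed = 0.6 * 3.0e8     # m/s, 0.6c as above
    neurons = 1e11               # very rough human brain estimate

    # Signals in silicon win by a factor of about two million...
    print(chip_speed / nerve_speed)    # ~2.0e6
    # ...but roughly 1e11 units firing concurrently more than cover
    # that factor on pattern-recognition workloads.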
>
>
> Computers are incredibly good at recognizing things quickly. That is,
> recognizing SPECIFIC things. Try searching through a list of 100,000
> 10-digit numbers for a specific string of digits. A human will
> probably miss it, a computer will find it quickly.
If you can categorise every likely situation in fighter operations into
100,000 10-digit numbers, I will concede this point; however, AI's
major stumbling block is enumerating the possibilities, and NP-complete
problems cannot be solved in this fashion. Also, one 10-digit number is
impossible to confuse with another 10-digit number that is not equal to
it. Also, if the list is sorted, humans are unlikely to miss the
number. Finally, as the database gets much larger than a mere 100,000
elements (say, a phone directory for NYC), the human is not that much
slower than the computer, and the computer is much slower than the
human if it has to physically leaf through the book. Combat situations
are not readily available in machine-readable formats.
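
To put the sorted-list point in concrete terms, here is a minimal
sketch (the data is randomly generated purely for illustration):

    import bisect
    import random

    # 100,000 distinct 10-digit numbers, as in your example.
    numbers = random.sample(range(10**9, 10**10), 100_000)
    target = numbers[42_123]

    # Unsorted list: nothing to do but scan every element -- O(n).
    found_by_scan = any(n == target for n in numbers)

    # Sorted list: binary search halves the range each step -- O(log n),
    # about 17 comparisons for 100,000 elements. This is roughly what a
    # human does with a sorted phone directory.
    numbers.sort()
    i = bisect.bisect_left(numbers, target)
    found_by_search = (i < len(numbers) and numbers[i] == target)

    print(found_by_scan, found_by_search)    # True True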
>
>
> Unfortunately, computers (currently) have no sense of context. Here's
> a good example. Question 1: What did you have for lunch last Tuesday?
> Question 2: Have you ever wrestled an alligator? I'm guessing that
> you answered question 2 MUCH faster than question 1. A computer, on
> the other hand, will accurately answer question 1 rather quickly, but
> would have to go through its database of "experiences" to answer
> question 2.
Context is everything: successful pilots have mastered two very
important skills. The first is immediately recognizing all of the
important things, and the second is ignoring everything else. What is
important and what is unimportant vary with the context of the
engagement. I actually answered both questions quickly (muffin; no),
but I wasted much of my youth optimizing the wetware for fast
information retrieval.
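
Your two questions correspond to two very different access patterns. A
minimal sketch, with an invented event log (the fields and records are
made up for illustration):

    # Hypothetical event log: one record per remembered event.
    events = [
        {"date": "2001-06-19", "kind": "meal", "detail": "muffin"},
        {"date": "2001-06-20", "kind": "jump", "detail": "routine skydive"},
        # ... millions more records ...
    ]

    # Question 1 (lunch last Tuesday): an index keyed on date reduces
    # this to a single lookup -- O(1).
    meals_by_date = {e["date"]: e for e in events if e["kind"] == "meal"}
    lunch = meals_by_date.get("2001-06-19")

    # Question 2 (ever wrestled an alligator?): there is no index over
    # things that never happened, so the machine must scan every
    # record -- O(n).
    wrestled = any("alligator" in e["detail"] for e in events)

    print(lunch, wrestled)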
>
>
> Human memory is deeply flawed. The human brain has evolved, and still
> operates on a "fight or flight" mechanism. A computer will not panic
> if swamped by an overwhelming number of enemies. A computer will not
> panic and rout. In my original comments, I mentioned that a
> computer-controlled fighter would be less massive and have faster
> reaction times. This will be enough that human-run fighter ships just
> won't be realistic in the far future... for combat roles.
People do not actually panic. Fight-or-flight impulses only cause
problems when neither fighting nor fleeing is a practical response to
the situation, and all reasonable options in a combat situation fall
into one of those two categories. What people refer to as panic is
actually the failure of training. Adrenalin causes hyperfocus: the
things that you are good at, you become very good at, and things that
you do not do well become impossible. A licensed skydiver was killed
when he mistakenly assumed that jumping with an off-handed harness
would pose no difficulty. He went to pull the rip cord, but his hand
did not find it. Because he had never reached for his rip cord with his
off hand, he kept trying to pull the non-existent rip cord all the way
down to his death. It never occurred to him to reach for the other
side.
>
>
> On the other hand, I can see why you need a sapient lifeform in
> control of a craft when a situation is complex and likely to result
> in unique problems all the time. A _true_ artificial intelligence may
> make this possible, but I could then see such a machine having a
> survival instinct that would make it essentially useless in combat.
> It, quite simply, wouldn't want to die. I've actually developed a
> background universe for this, but I haven't done anything with it as
> yet. I had intended it for DS2 and FT, but I still have to map it
> out...