
Re: [FT] Unpredictable AI

From: Richard and Emily Bell <rlbell@s...>
Date: Wed, 20 Jun 2001 18:06:12 -0400
Subject: Re: [FT] Unpredictable AI



Binhan Lin wrote:

> <SNIP>
> It is not that the AI is predictable, it is that it is mind-numbingly
> stupid, and the problems faced by pilots maintaining situational
> awareness while dodging fire are not solved merely by being really
> fast.  Unlike playing chess or diagnosing engine problems, most of a
> pilot's skill set consists of psycho-motor skills, which can be
> learned but not taught.  Humans are also much harder to fool than
> computers.
> <SNIP>
>
> On the other hand, people without experience can be easily fooled.
> The common examples are basketball, hockey, and soccer.  In these
> sports, novices often make the mistake of watching the opponent's
> eyes, not their center of mass (hips).  In these cases, a feint by
> glancing sideways or a slight movement of the arms can send an
> opponent off in the wrong direction.  More experienced players are
> not fazed by these maneuvers because they have learned where the
> "true" indicator lies.  But in an evolving environment, your
> experience or instinct may be wrong as things change.  What is a true
> indicator today could be used as a false indicator tomorrow.
>
> Computer intelligences would have a tremendous advantage in that new
> information or techniques would be applied quickly, if not instantly,
> across all the units, a la Bolos (Keith Laumer), making innovations
> in tactics, equipment, etc. much less valuable, maybe even one-shot
> affairs for surprise or advantage.  Humans take quite a while to
> train properly; artificial intelligences could be plug and play.

This is not likely.  AI routines will almost certainly be genetic
algorithms, so to get two identical AIs, they would have to have the
same set of experiences.  Given the amount of information that would
have to be recorded, that becomes prohibitive.  Just cloning a
successful AI has its own problems: the computers that run these
things are so complex that they will be designed to be tolerant enough
of manufacturing defects that most units pass quality control, and no
two are identical.  They are much too expensive to throw away over
defects.
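
To make the genetic-algorithm point concrete, here is a minimal sketch
(in Python; the fitness function and genome are invented stand-ins, and
a real combat AI would be vastly more complicated).  Two runs of the
exact same algorithm, differing only in their stream of random
"experiences", evolve different individuals:

import random

def evolve(seed, generations=50, pop_size=20, genome_len=8):
    """Evolve a population of weight vectors toward a toy target."""
    rng = random.Random(seed)            # the AI's "experiences"
    target = [0.5] * genome_len          # stand-in for good piloting

    def fitness(genome):
        # Higher is better: negative squared distance from the target.
        return -sum((g - t) ** 2 for g, t in zip(genome, target))

    pop = [[rng.random() for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(genome_len)] += rng.gauss(0, 0.1)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Same algorithm, same "hardware", different experiences:
print(evolve(seed=1) == evolve(seed=2))  # False -- no two are identical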

>
>
> On your point of situational awareness, the human mind has difficulty
> processing more than one data stream at a time.  It can be done, but
> it takes practice and focus.  AIs, on the other hand, are not limited
> in this way.  You can have separate modules that watch for lock-ons
> and activate the appropriate counter-measures, and a module that
> watches the range to target, monitors the weapon status, and makes
> sure the ordnance is delivered, all without having to distract the
> higher AI.  Instead of thinking of a single-crew fighter, it would be
> like having 10 highly co-ordinated people working inside a single
> cockpit.
>
> Humans evolved to deal with human-scale events - things that happen
> in the range of seconds or even tenths of a second.  Making decisions
> in the nano-, micro-, or millisecond range is completely beyond our
> abilities.  Biologically, we aren't capable of reacting faster than a
> few hundred milliseconds (e.g. the drop test, where you drop a
> yardstick between someone's fingers; even the fastest reflexes allow
> a drop of several inches), and our thought processes are based on a
> complex set of electrical and chemical impulses, some
> neurotransmitters actually having to travel across a gap between
> neurons.  Although fast, these speeds pale in comparison to pure
> electrical or photonic speeds.
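
The multi-module cockpit described above is easy to sketch.  A
minimal, hypothetical Python example follows; the watcher modules,
thresholds, and simulated sensor readings are all invented for
illustration:

import queue
import random
import threading
import time

# Independent watcher modules run concurrently and escalate to the
# higher-level AI only when a decision is actually needed.
events = queue.Queue()
done = threading.Event()

def lock_on_watcher():
    # Handles threats locally; never distracts the higher AI.
    while not done.is_set():
        if random.random() < 0.01:       # simulated lock-on warning
            pass                         # deploy countermeasures here
        time.sleep(0.001)

def range_watcher():
    # Escalates only when the target comes into range.
    while not done.is_set():
        km = random.uniform(0, 100)      # simulated range to target
        if km < 1.0:
            events.put(f"target in range ({km:.2f} km)")
        time.sleep(0.001)

for worker in (lock_on_watcher, range_watcher):
    threading.Thread(target=worker, daemon=True).start()

# The "higher AI" loop: ten such modules would feel like ten highly
# co-ordinated crew members working inside a single cockpit.
deadline = time.time() + 0.5
while time.time() < deadline:
    try:
        print("higher AI handling:", events.get(timeout=0.1))
    except queue.Empty:
        pass
done.set()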
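
For reference, the drop-test arithmetic (assuming a reaction time of
roughly 200 ms, a commonly cited ballpark):

    d = (1/2) g t^2 = (1/2)(9.8 m/s^2)(0.2 s)^2 ~ 0.20 m ~ 7.7 in

so "a drop of several inches" is about right.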

Then why can't computers play Go very well?  All they have to do is
weigh the options and take the best one.  The problem is that there
are far too many options, and the pruning algorithms have yet to catch
up.
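
"Pruning" here means discarding branches of the game tree that cannot
affect the final choice.  A minimal sketch of alpha-beta pruning over a
toy tree (the tree and leaf values are invented; a real Go program
needs move generation and position evaluation on top of this):

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):   # leaf: static evaluation
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # opponent never allows this line
                break                    # prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# Evaluates to 5 without ever looking at the leaves 9, 0, or -1.
tree = [[[3, 5], [6, 9]], [[1, 2], [0, -1]]]
print(alphabeta(tree, maximizing=True))  # 5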

>
>
> Returning to the thought that humans are much harder to fool than
> computers, I would argue that it is merely a matter of experience and
> knowledge base.  If you show half of a picture that has a trunk in
> it, most people would say it was an elephant.  If you show the same
> picture to someone who has never seen an elephant, what would they
> say?  They would try to relate it to something in their experience.
> If a computer had a photo database of millions of pictures, then
> broke the picture down into shapes and colors, it might also come up
> with "elephant" (it would probably also say tree, hose, or worm).
> The point is that, given a sufficient database and enough computing
> power, the computer can come to the same result as a human.
> Computers are going to get radically better in the future; humans are
> not.

A large database is its own worst enemy when you have to spot things
quickly.  Flying a fighter is a hard real-time problem, so the AI had
better have a good response to unidentified things.  Computers are
really, really bad at recognizing things quickly, and it will take an
improvement of several orders of magnitude before they are as good as
humans.  For short response times, a large group of neural networks
will attack the problem and hopefully provide a correct response.
Unfortunately, such a network recognizes a stimulus as what it most
likely is, not what it really is (though the two will coincide more
often than not), and unfamiliar things will be recognized as familiar
things.  I admit that I have not tried this, but I suspect that a
neural network designed to read Chinese characters will return a
Chinese character when presented with an elephant, though it will
report a low level of confidence, if equipped to do so.  A computer
that scans a database for a Chinese character similar to the input
will recognize an elephant as not being a Chinese character.
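
A minimal sketch of that confidence check (pure illustration: nearest
prototypes stand in for a trained network, and the feature vectors and
threshold are invented):

import math
import random

random.seed(0)
DIM = 16

# "Trained" classes: each known Chinese character is a prototype point
# in feature space.
prototypes = {
    "yi (one)": [random.gauss(0, 1) for _ in range(DIM)],
    "er (two)": [random.gauss(0, 1) for _ in range(DIM)],
    "san (three)": [random.gauss(0, 1) for _ in range(DIM)],
}

def classify(features):
    # The network can only ever answer with its nearest known class...
    label = min(prototypes,
                key=lambda k: math.dist(features, prototypes[k]))
    # ...but a confidence that decays with distance lets it flag inputs
    # unlike anything it knows -- the "if equipped to do so" part.
    confidence = math.exp(-math.dist(features, prototypes[label]))
    return label, confidence

noisy_er = [x + random.gauss(0, 0.05) for x in prototypes["er (two)"]]
elephant = [random.gauss(10, 1) for _ in range(DIM)]  # far from all

for name, x in [("noisy er", noisy_er), ("elephant", elephant)]:
    label, conf = classify(x)
    verdict = "not a character?" if conf < 0.5 else "accept"
    print(f"{name}: nearest {label}, confidence {conf:.3f} -> {verdict}")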

