
Re: [FT] Unpredictable AI

From: Derk Groeneveld <derk@c...>
Date: Thu, 21 Jun 2001 16:32:26 +0200 (CEST)
Subject: Re: [FT] Unpredictable AI


On Thu, 21 Jun 2001, David Griffin wrote:

> Well, remember a cruise missile is a replacement
> for a Bomber. In the past, with a man in the loop,
> the bomber could be recalled or decide the orders
> had been in error (wow, maybe we shouldn't bomb
> that big white building with the red cross). Now
> with a cruise missile that decision is gone. So
> you're right technically, but in a real sense, the
> cruise missile IS an example of the man cut out of
> the loop (or at least moved back in the loop).

Granted, but there is still a very big difference. The mission for the
cruise missile itself is completely pre-defined (go to a certain location
and nuke it into the stone age/whatever your payload does). The nation's
policy and decision makers decide whether or not that mission is to be
executed in its entirety.

 
> You could say the same thing about launching a 
> AI drone. Ok drone, you say, go to these coordinates
> and shoot down any enemy planes. You made the 
> decision, but now you're out of the loop. The drone
> will be making the decisions from then on. 

Already, a big distinction: 'shoot down any _enemy_ planes'. This is
where the drone/fighter has to make a kind of decision that a cruise
missile never has to. It will have to decide whether a certain contact
gets to live or die, by deciding whether it is an enemy or not.

I'd suggest the vast majority of fighter missions are NOT predetermined
in the way cruise missile launches are (go to point A, destroy TARGET B).
There is almost always a decision-making process (determine whether B
_is_ hostile; shoot down B _if_ hostile; unless a bigger threat _C_ is
present, also _hostile_, in which case take out _C_ BEFORE you take out
_B_, etc.).
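
Purely as illustration (nothing from FT or any real targeting system),
that process boils down to "classify, then prioritise". A minimal Python
sketch, with every name hypothetical:

    # Hypothetical sketch of the drone's engagement logic described above.
    # The hard (and dangerous) part in reality is the 'hostile' classification;
    # here it is simply given as data.
    from dataclasses import dataclass

    @dataclass
    class Contact:
        name: str
        hostile: bool       # the identification decision a cruise missile never makes
        threat_level: int   # higher = bigger threat

    def choose_target(contacts):
        """Return the hostile contact to engage first, or None if none are hostile."""
        hostiles = [c for c in contacts if c.hostile]
        if not hostiles:
            return None
        # A bigger threat C gets taken out BEFORE the original target B.
        return max(hostiles, key=lambda c: c.threat_level)

    if __name__ == "__main__":
        picture = [
            Contact("B", hostile=True, threat_level=2),
            Contact("C", hostile=True, threat_level=5),
            Contact("airliner", hostile=False, threat_level=0),
        ]
        print(choose_target(picture).name)   # -> "C"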

When a cruise missile is launched and it destroys a milk powder factory,
it is the fault of the decision makers/intel people, and action can be
taken to prevent a repeat. If a drone decides a nice fat Airbus is
hostile, it's a different matter entirely. I'd rather trust fallible
humans than a piece of software with those decisions.

Cheers,

   Derk

