From: Joachim Heck - SunSoft <jheck@E...>
Date: Wed, 16 Jul 1997 10:02:40 -0400
Subject: Re: AI in FT (was Re: Be gentle...)
Tom McCarthy writes:
@:) Is that a danger with AIs? Will we interact with them so little
@:) that we fail to teach them something as fundamental as capturing
@:) the objective whole instead of just capturing the objective?
@:) Mightn't they leapfrog past us to do efficient things we wouldn't
@:) do, like torpedoing hospital ships or destroying enemy
@:) civilian/manufacturing centres? We might give them the firepower
@:) to raze huge and rare tracts of habitable land.
@:)
@:) Is this a potential problem?
Yes, although the smarter your AI gets, the better you can control
this kind of behavior. As an example, however, I will mention a "dumb"
independent-acting weapon that causes some of the problems you
describe, namely the land mine. During a war, land mines mostly do
their job of denying territory to the enemy. Once the war is over,
however, the mines remain and now they attack civilians and livestock,
the very things they were designed to protect. Particularly unfunny
is a type of Soviet mine that was designed to be dropped from an
airplane. The mine has "wings" and looks a little like a butterfly -
the wings allow it to hit the ground gently enough not to detonate.
They also make the device an attractive plaything for children, who
then get their hands blown off.
Recent legislative efforts in this country and in Europe have been
aimed at reducing the danger from weapons of this type, and one
suggested method has been to make them smarter. For example, a mine
could disarm itself after a certain period of time. The problem then
is that if the smarts don't do their job correctly, you will still
have a dangerous weapon. In futuristic terms, you would face similar
problems: an AI that gets damaged and becomes a threat to its
designers or to bystanders.
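To make that failure mode concrete, here is a minimal sketch in
Python (the class and its behavior are made up for illustration, not
a model of any real fuse). It contrasts the naive "fail-deadly"
design, where the mine stays armed unless the disarm timer
successfully fires, with a "fail-safe" design, where the fuse only
reports armed while it can positively confirm it is inside its
service window, so any fault defaults to inert:

import time

class TimedMine:
    """Hypothetical self-disarming mine, for illustration only.

    fail_safe=False: stays armed unless the disarm timer successfully
    fires, so a broken clock leaves it dangerous indefinitely.
    fail_safe=True: only reports armed while it can positively confirm
    its service window, so any fault defaults to inert.
    """

    def __init__(self, lifetime_s, fail_safe, clock=time.monotonic):
        self.deployed_at = clock()
        self.lifetime_s = lifetime_s
        self.fail_safe = fail_safe
        self.clock = clock

    def is_armed(self):
        try:
            age = self.clock() - self.deployed_at
            return age < self.lifetime_s
        except Exception:
            # The "smarts" are damaged; the two designs diverge here.
            return not self.fail_safe

class FieldDamagedClock:
    """A clock that works at deployment, then breaks in the field."""

    def __init__(self):
        self.calls = 0

    def __call__(self):
        self.calls += 1
        if self.calls > 1:
            raise RuntimeError("clock damaged in the field")
        return 0.0

deadly = TimedMine(lifetime_s=60.0, fail_safe=False,
                   clock=FieldDamagedClock())
safe = TimedMine(lifetime_s=60.0, fail_safe=True,
                 clock=FieldDamagedClock())
print(deadly.is_armed())  # True  -> still dangerous after the fault
print(safe.is_armed())    # False -> defaults to inert

The point is the same one as above: putting smarts into the weapon
only helps if the design fails toward safety when the smarts break.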
-joachim