Re: AI in FT (was Re: Be gentle...)
From: Samuel Penn <sam@b...>
Date: Wed, 16 Jul 1997 16:06:06 -0400
Subject: Re: AI in FT (was Re: Be gentle...)
In message <Pine.HPP.3.92.970713142658.3824B-100000@hp10.ee.ualberta.ca>
Chen-Song Qin <cqin@hp10.ee.ualberta.ca> wrote:
> On Sun, 13 Jul 1997, Allan Goodall wrote:
> > My belief is that the human brain can eventually be duplicated (and even
> > surpassed) through electronic engineering or biomechanical engineering. At
> > that point, we'll have an artificially constructed intelligence that can
> > think, learn, and have an imagination.
>
> Okay this can be true. But I'm just wondering why you'd send this kind of
> intelligent machine into battle and have them fight for you. That'll
> amount to slavery of sentient beings.
Why send another human into battle and have them fight for you?
That'll amount to slavery of sentient beings, too.
> > However, it should be possible to build a massively parallel artificial mind
> > that can behave logically, and intuitively, and FASTER than a human.
>
> This is a way *cool* idea. So do you actually work in the AI field? Just
> wondering... But then again, how well would something like this handle
> damage in combat? What happens to a human brain if all of a sudden, a
> piece of it got wacked off?
A typical computer system will work perfectly until one or two
pieces get damaged, at which point it stops working entirely.
A human brain, by contrast, gradually decreases in effectiveness
as bits of it get damaged, making it far more fault tolerant.
Neural networks exhibit the same behaviour as human minds -
you can chop great chunks out of them, and they'll continue
to work, albeit at reduced effectiveness.
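The contrast can be sketched numerically. This is just a toy, not a real
neural network - the unit count, weights, and damage fraction are all
made up for illustration - but it shows the difference between a
distributed representation degrading gracefully and a single-path
computation failing outright:

```python
import random

random.seed(0)

# Distributed representation: the value 1.0 is spread across 100 units,
# each weight carrying 1/100 of the signal.
weights = [0.01] * 100

def output(w):
    return sum(w)

print(round(output(weights), 3))   # 1.0  - all units intact

# "Chop a great chunk out": knock out 30 of the 100 units at random.
damaged = list(weights)
for i in random.sample(range(100), 30):
    damaged[i] = 0.0
print(round(output(damaged), 3))   # 0.7  - degraded, but still working

# Conventional single-path computation: one component carries everything.
single = [1.0]
single[0] = 0.0                    # a single fault...
print(round(output(single), 3))    # 0.0  - total failure
```

Losing 30% of the units costs the distributed version 30% of its signal;
losing one component costs the single-path version everything.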
--
Be seeing you,
Sam.