Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

From: Joachim Heck - SunSoft <jheck@E...>
Date: Thu, 17 Jul 1997 12:22:05 -0400
Subject: Re: AIs are not human! (was Re: AI in FT (was Re: Be gentle...))

Mikko Kurki-Suonio writes:
@:) On Thu, 17 Jul 1997, Joachim Heck - SunSoft wrote:

@:) > [ AIs do what we tell them to ]

@:) I think the big question here is CAN we do that with the
@:) complexity of programming required for sentience? IMHO, it is
@:) entirely possible that AI might be the result of, say, evolving
@:) genetic programming. The resultant system is quite likely
@:) sufficiently complex that all possible input combinations could
@:) not be tested. In short, it may well be a process that works but
@:) is not fully understood.

  True enough.  But before I hand a gun to my computer, I'm going to
try real hard to be sure it won't shoot me.  I don't have to
understand everything about what it will do, but some things are
critical.

@:) Let me compare to raising human children. We can't program
@:) them.

  And we CAN program a machine, even one that's been evolved rather
than designed in the traditional manner.  Moreover, we may well have
fewer moral compunctions about doing it.  Giving six-year-olds frontal
lobotomies is generally considered to be in poor taste, but
reformatting your hard drive is no big deal.

@:) Now, producing AIs certainly has advantages. It's probably faster
@:) (if not cheaper), and you can conduct very stringent testing and
@:) weed out the failures without anyone complaining of cruelty.

  Right.

@:) So, if we're given a choice between not producing AIs and
@:) producing them but not fully understanding the process -- which do
@:) you think will be chosen? Especially given that the discoverer
@:) will most likely be a curious scientist?

  I have to agree with you here that we'll probably get dangerous AIs
before we get none at all.  With luck, we will learn how they think
just as fast as we learn how to make them think.  But if that's not
the case then yes, some guy will build a better mousetrap and it'll go
off and start killing people.  The way I see it, though, either we'll
learn and institute some draconian laws to deal with these issues, or
the AIs will wipe us out.  So the only interesting future is the one
in which we build AIs that behave.

-joachim