
Re: Some FT background stuff (guidelines for writers)

From: Alexander Williams <thantos@d...>
Date: Thu, 12 Feb 1998 20:12:29 -0500 (EST)
Subject: Re: Some FT background stuff (guidelines for writers)

On Thu, 12 Feb 1998, Thomas Barclay wrote:

> Hmm. I'm in favor of your POV, but it may be the human is a tad more 
> complex than the model suggested. For us to assume we know what the 
> brain is or what a human is is (as of yet) inherently questionable. 

You're making an appeal to a line of reasoning that is inherently
self-defeating.  Saying 'I don't think we can know what this is' says
nothing about our ability to build something that does a function better.  We
still don't know everything about the wing-operation of birds and yet I
can hop a plane to Colorado Springs in an hour and be there soon after,
faster than Canadian Geese can wing their way.

> But in the long run machines COULD be made smarter than man. People 
> tend to be making machines like SMARTER men. What isn't being asked 
> enough is WHY? Or whether it is a good idea or a desirable thing? Or 

See my previous post; I don't advocate making a computing center that's
a 'better man,' but a 'better pilot,' which is a far more specialized goal.
I'll be happy if I can write an AI which uses laser-beaconing to
communicate short-range with its flock-mates, who swarm upon the enemy
and use utterly alien tactics to destroy them.
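As a toy sketch of that flock idea (my own illustration, not from the thread): agents that steer only by flock-mates within a short beacon range, plus a shared target. The `BEACON_RANGE` constant and the 2D playfield are assumptions for the example.

```python
import math

# Hypothetical short-range "laser beacon" radius -- an assumption for
# this sketch, not a value from the original post.
BEACON_RANGE = 5.0

def step(positions, target, speed=1.0):
    """Advance each drone one tick: steer toward a blend of the local
    centroid of in-range flock-mates and the shared target."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        # Only flock-mates inside beacon range are "heard".
        neighbours = [(nx, ny) for j, (nx, ny) in enumerate(positions)
                      if j != i and math.hypot(nx - x, ny - y) <= BEACON_RANGE]
        # Aim point: average of audible neighbours and the target,
        # falling back to the target alone when isolated.
        cx, cy = target
        if neighbours:
            cx = (sum(n[0] for n in neighbours) + target[0]) / (len(neighbours) + 1)
            cy = (sum(n[1] for n in neighbours) + target[1]) / (len(neighbours) + 1)
        d = math.hypot(cx - x, cy - y)
        if d > 1e-9:
            move = min(speed, d)  # don't overshoot the aim point
            x += move * (cx - x) / d
            y += move * (cy - y) / d
        new_positions.append((x, y))
    return new_positions
```

Iterating `step` converges the swarm on the target with no global coordination, only short-range mutual awareness.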

> if we make self-aware machines, other than to say we've done it, 

There is a line of philosophical thought (and one I largely share) that
suggests that self-awareness is a dead-end evolutionary track, that it is
not a sufficiently powerful tool, in and of itself, to continue
indefinitely in the human genome save as a vestigial reminder of what we
once were (are, temporally).  

Why build a self-aware fighter pilot who can say 'no' when I say 'blow
up that hospital'?  It's a straw-man argument, since self-awareness is /not/
a necessity for an entity to be better at a task than a human.

> the smarter you make a machine, the closer it will be to going "Who 
> the heck are you to be giving me orders?" or "Why should I do this 
> for you who just created me to die or to do labour for you?". 

Depends on how that machine-mind's constructed.  Consider that it's
perfectly possible to brainwash a human into /not/ asking the above
questions.  How much easier for a mind that never had an opportunity to
ask anything other?

> And yet a number of large scale computing problems (Deep Blue, some 
> math proofs) have recently taken advantage of massively parallel 
> supercomputers to solve by exhaustive cases things that could not be 
> done any other way or to beat a human opponent. Don't rule out brute 
> force if you have the compute cycles. The phrase KISS still has 
> meaning, even in 2300.

If you have the compute cycles, being smart /and/ fast beats just being
smart.	:)
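To illustrate the brute-force point (my own example, not from the post): with enough compute cycles, exhaustive enumeration settles questions that resist cleverness. Counting N-queens solutions by testing every permutation of column placements is the classic small-scale case.

```python
from itertools import permutations

def count_queens(n):
    """Count n-queens solutions by exhaustively checking every
    permutation of column placements (one queen per row/column)."""
    count = 0
    for cols in permutations(range(n)):
        # Rows and columns are distinct by construction; a placement is
        # valid iff all rising and falling diagonals are distinct too.
        if len({r + c for r, c in enumerate(cols)}) == n and \
           len({r - c for r, c in enumerate(cols)}) == n:
            count += 1
    return count
```

Pure brute force: no pruning, no heuristics, just n! cases checked, which is exactly the KISS trade the quoted post describes.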

[  Alexander Williams {}  ]
[ Alexandrvs Vrai,  Prefect 8,000,000th Experimental Strike Legion ]
[	     BELLATORES INQVIETI --- Restless Warriors		   ]
"Here at Ortillery Command we have at our disposal hundred megawatt
    laser beams, mach 20 titanium rods and guided thermonuclear
 bombs. Some people say we think that we're God. We're not God. We
   just borrowed his 'SMITE' button for our fire control system."
