Re: Some FT background stuff (guidelines for writers)
From: Thomas Barclay <Thomas.Barclay@s...>
Date: Thu, 12 Feb 1998 15:10:38 -0500
Subject: Re: Some FT background stuff (guidelines for writers)
> A human is just a really big parallel neural network, after all. There's
> no reason to believe that there's some mystical falderall which will
> forever separate the performance of humans and machines, /especially/ in
> high-stress, limited-field areas like combat via machines themselves,
> which depend heavily on computational power in the first place.
Hmm. I'm in favor of your POV, but it may be that the human is a tad more
complex than the model suggested. For us to assume we know what the
brain is, or what a human is, is (as of yet) inherently questionable.
But in the long run machines COULD be made smarter than man. People
tend to be making machines that are like SMARTER men. What isn't being
asked enough is WHY? Or whether it is a good idea or a desirable thing?
Or, if we make self-aware machines (other than just to say we've done
it), don't we have to treat them as sentients, give them rights, etc.,
and thus remove their utility as slaves? I don't want to see us build
robots and AI systems to REPLACE men (or more importantly, women!) or
to make BETTER-than-human creatures (although some seem set on this).

It would seem more to the point to make smart windows that won't slam
on kids' heads, smart stoves that cook you dinner but don't burn
stuff, medical expert systems which help diagnose and treat people
so they live a long, healthy life, etc., which collectively improve our
quality of life. If we're stuck on the morality of cloning and
regeneration, we're equally far behind on the ethics and morality of
AI and self-aware machines. (Yes, this is tangential, but relevant if
you are discussing AI in the future.... maybe universal sentient
rights make it immoral to involuntarily involve another intelligence
in conflict without its permission, hence the only AI fighters could
be voluntary ones... rather than just ones you built....) Remember,
the smarter you make a machine, the closer it will be to going "Who
the heck are you to be giving me orders?" or "Why should I do this
for you, who just created me to die or to do labour for you?"
> The AI community hasn't done simple brute-force computation for over a
> decade now. Wake up and smell the 90's. :)
And yet a number of large-scale computing problems (Deep Blue, some
math proofs) have recently taken advantage of massively parallel
supercomputers to solve, by exhaustive enumeration of cases, problems
that could not be cracked any other way, or to beat a human opponent.
Don't rule out brute force if you have the compute cycles. The phrase
KISS still has meaning, even in 2300.
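
Just to make the brute-force point concrete: this is my own toy sketch,
not anything Deep Blue actually ran, and the Python and the names in it
(best_outcome, the take-away game) are purely my choice. "Solving by
exhaustive cases" for a game just means walking the entire game tree;
Deep Blue did the same basic thing at enormous scale, with pruning and
custom hardware on top.

    # Exhaustive minimax over a toy take-away game: each turn you take
    # 1-3 stones, and whoever takes the last stone wins. The search
    # simply enumerates every line of play -- pure brute force.
    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_outcome(stones):
        """+1 if the player to move wins with perfect play, -1 otherwise."""
        if stones == 0:
            # No stones left: the previous player took the last one and won.
            return -1
        # Try every legal move and keep the one worst for the opponent.
        return max(-best_outcome(stones - take)
                   for take in (1, 2, 3) if take <= stones)

    if __name__ == "__main__":
        for n in range(1, 13):
            print(n, "stones:", "win" if best_outcome(n) > 0 else "loss")

(Run it and every multiple of four comes out a loss for the player to
move. The point is only that enough compute cycles let you enumerate
everything rather than be clever.)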
Tom.