From: Jerry 'Ghoti' Han <jhan@i...>
Date: Thu, 12 Feb 1998 14:40:23 -0500
Subject: Re: Some FT background stuff (guidelines for writers)
Oooo, a debate! (8-)
Alexander Williams wrote:
> On Thu, 12 Feb 1998, Jerry Han wrote:
> > Now, what happens to computer pilots when their radar is jammed? Or
> > fails? Or returns ghost signals?
> Well, if it's one of the reactive/subsumptive networks I spend a lot of
> time hacking on and developing at work and for my own amusement, it reacts
> a whole /heck/ of a lot faster than a human operator to the change in
> input quality (after all, you'd be an idiot to not put in a detection
> module that does nothing but recognize when a sensor input is of
> diminished quality, which would then remove some of the activation
> that modules depending on that data receive ...) and proceeds to 'open
> keg of whup-ass' on its opponents.
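To make that 'diminished quality' gating concrete, here's a toy Python sketch of my own (not Alexander's actual system; all the names — SensorReading, Behaviour, effective_activation — are invented for illustration). The idea is just that a behaviour's activation gets scaled down when the sensor it depends on degrades:

```python
# Hypothetical sketch of sensor-quality gating in a reactive network.
# A detection module rates each sensor input 0.0 (jammed/ghosting) to
# 1.0 (clean), and behaviours depending on that input lose activation.

from dataclasses import dataclass

@dataclass
class SensorReading:
    value: float
    quality: float  # 0.0 = jammed or ghosting, 1.0 = clean return

@dataclass
class Behaviour:
    name: str
    base_activation: float

def effective_activation(b: Behaviour, reading: SensorReading) -> float:
    """Scale a behaviour's drive by the quality of the sensor it depends on."""
    return b.base_activation * reading.quality

evade = Behaviour("evade", base_activation=0.9)
clean = SensorReading(value=42.0, quality=1.0)
jammed = SensorReading(value=42.0, quality=0.2)

print(effective_activation(evade, clean))   # full drive on clean radar
print(effective_activation(evade, jammed))  # mostly suppressed when jammed
```

The point is the reaction is just arithmetic — no deliberation — which is why such a network can respond to jamming faster than a human operator.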
I guess my point above was that computer AI (granted, this can be
'done away with' in the Universe, as much as anything else) will probably
never achieve the same amount of flexibility as a human mind.
(This is not the same as 'intuition' or 'creativity', but closely related.)
From what work I've done in AIs (granted, more theoretical than
practical), they're great for scenarios that match their parameters.
Once you go out of parameters, though, they go boom. Even 'reactive'
networks (what's your design model, if you don't mind me asking (8-) )
have points where the system fails, due to stress, programmer error,
invalid inputs, what have you.
Don't get me wrong; I'm a big fan of automation. Personally, I
prefer automated combat systems when possible. However, I also
believe in the 'friction' of war, which means you stick in
redundancies wherever possible. That means you're probably always
going to have human pilots, until you can get AIs that think like
humans. At that point, well... Dahak (from Weber's 'Fifth Imperium trilogy')
can command my fighters any day! (8-)
> A human is just a really big parallel neural network, after all.
> There's no reason to believe that there's some mystical falderall which will
> forever separate the performance of humans and machines, /especially/ in
> high-stress, limited-field areas like combat, where the machines themselves
> depend heavily on computational power in the first place.
Agreed. But, given the high hopes AI had in the 50s, and the rather
bitter reality they face now, it's going to be a while before
science fact matches science fiction. (Of course, since it appears
current research in AI is proceeding along specialized as opposed to
generalized lines, I may have just put egg on my face, especially
when it comes to combat AI, something that has been an active field
of research for the last fifteen years or so.)
> > whatever the hell we want (8-) ), but brute force computations aren't
> > the answer for a combat AI. Perhaps some kind of optimized neural net?
> The AI community hasn't done simple brute-force computation for over a
> decade now. Wake up and smell the 90's. :)
Oh, hell, I know, I was just answering somebody else's comment.
I haven't kept current, but, at least at the time I did my education/
research, Neural Nets seemed to be the most promising, if you could
get around their long training time. Of course, since this was
about three or four years ago, I'm already hopelessly out of date.
> "Here at Ortillery Command we have at our disposal hundred megawatt
> laser beams, mach 20 titanium rods and guided thermonuclear
> bombs. Some people say we think that we're God. We're not God. We
> just borrowed his 'SMITE' button for our fire control system."
I like. (8-)
*** Jerry Han - firstname.lastname@example.org - http://www.idigital.net/jhan ***
"And we will raise our hands, and we will touch the sky;
Together we will dance in robes of gold;
And we will leave the world remembering when we were kings..."
When We Were Kings - Brian McKnight/Diana King - TBFTGOGGI