From: Alexander Williams <thantos@d...>
Date: Thu, 12 Feb 1998 19:55:52 -0500 (EST)
Subject: Re: Some FT background stuff (guidelines for writers)
On Thu, 12 Feb 1998, Jerry 'Ghoti' Han wrote:
> Oooo, a debate! (8-)
Nah, just an exchange of broadsided opinions at long range.
> I guess my point above was that computer AI (granted, this can be
> 'done away with the Universe, as much as anything else), will probably
> never achieve the same amount of flexibility as a human mind.
> (This is not the same as 'intuition' or 'creativity, but closely
> related.)
I just don't see that. In the big picture? Certainly correct. In a
limited domain, in which there are a limited number of responses? (And,
let's be honest, there are only so many things you can do in command of a
fightercraft in 0g.) Certainly incorrect.
This does imply I think fightercraft combat is an essentially 'solvable'
problem, yes. That is to say, I believe that for any set of initial
conditions, a set of actions can lead to consistent victories within
minor variances between initial conditions.
> >From what work I've done in AIs (granted, more theoretical than
> practical), they're great for scenarios that match their parameters.
> Once you go out of parameters, though, they go boom. Even 'reactive'
> networks (what's your design model, if you don't mind me asking (8-) )
> have points where the system fails, due to stress, programmer error,
> invalid inputs, what have you.
Currently, I'm putting together (for fun; I can't tell you about the
at-work stuff, DEC'd have my intestines :) some modular
spreading-activation-based networks with functional sub-modules for a
sort of Crobots-inspired combat/tank game. The idea being that one
implements a set of sensor units (functions which put/remove state info
on an inter-tank blackboard), a set of module/expertise units (functions
which trigger based on activation energy, carry out some function, and
may be activation networks themselves) and a set of goal units (states
which feed energy backwards through the network just as sensed states
send energy forwards).
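To make the shape of that concrete, here's a minimal Python sketch of the
scheme as I've described it: sensors post state to a shared blackboard,
goals feed energy backwards into modules, and the most energetic module
fires. All the names, energy values, and the toy game state are my own
illustration, not the actual project code:

```python
# Toy spreading-activation controller: sensors -> blackboard,
# goals -> backward energy, highest-activation expertise module acts.

class Blackboard:
    def __init__(self):
        self.state = {}  # sensed facts, e.g. {"enemy_range": 120.0}

class SensorUnit:
    """Reads one fact out of the world and posts it on the blackboard."""
    def __init__(self, name, read_fn):
        self.name, self.read_fn = name, read_fn
    def sense(self, board, world):
        board.state[self.name] = self.read_fn(world)

class ExpertiseUnit:
    """A module that fires when its activation (forward + backward) wins."""
    def __init__(self, name, relevance_fn, act_fn):
        self.name = name
        self.relevance_fn = relevance_fn  # forward energy from sensed state
        self.act_fn = act_fn              # what the module actually does
        self.goal_energy = 0.0            # backward energy fed in by goals
    def activation(self, board):
        return self.relevance_fn(board.state) + self.goal_energy

class GoalUnit:
    """Feeds energy backwards to the modules that serve this goal."""
    def __init__(self, feeds):
        self.feeds = feeds  # {module_name: backward energy}
    def spread(self, modules):
        for m in modules:
            m.goal_energy += self.feeds.get(m.name, 0.0)

def step(board, world, sensors, modules, goals):
    for s in sensors:
        s.sense(board, world)
    for g in goals:
        g.spread(modules)
    best = max(modules, key=lambda m: m.activation(board))
    return best.act_fn(board.state)

# Toy run: enemy at close range, and a goal biasing toward evasion.
world = {"enemy_range": 40.0}
board = Blackboard()
sensors = [SensorUnit("enemy_range", lambda w: w["enemy_range"])]
modules = [
    ExpertiseUnit("evade", lambda s: 1.0 / s["enemy_range"], lambda s: "evade"),
    ExpertiseUnit("close_in", lambda s: 0.01, lambda s: "close_in"),
]
goals = [GoalUnit({"evade": 0.5})]
print(step(board, world, sensors, modules, goals))  # -> evade
```

Note that an ExpertiseUnit's act_fn could itself be a whole nested
activation network; the selection machinery doesn't care what's inside.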
Choosing a sufficiently general set of expertise modules and goals allows
such a network to 'generalize' about its situation and respond
appropriately. They don't end up having 'parameters' as such for a
scenario to /match/; the idea is to give them the tools to work out
locally best solutions and have an emergently /best/ solution come up out
of that. It's bottom-up logical construction rather than traditional
top-down, and I find that, if it's pursued vigorously, it leads, as a
logical framework, to good solutions to lots of typically 'hard AI'
problems, generalized 0g fighter combat being one of those.
> redundancies wherever possible. That means you're always probably
> going to have human pilots, until you can get AIs that think like
> humans.
Why think /like/ humans? Given that humans are just big AIs themselves,
shouldn't you strive to go 'outside their parameters' and turn up alien
solutions to the current situation, so that humans are at a
disadvantage? We have millions of years of evolution to /unlearn/ about
space combat; drone minds do not.
> Agreed. But, given the high hopes AI had in the 50s, and the rather
> bitter reality they face now, it's going to be a while before
> science fact matches science fiction. (Of course, since it appears
> current research in AI is proceeding in specialized as opposed to
> generalized lines, I may have just put egg on my face especially
> when it comes to combat AI, something that has been an active field
> of research for the last fifteen years or so.)
GA Tech has some almost frightening research on automated combat AIs;
they currently have a HMMWV which drives itself and can follow general
directives from command and pursue their accomplishment. More, they have
a simulator that /you can run on your Linux box at home/. That's fairly
low-horsepower compared to systems the military /might/ be putting in
drone craft.
Someone once said 'specialization is for insects, humans are meant for
better.' They neglected to notice that insects make up more biomass on
this planet than any other phylum. Specialization /works/.
> Oh, hell, I know, I was just answering somebody elses comment.
> I haven't kept current, but, at least the time I did my education/
> research, Neural Nets seemed to be the most promising, if you could
> get around their long training time. Of course, since this was
> about three or four years ago, I'm already hopelessly out of date.
> (8-)
NNets are a right bugger unless your problem is almost purely a reactive
one; they don't do changing goal-seeking well at /all/. On the other
hand, they make great expertise modules for activation networks, since
they're quite good at 'making things happen', which can then be handed
over to another module that does a different thing that leads,
eventually, toward the goal.
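That division of labour looks something like this: a tiny fixed-weight
perceptron acting as a purely reactive 'reflex' module, with all the
goal-seeking left to the surrounding activation network that decides
/when/ to invoke it. The weights and the dodge-reflex framing here are
made up for illustration, not trained or taken from any real system:

```python
# A threshold unit used as a reactive expertise module. It never plans;
# it just maps the current sensed state to a fire/don't-fire decision.

def perceptron(weights, bias, inputs):
    """Classic threshold unit: outputs 1 iff weighted sum + bias > 0."""
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1 if s + bias > 0 else 0

def reactive_dodge_module(state):
    """Reflex: dodge when the enemy is closing fast at short range."""
    # weights/bias are illustrative, hand-picked rather than trained
    return perceptron(weights=(1.0, -0.05), bias=-0.5,
                      inputs=(state["closing_speed"], state["range"]))

print(reactive_dodge_module({"closing_speed": 2.0, "range": 10.0}))   # -> 1
print(reactive_dodge_module({"closing_speed": 0.2, "range": 100.0}))  # -> 0
```

In the activation-network picture, this whole function would just be one
ExpertiseUnit's act_fn; the network supplies the goal-seeking the net
itself can't do.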
--
[ Alexander Williams {thantos@alf.dec.com/zander@photobooks.com} ]
[ Alexandrvs Vrai, Prefect 8,000,000th Experimental Strike Legion ]
[ BELLATORES INQVIETI --- Restless Warriors ]
====================================================================
"Here at Ortillery Command we have at our disposal hundred megawatt
laser beams, mach 20 titanium rods and guided thermonuclear
bombs. Some people say we think that we're God. We're not God. We
just borrowed his 'SMITE' button for our fire control system."