From: Alexander Williams <thantos@d...>
Date: Fri, 13 Feb 1998 01:44:32 -0500 (EST)
Subject: Re: Some FT background stuff (guidelines for writers)
On Fri, 13 Feb 1998, Jerry Han wrote:
> Mr. London, fly "Engage Enemy More Closely" (8-)
As a dedicated TOGgie, I'm smart enough to keep my firing line looping
back further away so I can stay at range and apply proper broadsides.
:)
> While you can probably reduce the options a fighter has at any given
> moment to a given set of maneuvers (which strikes me as being limiting),
> thus simplifying the AI problem there, you then face the thousands of
> different possible combat scenarios, many of them unforeseen. This is
> where the human flexibility comes in.
Which is why you build the reactive system bottom up, so that larger
'maneuver clusters' arise spontaneously from reactive acts. If you
design the system to recognize successful results and weight them a bit
more heavily in the future, then you begin to cut down the space of
possibilities to a manageable level. Remember, we're not talking about
old-style traditional AI here but reactive networks; you don't /have/
'routines' of reactions, you react to environmental conditions and work
toward goals.
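
(A minimal sketch of what I mean, in Python; every name here is invented
for illustration, not any particular system. Each reactive behavior bids
on the current situation, bids are scaled by a learned weight, and the
weight gets nudged up when an action works out:)

    class Behavior:
        def __init__(self, name, applies, act):
            self.name = name
            self.applies = applies   # situation -> 0.0..1.0 relevance
            self.act = act           # situation -> concrete action
            self.weight = 1.0        # nudged up/down by outcomes

    class ReactiveNet:
        def __init__(self, behaviors, rate=0.1):
            self.behaviors = behaviors
            self.rate = rate

        def react(self, situation):
            # No scripted 'routine': just take the strongest bid right now.
            best = max(self.behaviors,
                       key=lambda b: b.weight * b.applies(situation))
            return best, best.act(situation)

        def reinforce(self, behavior, success):
            # Successful reactions get weighted a bit more heavily.
            delta = self.rate if success else -self.rate
            behavior.weight = max(0.1, behavior.weight + delta)

Larger 'maneuver clusters' are then just recurring sequences of winning
bids, not anything anyone programmed in as a unit.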
> A possible compromise here would be the use of the AWACS or Ground Control
> model, where the actual fighters are drones, but they are under the close
At some point, of course, humans will be involved, but it's far more
likely to be at the strategic level than the tactical once the amount
of data that has to be juggled surpasses human capacity. In many cases,
we're getting awfully close to that /now/, leaving more and more in the
hands of automated systems given overall directives instead of
micromanaged orders.
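
(Again a toy sketch, same made-up Python names: the human touches only
the directive, and the drones turn it into moment-to-moment actions on
their own:)

    from dataclasses import dataclass

    @dataclass
    class Directive:
        objective: str      # e.g. 'screen the carrier'
        priority: float     # how hard to press it

    class Drone:
        def step(self, directive, local_picture):
            # The drone, not the human, juggles the local sensor data.
            threats = sorted(local_picture, key=lambda t: t['range'])
            if directive.objective == 'screen the carrier' and threats:
                return ('intercept', threats[0]['id'])
            return ('hold_station', None)

    # Strategic level: one order. Tactical level: twelve drones,
    # nobody micromanaging any of them.
    wing = [Drone() for _ in range(12)]
    order = Directive('screen the carrier', priority=0.8)
    acts = [d.step(order, [{'id': 'bogey-1', 'range': 40.0}]) for d in wing]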
> Essentially then, you're assuming that from a given engagement scenario,
> each side will use the same actions, because those actions are, in some
> sense 'optimal.' I'm worried about divergence though; I believe that
> combat, by nature, is a chaotic system and extremely sensitive to
> perturbations in the initial conditions.
Not necessarily; a given set of tactical choices leads to a retaliatory
set of actions. That retaliatory set may well be very different from the
attacker's set; in fact, odds are it /will/ be.
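
(To make that concrete, a toy Python illustration, everything invented:
one deterministic reaction rule, shared by both sides, still yields
different action sets because each side is fed its own local picture:)

    def react(own_pos, foe_pos, closing):
        # One deterministic rule, identical for attacker and defender.
        if closing and abs(own_pos - foe_pos) < 10:
            return 'break_and_rake'
        return 'close_range' if own_pos < foe_pos else 'hold_broadside'

    # Mirror-image situations, asymmetric responses:
    attacker = react(own_pos=0.0,  foe_pos=50.0, closing=True)
    defender = react(own_pos=50.0, foe_pos=0.0,  closing=True)
    print(attacker, defender)   # -> close_range hold_broadside

No randomness, no chaos required: the asymmetry of the situation alone
is enough to make the retaliatory set diverge from the attacker's.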
> Agreed. If my AI is supposed to design starships, I don't give a damn
> if it can appreciate Mozart. (8-)
And if my AI is supposed to fly a fighter, I don't care if it writes
haiku in its downtime.
Of course, in an obFT sense, this sidesteps the issue: what's flying a
fightercraft doesn't matter. FT is a descriptive, not a prescriptive,
system. Whether the fiction is a single human tucked into a command
vessel safely off the map (represented by the player) with all ships
following his dictates, or each ship separately piloted and commanded,
FT makes no distinction.
(Personally, if FT is really about humans piloting, it needs, first and
foremost, /morale rules/ like SGII and DSII, to represent human foibles.)
--
[ Alexander Williams {thantos@alf.dec.com/zander@photobooks.com} ]
[ Alexandrvs Vrai, Prefect 8,000,000th Experimental Strike Legion ]
[ BELLATORES INQVIETI --- Restless Warriors ]
====================================================================
"Here at Ortillery Command we have at our disposal hundred megawatt
laser beams, mach 20 titanium rods and guided thermonuclear
bombs. Some people say we think that we're God. We're not God. We
just borrowed his 'SMITE' button for our fire control system."