Re: [GZG] [OFFICIAL] Question: was Re: [SG3]: What if?
From: Samuel Penn <sam@g...>
Date: Sat, 9 Feb 2008 10:53:03 +0000
Subject: Re: [GZG] [OFFICIAL] Question: was Re: [SG3]: What if?
On Saturday 09 February 2008 06:06:43 john tailby wrote:
> ----- Original Message -----
> From: "John Atkinson" <johnmatkinson@gmail.com>
>
> > You can handwave whatever you like--although I suspect that the
> > economic costs of training infantrymen/controllers AND buying remotes
> > for them AND the recovery and maint assets will be prohibitive for a
> > long, long time.
> >
> > Which won't stop people from designing them, putting them on TV, or
> > inserting them into wargames.
> >
> > It's really a question of what do you want to include?
>
> I could imagine the military backlash against autonomous weapons the first
> time there is a blue on blue incident or the automated weapons get hacked
> or electronically subverted in some way.
Does it matter if your automated weapon takes out a friendly automated
weapon? Does it matter if your automated weapon takes out a friendly
human, but that's the only friendly human casualty in the entire war?
There's always a backlash whenever anything goes wrong, but if the
advantages are seen to outweigh the risks, then they'll stay.
> If all future infantry are plugged into a datanet receiving all sorts of
> sensor information how would you guarantee that it is 100% secure? I don't
> think there is an unhackable network that humans have built so far so why
> would this not continue into the future?
That's been true for at least 60 years. It's claimed we got the
Germans to bomb Dublin by interfering with their radio navigation
beams in WW2.
We very effectively hacked their Enigma system. Just because
something can be compromised and turned against you isn't
necessarily a good enough reason not to use it (again, it depends
on how good the benefits are, versus the risks).
A communication channel *can* be made 100% secure (one time pads),
though secure distribution of the pads can be difficult. As a
system approaches human level intelligence, it may become less
susceptible to viruses and the like as it begins to be able to
make better decisions about what data to trust.
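The one-time-pad point is worth making concrete. A minimal sketch in Python (names like `otp_encrypt` are my own, purely illustrative): security holds only if the pad is truly random, at least as long as the message, kept secret, and never reused.

```python
import secrets


def otp_encrypt(message: bytes, pad: bytes) -> bytes:
    """XOR each message byte with the corresponding pad byte.

    The pad must be truly random, at least as long as the message,
    and used exactly once -- otherwise the scheme loses its
    information-theoretic security.
    """
    if len(pad) < len(message):
        raise ValueError("pad must be at least as long as the message")
    return bytes(m ^ p for m, p in zip(message, pad))


# Decryption is the identical XOR operation with the same pad.
otp_decrypt = otp_encrypt

plaintext = b"attack at dawn"
pad = secrets.token_bytes(len(plaintext))  # fresh random pad, used once
ciphertext = otp_encrypt(plaintext, pad)
assert otp_decrypt(ciphertext, pad) == plaintext
```

The hard part, as noted above, is not the cipher but getting the pad to both ends without interception, which is why one-time pads see limited practical use.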
> I think it is likely that weapons will get smarter and do more, but humans
> will always want to be in control of the decision making process.
Though that decision making process may move further up the chain
as technology progresses.
> I agree with John Atkinson that humans are likely to need to be involved in
> future combat for as long as it resembles infantry combat.
That's almost a tautology. The ~200 years between now and StarGrunt
is a long, long time, and personally I think we'll have AIs at least as
adaptable, unpredictable and flexible as humans by that point.
Until we get to that point, though, we don't know exactly what it's
going to be like.
However, having said all that, I think StarGrunt should stay human-
focused. A game fought between machines is going to be very different.
btw, the anime "Ghost in the Shell" (especially the series), as well
as being very good, concentrates a lot on man-machine interfaces,
automated military units, hacking of interfaced minds and sensor
systems, and what it means to be human.
--
Be seeing you, http://www.glendale.org.uk
Sam. Mail/IM (Jabber): sam@glendale.org.uk
_______________________________________________
Gzg-l mailing list
Gzg-l@lists.CSUA.Berkeley.EDU
http://mead.CSUA.Berkeley.EDU:1337/cgi-bin/mailman/listinfo/gzg-l