From: Samuel Penn <sam@g...>
Date: Sun, 10 Feb 2008 17:29:41 +0000
Subject: Re: [GZG] [OFFICIAL] Question: was Re: [SG3]: What if?
On Sunday 10 February 2008 14:11:38 Damo wrote:
> On Feb 10, 2008, at 5:47 AM, Samuel Penn wrote:
> > On Sunday 10 February 2008 07:51:18 Eric Foley wrote:
> >> I see two ways of balancing out bots that are going to ignore
> >> morale and suppression.
> >
> > [...snip examples of suicidal machines...]
> >
> > But why should they ignore morale and suppression? If they're
> > just drones with no AI, then this makes sense, but if they're
> > controlled by an AI that is (or almost is) as capable as a human,
> > then it may well have self-preservation as an instinct.
> >
> > As an SF example, the Tachikomas (from Ghost in the Shell) do
> > take cover and try to avoid being shot.
>
> I think J.A. is talking about the "gamey" types out there. Ample use
> of robotics doesn't necessarily mean that you ignore morale and
> suppression if you assume these robots are expensive and take time
> to build and maintain. Perhaps FAR into the future you can
> mass-produce these things, but for a certain amount of time robots
> will be expensive -- and if a game is set during that time, no
> operator is going to willingly sacrifice a squad of robots JUST
> because he can.
Actually, this reminds me of an Asimov story ("Risk", I think),
where a robot was meant to test-fly a prototype FTL ship, and
something went wrong. Humans were sent in to find out what the
problem was, because Susan Calvin didn't want to risk another robot.
(But then, Calvin never really liked humans.)
I'm sure there are lots of examples from SF of humans being
considered expendable, and robots kept back for safer duties.
--
Be seeing you, http://www.glendale.org.uk
Sam. Mail/IM (Jabber): sam@glendale.org.uk