My new weekly idea.

Started by Brad Nunnally, 2005-02-08T17:11:53-06:00 (Tuesday)


Brad Nunnally

Ok, so my weekly news topic discussion thing didn't work out all that well. So I thought of something a little more interesting (at least, I hope) as a type of discussion. For those who don't know, I am a philosophy minor. This means I get to ask myself a whole bunch of answerless questions that keep me up to late hours of the night. With this in mind, I have created my own Philosophy of Computers.

Each week I will present you with a question or situation dealing with our field. Some will be ethical in nature (you can thank Dr. Weinberg and CS 321 for those), others just different ideas dealing with computers and robots. Please offer feedback and discussion; someday these might be serious issues.

So here ya go, a real simple one to start with: Should a computer or robot, without being controlled directly by a human, be allowed to kill?

Brad Ty Nunnally :idea:
CAOS Vice-Pres.

 
Brad Ty Nunnally
Business & Usability Consultant at Perficient
Former CAOS Hooligan

bill corcoran

no.  people are not allowed to kill either.
-bill

Chris Swingler

Absolutely positively not, for the reason bill mentioned.
Christopher Swingler
CAOS Web Administrator

Tyler

We should just follow the I, Robot rules, so the robots end up keeping us under control and being in charge.

Honestly, I don't know.  We always have so many emotions involved in our decisions; maybe an unbiased opinion wouldn't be too bad.  Then, however, we have to watch for the Terminator scenario, where the machines gain consciousness and end up going to war with the humans.

This is no easy question.
Retired CAOS Officer/Overachiever
SIUE Alumni Class of 2005

Brad Nunnally

I should probably have been more clear: when I talk about a computer or robot, I mean in the sense that the military would use, not just Joe Schmoe's lil bot or PC. Sorry about that.

Brad Ty Nunnally
CAOS Vice-Pres

"The fear of life is the favorite disease of the 20th century."
William Lyon Phelps
Brad Ty Nunnally
Business & Usability Consultant at Perficient
Former CAOS Hooligan

William Grim

The robot should be allowed to kill under various conditions, assuming the robot has an AI complex enough to be qualified to make such decisions (something like a robot with an IQ at least that of an average human and reasoning abilities to match).

If a person is being harmed by another person, and the only way to save the victim is to kill the aggressive human, then this is a good thing for the robot to do.

If you were the one about to be killed by a human, and you saw a thinking machine that could save you if given the willpower to do so, don't you think it should be allowed to protect you?

Under most circumstances, however, the robot should be strongly opposed to bringing humans to harm.

This basically boils down to Asimov's laws and the various ways robots can and will circumvent them, given enough intelligence and reasoning capability.
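
Here's a toy Python sketch of that priority ordering. Everything in it is hypothetical; the flags on each action stand in for judgments a real robot would have to make, which is exactly the hard part:

Code:
# Toy sketch (hypothetical): Asimov's Three Laws as a strict priority
# filter over candidate actions. Each action is just a dict of flags;
# in reality, deciding what counts as "harm" is where it gets hard.

def choose_action(candidates):
    # First Law: never take an action that injures a human.
    lawful = [a for a in candidates if not a["harms_human"]]
    # Second Law: among lawful actions, prefer those obeying orders.
    obedient = [a for a in lawful if a["obeys_order"]] or lawful
    # Third Law: among those, prefer actions preserving the robot.
    safe = [a for a in obedient if not a["endangers_self"]] or obedient
    return safe[0] if safe else None

actions = [
    {"name": "fire",          "harms_human": True,  "obeys_order": True,  "endangers_self": False},
    {"name": "shield victim", "harms_human": False, "obeys_order": False, "endangers_self": True},
    {"name": "stand down",    "harms_human": False, "obeys_order": False, "endangers_self": False},
]
print(choose_action(actions)["name"])  # -> "stand down"

The circumvention part lives entirely in those flags: a robot that mislabels an action as harmless slides right past the First Law.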
William Grim
IT Associate, Morgan Stanley

DaleDoe

One could argue that we have "robots" killing now.  The military uses computer-guided missiles and bombs.  A pilot only identifies the target and presses a button. :gunfire:

But that isn't the root of the issue Brad is getting at.  I think the issue is whether or not a computer/robot should be given the responsibility of identifying a target--who is to be destroyed and who is not.  I.e., enemy soldiers (regulars), plain-clothes combatants (irregulars), civilians, friendly soldiers, etc.

Distinguishing between these groups is an extremely complex task.  I cannot imagine a programmer being able to code such a complicated discrimination algorithm (especially since enemies would be trying to fool it), so it would likely rely on some sort of learning algorithm.  The problem with learning algorithms is: 1. They are untraceable (you can't prove their correctness).  2. They make mistakes.  That is how they "learn."
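
To put a toy shape on that worry (purely illustrative Python; the scoring function, weights, and threshold are all made up):

Code:
# Hypothetical sketch: a "learned" target classifier only produces a
# score, not a provable rule. The only defensible policy is to abstain
# (defer to a human) whenever the score isn't overwhelming.

def classify(features, weights, threshold=0.95):
    # Whatever the training method, the model boils down to an opaque
    # score in [0, 1]; nothing here traces back to a provable rule.
    score = sum(w * f for w, f in zip(weights, features))
    score = max(0.0, min(1.0, score))
    if score >= threshold:
        return "hostile"
    if score <= 1 - threshold:
        return "non-hostile"
    return "defer to human"  # the case that actually matters

# Made-up sensor readings and "learned" weights:
print(classify([0.8, 0.3, 0.9], [0.5, 0.2, 0.4]))  # -> "defer to human"

And an enemy trying to fool the algorithm is, in effect, trying to push that score across the threshold on purpose.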

So I say no, computers/robots should not be given the full responsibility for identifying human targets.
"If Tyranny and Oppression come to this land, it will be in the guise of fighting a foreign enemy." -James Madison

Jerry

Just a few points:

1. For those who said people are not allowed to kill. Our society recognizes certain circumstances in which killing is allowed to occur: war, self-defense, a police action, capital punishment.

2. Over the last month robots with machine guns and rocket launchers have been deployed in Iraq.  They are currently remote controlled.

3. There are different kinds of robot control. There is autonomous control, where the robot is making all of its own decisions; there is teleoperation (remote control), where the human controller is making all of the decisions; and then there are a variety of shared-control methods, where the human and robot share in the decision making.

While autonomous killing robots seem a bit out of our technical reach at the moment, shared-control killing robots are not. Consider a human who identifies a target and then leaves the rest up to the robot. It would be possible for the robot to track and hunt. Or, more likely, a coordinated multi-robot attack.
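
A rough sketch of what that shared control might look like in code (the class, names, and timeout are all invented for illustration):

Code:
# Hypothetical sketch of shared control: the human designates the
# target and grants authorization; the robot handles tracking, but
# any irreversible step requires a live, unexpired authorization.

import time

AUTH_TIMEOUT = 5.0  # seconds an authorization stays valid (made up)

class SharedControlWeapon:
    def __init__(self):
        self.target = None
        self.auth_time = None

    def designate(self, target):
        # Human decision: pick the target, open the window.
        self.target = target
        self.auth_time = time.monotonic()

    def engage(self):
        # Robot decision: track and fire, but only inside the window
        # the human opened; otherwise it has to re-ask.
        if self.target is None:
            return "no target designated"
        if time.monotonic() - self.auth_time > AUTH_TIMEOUT:
            return "authorization expired; requesting confirmation"
        return "tracking and engaging " + self.target

w = SharedControlWeapon()
w.designate("vehicle-7")
print(w.engage())  # -> "tracking and engaging vehicle-7"

The loaded question is how wide that window gets before "shared" control is autonomous control in practice.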

"Make a Little Bird House in Your Soul" - TMBG...

Jonathan Birch

For Tyler (and anyone else who thinks Asimov's laws are enough to make autonomous robots OK), you should read Jack Williamson's The Humanoid Touch. He manages to posit a universe in which there are robots that effectively follow the three laws of robotics but manage to make human life entirely unworthwhile. The problem is (and OK, this isn't too likely) that they become intelligent enough to realize that the only way to keep people from harming each other/themselves is to remove their free will.

For the main topic though, I don't really think we should have robots killing people. I'm generally opposed to people killing people to begin with. Adding robots to the mix just muddies matters further.

Also, I can't remember the specific reference, but it's been said before that war becomes more horrific each time another level of abstraction is added between the person doing the killing and the person being killed. It's much easier to commit atrocities if you don't have to see the faces of the people you're killing.
...

raptor

Though this is a very complicated question, my comment refers more directly to the last comment made, about it being harder to commit atrocities if you have to be face to face.  My father always makes this sort of comment when we watch a movie set in medieval times. It's always something along the lines of, "Men fought like that for thousands of years.  Slashing limbs off, gashing bodies open, just think of the balls it takes to cut a man's arm off with a broadsword."  Back in that day, you were a goner even if you only lost a limb.
Not that I don't agree with greyblue's last statement; it does take a lot more to kill a man when you have to look him in the eye.  But you also have to think of the brutal things that went on in warfare that were considered normal.  Adding robots to the scene may create a "cleaner" style of warfare: exact execution from precise shots, quick deaths.  I mean, I'd much rather take a bullet to the head and be over with it than have my leg chopped off, suffer, and finally bleed to death.
President of CAOS
Software Engineer NASA Nspires/Roses Grant

Elizabeth Weber

Legally speaking, I think a computer/robot should be *allowed* to kill in any circumstances in which a human is *allowed* to kill.  This includes not just the killing of humans in war, prisons, and hospitals, but also the killing of animals and plants.  (The original question did not state that we were strictly talking about humans here. ;-) )

From a more utopian standpoint, I would say they shouldn't be allowed to kill humans.  I think we're missing an opportunity to use the massive calculating power of a machine to find a peaceful resolution to a situation.  Instead of trying to imbue our creations with morals to match our own, I see the potential for computers/robots to be able to figure out things that we can't and problem-solve without killing.

Though, this *could* lend itself to the type of dystopia alluded to by greyblue.  But if the goal is the absolute respect for human life that we have set forth in our society, then it has been satisfied, has it not?


We certainly already have computers/robots that kill: smart bombs programmed to hit an exact point.  To that extent, I find the mechanized precision to be much better than the old tactic of littering a region with ordnance to try to accomplish the same goal.
~Elizabeth Weber

bill corcoran

in all seriousness, i do think robots should and will be given the ability to put humans to death.  as dr. weinberg mentioned, there certainly are cases where we see a greater good (or a lesser evil) in ending a life.  i'm not ready to say robots should have autonomy, free will, and the ability to kill.  that idea will probably frighten me for the rest of my natural life.

when it comes to properly identifying a target, i don't see how a well-designed robot would intrinsically do any worse than a human.  remember pat tillman?  guy made national news because he turned down big bucks as a pro football player to go fight in afghanistan.  then he got cut in half by a machine gun, which had a us army ranger's finger on the trigger.  Tillman Killed by 'Friendly Fire' (washingtonpost.com)

i think computers have an ever increasing ability to perform calculations far better than most humans ever could.  i'd rather have an emotionless, unbiased, fearless computer identify me based on a myriad of factors that a human could never notice before deciding whether or not to turn me into a rotting corpse.

i'm also certainly in favor of taking a human out of danger when they could be replaced by a robot.  sure i feel sad when some great feat of technology goes up in flames (sometimes, that is), but it's not even on the same plane as losing a human life.

allowing robots to kill seems like a logical progression.  time goes on, history repeats itself, and bla bla bla.  nothing changes but how we treat the symptoms.  i'm pretty sure there will always be unnatural death, wrongful or condoned.  even after we find a source of immortality.  meh.
-bill

DaleDoe

I don't see how a robot could do better than a human--or at least a human assisting the robot.  I'll put it this way:  Would you want a robot whose software was written by Microsoft to be the one to distinguish whether you were hostile or just a civilian?

All us CS majors should know that as the size of the program increases, so does the number of bugs.  And target identification is a very complex task--one for which we can only make an approximation.
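
Just as a back-of-the-envelope illustration (the defect density is an often-quoted industry ballpark, not a measurement of any real system):

Code:
# Hypothetical arithmetic: often-quoted ballpark of a few shipped
# defects per 1000 lines of code, applied to a made-up system size.
defects_per_kloc = 5       # ballpark; varies wildly by team and process
system_size_kloc = 1000    # imagine a million-line targeting system
print(defects_per_kloc * system_size_kloc)  # -> 5000 latent bugs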

I've stared down the barrel of a loaded gun with a certified sociopath (who has a history of violence) at the other end.  I'm much more comfortable with that than with an unfeeling, calculating, lethal robot; people have to live with themselves afterward.

Many will point out that emotion often causes people to kill through anger or poor decision-making.  But I will remind them of the greater atrocities man has committed over the ages when he ceases to feel and coldly calculates. :cry:

By the way Brad, I really like this sort of topic.  It brings the philosopher in me out to have a little fun. :-D
"If Tyranny and Oppression come to this land, it will be in the guise of fighting a foreign enemy." -James Madison

Brad Nunnally

My inspiration for this question was the little robots rolling around Baghdad with machine guns strapped to their backs. I see it as a matter of time before something more "human"-like is developed and given the same capabilities. The way I see it, we won't have just soldier models, but will go even so far as police and security models.

I can appreciate the smooth, logical, and unbiased decisions that could come from a robot. But some decisions that everyday humans make are not based on logic at all. I worry that in most cases the most logical decision would be to kill, where a human would possibly have a different response. It follows the line of "shoot first, ask questions later", only a robot or computer wouldn't ask the questions later, for it wouldn't see the need to second-guess itself.

In response to the way warfare was conducted back in the Middle Ages: I have always been against guns entirely, for the reason that I think it is easier to pull a trigger than to actually injure someone with your own two hands. Even today, with our amazing technology, war and killing are basically getting "easier". My fear is that with a rollout of robot soldiers, war would get too easy. Without having to fear the loss of life, huge armies could be built and thrown at each other, and in the end more innocents would be killed as one army got past the other.

I understand this is a difficult question, and there is no real answer. But chances are we will see something like this in our own futures. If we at least have some idea what to do now, it will be easier to come to some concrete answer later.

Brad Ty Nunnally
CAOS Vice-Pres.



"The way to win an atomic war is to make certain it never starts."
Omar N. Bradley
Brad Ty Nunnally
Business & Usability Consultant at Perficient
Former CAOS Hooligan

Brad Nunnally

Quote from: DaleDoe
  I'll put it this way:  Would you want a robot whose software was written by Microsoft to be the one to distinguish whether you were hostile or just a civilian?

Nice!!!
Brad Ty Nunnally
Business & Usability Consultant at Perficient
Former CAOS Hooligan

bill corcoran

Quote from: DaleDoe
  I'll put it this way:  Would you want a robot whose software was written by Microsoft to be the one to distinguish whether you were hostile or just a civilian?

Uh, yeah, quite possibly.  If the process includes scanning me for registered weapons, some sort of electronic ID, biosigns, using infrared and enhanced optics, etc., I honestly believe a computer could make a more informed, accurate, and quicker decision than someone fearing for their life, eager to shoot anything that looks like a person with a gun for fear of being shot first.
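
something like this, maybe (every signal and name here is invented; the point is requiring agreement across independent checks):

Code:
# Hypothetical sketch: fuse several independent signals (all invented)
# and treat someone as possibly hostile only when the cheap
# exonerating checks fail AND every hostile cue agrees.

def identify(subject):
    # Each key stands in for a real sensor or database lookup.
    if subject.get("friendly_transponder"):     # electronic ID
        return "friendly"
    if subject.get("registered_noncombatant"):  # registry lookup
        return "civilian"
    cues = [
        subject.get("carrying_weapon", False),          # optics
        subject.get("thermal_signature_armed", False),  # infrared
        subject.get("fired_recently", False),           # acoustics
    ]
    # Require agreement across cues, not a single twitchy one.
    return "possible hostile" if all(cues) else "unknown -- hold fire"

print(identify({"carrying_weapon": True}))  # -> "unknown -- hold fire"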

Besides, what the hell kind of question is that?  Would you rather have a person run your computer hardware than an OS written by Microsoft?  Sure, we're imagining a human vs. a computer identifying and exterminating a human, but they would be doing it in completely different ways.  People make mistakes too, mostly because they don't know, or don't have time to think about, the things they need to.  Computers are known to be able to process much more data much faster.  Here's yet another friendly-fire story: Pilots charged in friendly-fire deaths of Canadian soldiers (af.mil)

Obviously, to really approve or disapprove of robotic technology, I think we have to see it implemented.  I know computers have the potential, and I'm sure we'll see what people can come up with.  Let's not just say "people write buggy code".  People ARE buggy code.
-bill

Tyler

This gives a whole new connotation to Human Computer Interaction (HCI).   :-P
Retired CAOS Officer/Overachiever
SIUE Alumni Class of 2005

DaleDoe

Quote from: bill corcoran
  what the hell kind of question is that?

A smart-ass remark. :smartass:

Quote from: Tyler
  a whole new connotation to Human Computer Interaction (HCI)
:lol:

I know there are plenty of examples of "friendly fire".  Maybe the military just needs a better way of identifying their own before shooting at them.  Something like a device to "ping" a group of soldiers to see if they are friendly.
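
A crude sketch of that "ping" idea (real IFF systems exist and work differently; the key and protocol here are invented just to show the shape):

Code:
# Hypothetical challenge-response IFF: a friendly transponder proves
# it holds a shared key without ever transmitting the key itself.

import hashlib
import os

SHARED_KEY = b"not-a-real-key"  # provisioned to friendly units only

def respond(challenge, key=SHARED_KEY):
    return hashlib.sha256(key + challenge).hexdigest()

def ping(transponder):
    challenge = os.urandom(16)  # fresh nonce resists replayed answers
    return transponder(challenge) == respond(challenge)

print(ping(respond))             # -> True  (holds the key: friendly)
print(ping(lambda c: "static"))  # -> False (unknown; NOT proof of foe)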

I agree that we'd have to see it implemented to know how well it would work.  That takes me to another point.  Even if it did work well at destroying enemy targets, there is still a problem:

Whoever controls this technology is given the power to wage war and kill with no threat to his life whatsoever (that's the way things are heading now).  Just pause and think about this for a minute.
:box:

How many more unprovoked wars do you think we would be waging right now if we could do so without risking the life of a single American soldier?  Iran?  North Korea?  We'd be on a bigger crusade to spread "democracy" than the Soviet Union was to spread "communism".  Can anyone say WW3?

And don't anybody give me this "If we had the technology, we would only use it for good" BS.  If any of you believe that you should study some history.
"If Tyranny and Oppression come to this land, it will be in the guise of fighting a foreign enemy." -James Madison

William Grim

But... "If we had the technology, we'd only use it for good."  We don't use technology for bad... EVAR!  :shocking:

Sorry, I had to say it.  :whistling:
William Grim
IT Associate, Morgan Stanley