Welcome to Computer Association of SIUE - Forums.
 

ROBO Cop is here... Well in 5 years

Started by raptor, 2006-01-18T20:19:04-06:00 (Wednesday)


raptor

Saw this while scrounging the web today.  Sort of interesting; I wonder what sort of "bad guy identification" algorithm it will have.

http://times.hankooki.com/lpage/200601/kt2006011617112710160.htm

Scott
President of CAOS
Software Engineer NASA Nspires/Roses Grant

Ross Mead

I think that, on a teleoperation level, this seems like a great idea.  However, when you begin to touch on the artificial intelligence side, I think this goes a bit awry.  I speak namely of the implications of military robots, and, as raptor put it:

Quote: "bad guy identification"

This is an area of artificial intelligence that I don't think we can plausibly reach.  In order for a machine to make reasonably rational decisions about a person, we ourselves must be able to provide it with some method of coming to that decision.  As people, we make assumptions about the emotions and actions of others by making analogies with ourselves.  If someone smiles or cries, we assume they are happy or sad, respectively, because we generally experience those emotions when we are smiling or crying.

Recent machines have been rather successful at reading these simple emotions in people, but in a wartime setting we reach a completely different level.  Take the current situation in Iraq as an example: let's look at our robot's "terrorist detection algorithm".  How does the machine know that the person in front of it is a terrorist?  Well, is the person holding a weapon?  If so, what kind of uniform (if any) is that person wearing?  Does gender or age matter?

For any given scenario, how does the machine differentiate between our soldier with a gun vs. the terrorist with a gun vs. the woman (terrorist?) with a gun vs. the curious child who picks up a gun vs. the woman with a gun protecting her child from the very people the robot is looking for vs. everyone in Texas ( :-P )?  Though this method of simply "looking for the gun" is rather naive, I think you can see the rather obvious point: asking the robot to make rational decisions about the emotions and actions of a human is like asking a dog to analyze a cat.
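Just to make the brittleness concrete, here's a deliberately naive sketch of a "look for the gun" rule.  Everything in it (the feature names, the rule itself) is hypothetical and invented for illustration, not from any real system:

```python
# Hypothetical, deliberately naive "threat" rule: armed and not in a
# friendly uniform => flagged.  All feature names here are made up.

def looks_hostile(person):
    """Flag anyone holding a gun who isn't wearing a friendly uniform."""
    return person.get("has_gun", False) and person.get("uniform") != "friendly"

# The rule can't tell these cases apart, because intent isn't a feature:
soldier       = {"has_gun": True, "uniform": "friendly"}
combatant     = {"has_gun": True, "uniform": None}
curious_child = {"has_gun": True, "uniform": None, "age": 6}

print(looks_hostile(soldier))        # False -- the uniform check happens to work
print(looks_hostile(combatant))      # True
print(looks_hostile(curious_child))  # True -- same observable features, wrong call
```

The child and the combatant present identical features to the rule, so no amount of tuning this kind of check recovers the distinction that matters.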

Sure, some things could be assumed by making analogies between the robot (and its experiences?) and the human, but these assumptions don't truly coincide with one another.  Of course, people make the same types of assumptions about other people in their everyday lives, but are never truly sure of what is going on in the mind of any given person.  I, however, think that when people's lives are at stake, we should leave a human behind the sights of the gun.

... cool story though... :-D

raptor

On top of the "bad guy identification" issues, there is one more thing we've missed here.  These military "robots 'o' death" are going to be controlled via a WiFi network.  Just think of the possible security issues there.  I'm gonna have to make a trip to Korea and hijack myself an entire Korean robot army.  

I can just see a field of army robots, it'll look like a scene right out of Star Wars.

raptor

OOOOOOoooooo AAAhhhhh..

Just made 100 posts  :beer:

Ross Mead

Sorry, I already beat you to the army 'o robots... ;-)



... :-D

... now if I could only get them to work... :-?

'Grats on the 100th post by the way... :wavetowel:

... wow, I just realized I must have hit 100 posts recently... cool! :dance:

Shaun Martin

Shaun Martin
SIUE Alumni
Associate IT Analyst, AT&T Services, Inc. St. Louis, MO.