Can robots learn the cognitive meanings underlying learned behavior?

What I mean is this: can robots or cyborgs discern the underlying meaning behind certain learned behaviors?  My case in point is a clip from the film I, Robot.  Here, we see Will Smith’s character explaining the meaning of an intentional wink to the robot, “Sonny”.  Later, “Sonny” uses an intentional wink to save Smith and the scientist who helped create him from harm.

Now, a wink is a physical gesture humans make thousands of times every day, both voluntarily and involuntarily.  “Sonny” learned that one of the intentional meanings of a wink, when connected with certain other behaviors, is a subtle form of signaling to another person or, in this case, from a robot to a human.  This had not been programmed into “Sonny” per se; rather, he (it) learned this wink’s meaning through observation and cognitive reasoning.  Hence, my initial question.  But perhaps another question follows: if robots or cyborgs can successfully develop not only deductive reasoning skills but deeper cognitive skills as well, does that put them closer to meaningful sentience, and should they therefore be considered new life forms?  And when is it acceptable to refer to “Sonny” as “he” instead of “it”?  (This last question presented itself in a Star Trek: The Next Generation episode, but that will have to wait for later.)