Thursday, November 12, 2009

The Significance of Phillip's goatee...

Phillip shaved his goatee off for November. His post is here:

"I had just shaved off my goatee but she didn't say anything. That's right. She looked at me for a second trying not to feel uncomfortable, then she looked away, and then I had to TELL her I had shaved and then she leaped into the air screaming in laughter, realizing what it was that was freaking her out…"


I'm not very good at noticing details in everyday situations or in the things around me. I can get obsessed with details when it's a particular subject I'm interested in, but not without a conscious effort.


I was talking to Tom yesterday about this (everyone I know has problems with their hair… and you don't want to know mine. Ugh. Wax on, wax off). It reminded me of a book I read years ago, "On Intelligence" by Jeff Hawkins, founder of Palm Computing (the Palm Pilot people) and creator of handwriting recognition software (someone else was there first, but he made a better, cheaper version). The book is about the difference between computers and the human brain, and in particular how differently they perceive and respond to the external world.


Basically, a robot takes in visual information about an object and compares it to a database of stored objects. Then it tells you whether it's a table, a toy car, a reconstituted beef burger or your late buddy Bruce. You can help a robot refine its comparisons, and soon it will be able to tell the difference between a table and a chair, or a toy car and a toy truck; one day it might even tell a beef kebab from a mutton one, and that your friend Bruce with a goatee and without a goatee is still the same person. (I know there is sophisticated software in CCTV systems that can measure eye span, face width, nose bridge and so on to a very accurate degree, seeing through arbitrary disguises like beards, moustaches and facial tattoos.) But it will not think like a higher-order mammal.
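
To make that concrete, here is a minimal sketch (in Python, with labels and feature values invented purely for illustration) of recognition as database comparison: every stored object is a feature vector, and the "robot" labels new input with whichever stored object is nearest.

```python
# Recognition as comparison against a database of stored objects.
# The object names and feature numbers below are made up for the example.
import math

DATABASE = {
    "table":     [4.0, 0.9, 0.0],   # legs, flatness, wheels
    "toy car":   [4.0, 0.1, 4.0],
    "toy truck": [6.0, 0.2, 6.0],
}

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognise(features):
    # Compare the input against every stored object; return the closest label.
    return min(DATABASE, key=lambda label: distance(DATABASE[label], features))

print(recognise([4.0, 0.8, 0.1]))  # closest match: "table"
```

Notice that the robot has to run the comparison against everything it knows, every single time; nothing is assumed in advance.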


The thing is, human beings are obnoxious, and a lot of what we 'see' as 'the external world' is what we assume it to be from past experience. We only ever register the differences. It makes things simpler: we don't have to compare every point of an object with the memory of another object like it just to determine what it is.


When I look at Phillip, I expect him to look a certain way, and I simply assume that he does. I have already established in my head that this is how he looks, and I don't need to re-evaluate his appearance every time in order to have a conversation about other people's beliefs (economic, spiritual or otherwise). But when there is a mismatch between what is assumed and expected and the visual stimuli actually received, something in our brains goes "… hang on a second…" Now if it were a huge 'tache he had grown, I would probably have noticed, but no sensible man in the 21st century would find it fashionable to have his facial hair dominate his features…


I think the first reaction we have to most things that contradict our internalized model of the world is indignation: "What's wrong with you? Why don't you fit into my mental model?" From there, we try to figure out what the difference is and recalibrate our view of the world. Basically, we project our views onto the world, take in the external stimuli, and compare the two to see if there is a mismatch. Robots take in the information first and then compare it with previous 'experience'. We predict; they don't.
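
As a toy illustration of that difference (everything here is invented: the feature names, the "expected Phillip"), the predictive version carries a model forward and only reacts when the input contradicts it:

```python
# The "brain" checks incoming stimuli against a prediction, not against
# a database of every past encounter: only mismatches ever surface.
# Feature names and values are invented stand-ins for visual detail.
expected_phillip = {"height": "tall", "glasses": True, "goatee": True}

def brain_sees(person):
    # Keep only the features that contradict the prediction.
    surprises = {k: v for k, v in person.items()
                 if expected_phillip.get(k) != v}
    if surprises:
        print("... hang on a second ...", surprises)
        expected_phillip.update(surprises)  # recalibrate the mental model
    # No mismatch: nothing registers, and the conversation carries on.

brain_sees({"height": "tall", "glasses": True, "goatee": False})
# prints: ... hang on a second ... {'goatee': False}
```

Everything that matches the prediction simply never registers, which is why a missing goatee can sit in plain sight until someone points it out.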
