Saturday, June 4, 2016

On Brains and Computers

It's been a while since I've posted here, but perhaps the best impetus for doing so is when I have more to say on a subject than can reasonably be covered in a Facebook post.  I had recently shared this article by famous psychologist Robert Epstein, claiming that the brain is not a computer: that it does not process information, retrieve knowledge, or store memories.  This elicited a rather mocking response from Jeffrey Shallit, a computer scientist who teaches at the University of Waterloo.  I'm afraid my credentials pale in comparison to either of them.  My degree is in sociology and anthropology, not computer science, psychology, or philosophy.  My knowledge of computers and AI comes mostly second-hand from philosophers and scientists I've read on the subject.  I am simply a philosophy enthusiast with a keen interest in issues of the philosophy of mind.  Nonetheless, however limited my knowledge of computers or mathematics, I think I may be able to offer some insight on the situation at hand.

One of the central disagreements here seems to be over the definition of "information."  While I'm prepared to award the point to Mr. Shallit here, I think we should at least attempt to understand what Epstein is getting at.  Humans experience inputs just as computers do, but computers, as I understand it (correct me if I'm wrong here), take in data bit by bit, building their models and programs from the ground up.  The human mind, as Husserl astutely pointed out, encounters not merely the sense impressions of which Locke spoke, but objects.  There is a holistic quality to perception that moves from the general to the specific, not the other way around.  I have no qualms about continuing to call this "information," so long as we understand that we are talking about very different kinds of information.

Another point of contention, and one on which I have a great deal more to say, is the subject of memory.  Epstein makes a point about representation using an experiment with a dollar bill.  He instructed students to draw a dollar bill from memory in as much detail as possible.  They were then asked to draw a dollar with one in front of them.  Suffice it to say, the second drawing was much more accurate, despite the fact that the students had doubtless seen countless dollar bills in their lifetimes.  Epstein says that this is because there is no representation of the dollar stored in the brain.  This is about the point where Shallit flies off the handle, calling it "utter nonsense" and pointing out that if the students had no representation of a dollar bill in their memories, they would not be able to depict it from memory at all.  What I think Epstein is getting at, though, is that what we recall is not so much the dollar as a picture, but the dollar as an object that we use.  In Heideggerian terms, we remember its "readiness-to-hand."  The student drawings show the number of the denomination and a picture of George Washington.  These are the features that tell them that such a bill is redeemable for $1 worth of goods or services.  So the "representation" involved is one of meaning.  Memory is much more a constructive act than a simple referring back to pre-existing representations.  The nature of memory is more about semantics than syntax.  It is about what something means to the person in question.

There is then the point of contention about whether memories are stored in neurons.  Epstein calls the idea "preposterous," to which Shallit responds by sharing this article about research in mice.  The researchers genetically engineered mice so that their neurons were sensitive to light.  They used electric shock to create a fear memory in the hippocampus region, and then found that they were able to trigger this memory by shining a laser on the neurons "where the memories were stored."  I use scare quotes here because that statement seems rather question-begging.  What I assume they mean is that the laser was used on the same cluster of neurons in the hippocampus that lit up during the original experience.  This is an interesting phenomenon with memory: there does not seem to be any "memory part of the brain."  Rather, the same neurons involved in the original experience light up when the memory is recalled, essentially helping to reconstruct the memory.

Memory works on the basis of self-similarity.  The laser shining upon this cluster of neurons is not so dissimilar to a familiar scent triggering a memory of home.  The conclusion that the memories are actually stored in the cells themselves seems a bit hasty to me, but if you're still convinced, I'd like to direct you to another experiment involving planarian worms that were decapitated and, upon regrowing their heads (along with their brains), were able to perform learned behaviors they had acquired before losing their heads.

What's going on here?  An alternative to the neural theory of memory, proposed by Henri Bergson in his 1896 book Matter and Memory, may shed some light on this issue.  He posited that in order to understand memory, we must first have a theory of time.  He looked at the effects of various lesions and types of aphasia, and noted that what is affected in these cases is not specific memories, but the ability to recall memories.  There is thus reason to see memory retrieval as a function of the brain without suggesting that the memories themselves are stored in the brain.  What he suggested is that the past is not some fleeting thing that once was and no longer is.  The past, he argued, is available to our senses in much the same way as the present.  The difference is that the past is causally closed while the present is causally open.

Our perception of the past does not necessarily differ from our perception of the present in degree of intensity: some memories can be quite powerful and even overwhelming, but we never mistake them for the present.  Why?  Because in the present, our senses perceive what J.J. Gibson called "affordances": opportunities for action.  Our bodies evolved to move about our environment in a particular kind of way, and our senses have essentially been "tuned" to perceive the environment in a way that is consistent with the body's capabilities.  So we perceive the present in a way that our bodies can act upon.

How, then, do we perceive the past?  In a similar way, but since we cannot change the past, we simply scan it for relevant events that could have bearing on our present situation.  Much of this is unconscious, relying on implicit memory, such as knowing how to read, or type, or drive a car.  These are habits, which have been decontextualized from any specific memory by repetition.  What Bergson calls "pure memory" involves the conjuring of an image representing a past event.  This is not necessarily reliable, as the dollar experiment shows.  The image doesn't have to match the real-life event in all the details.  It is only those features that are significant for the subject that are conjured up.  In the case of the $1 bill, the denomination and figure depicted were the features that bore the most significance to the people drawing them.

Past memories, for Bergson, are not bits of data stored in the brain.  They are a kind of holographic field that is tuned into by the brain that has encountered those events before (holography had not yet been invented in Bergson's time, but his theory of perception is perhaps best understood in terms of the brain as a holographic receiver).  He also talked about the flow of time, which he called "duration," as necessarily implying a kind of continuous memory.  How is it that when I finish a sentence, I remember how I started it?  If we're to take a physicalist account of memory seriously, we'd have to imagine each word being stored as a new memory and called upon to pronounce each subsequent word.  Each distinct sound would have to be a new memory.  If I say the word "conversation," when I get to the "n," I have to remember "conversatio-" and "conversat-" and "conversa-" and "convers-" and "conver-" and so on, and each one of these would have to be stored as a new memory.  Instead, we experience these things as a continuous flow.  This, Bergson argues, is no illusion, but is in fact the very essence of time and memory.  The past is right there for us, ready to be called upon for use in the present.  Our memories are not bits of data stored away in our neurons, but the field of a past that is always present to us, available to the brain to be tuned into as needed.

Moving along, Shallit cites the use of artificial neural nets to support his point about the computational nature of the brain.  From what I can see, this interesting development actually shows how differently the brain processes information compared to ordinary computers.  These networks work rather differently from the rule-based programming that other computers use.  They are used in tasks like facial recognition, which is extremely difficult to program into a computer with explicit rules.  There are learning algorithms involved, but rather than rules being set out beforehand, the rules arise from the learning dynamics of the system itself.  I'm willing to award the point here to Shallit, so long as we're aware that the computation going on here is of a very different kind.  This is a case of computers being redesigned to be more brain-like, rather than a discovery that the brain is more computer-like.
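
To make the contrast concrete, here is a minimal sketch of what "learning rather than rule-following" means in practice - a toy of my own devising, not anything from Shallit's post.  In the Python snippet below, nobody ever writes down the rule for the logical AND function; a single artificial neuron arrives at weights that encode it simply by being corrected on examples:

    # A toy perceptron: the AND "rule" is never programmed in explicitly;
    # the weights are nudged by the error on each example until they encode it.
    examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    weights = [0.0, 0.0]
    bias = 0.0
    learning_rate = 0.1

    for epoch in range(20):
        for (x1, x2), target in examples:
            # Predict by thresholding a weighted sum, then learn from the error.
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output
            weights[0] += learning_rate * error * x1
            weights[1] += learning_rate * error * x2
            bias += learning_rate * error

    print(weights, bias)  # learned parameters that now behave like AND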

At one point, Shallit challenges Epstein to design a walking robot, let alone a running one, without algorithms.  While on the surface this may prove a point against Epstein, I think it actually undermines Shallit's point in the larger scheme.  It may be true that some algorithms are necessary for such a task, but Shallit seems to think that such a robot must be designed with a central "brain" doing all the calculation.  That is how computationalists have generally tended to view the brain's role in movement.  Robots such as the famous ASIMO are designed on such principles, and they require a lot of energy to do such computations - far more energy than our own brains consume in performing such a routine task.  There is a different view of cognition, known as the embodied view, which suggests that the brain has no need to do so much processing, because it is situated within a body that does its own calculations.  How does a spider, with its tiny brain, manage to walk on eight legs without tripping over itself?  Try making a computer program that can do that, and I can almost guarantee it will require more processing power than an actual spider's brain uses.  What does the spider do instead?  It lets its legs do the thinking for it.  The brain here is less a control center than an intermediary for all the relatively independent body functions.  The mind may be localized in the brain, but it is not confined to it.  Mind is more of an extensive brain-body-environment matrix, with feedback loops going in both directions.  Very different from the kind of robot full of pre-programmed algorithms that Shallit seems to conjure up.
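
To illustrate, here is a toy sketch of my own (not any real robot's controller) of what decentralized control looks like in miniature: six "legs," each following a single local reflex that consults only its immediate neighbor, keep up a stable alternating gait with no central planner computing anyone's trajectory.

    # Each leg obeys one local rule: "if my neighbor's phase differs from
    # mine, flip."  An alternating, tripod-style gait sustains itself with
    # no central controller computing leg trajectories.
    NUM_LEGS = 6
    state = [i % 2 for i in range(NUM_LEGS)]  # 1 = stance, 0 = swing

    for t in range(6):
        print(t, state)
        state = [
            1 - state[i] if state[(i - 1) % NUM_LEGS] != state[i] else state[i]
            for i in range(NUM_LEGS)
        ]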

In his postscript, Shallit talks about calculations that computers can do easily but which even the best mathematicians cannot perform.  Here he probably undermines his point more than anything, for it demonstrates that things that are easy for computers, such as exact arithmetic on arbitrarily large numbers, are extremely difficult for human brains, while things that are relatively easy for us, such as walking, require enormously complicated programs for a computer to do.  They have very different operating principles, and as such have very different strengths and weaknesses.
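
The asymmetry is easy to demonstrate on the computer's side.  In Python, for instance, exact arithmetic on numbers hundreds of digits long - far beyond any human calculator - takes a fraction of a second:

    # Python integers have arbitrary precision built in, so arithmetic on
    # numbers far beyond human capability is trivial for the machine.
    a = 3 ** 1000           # a 478-digit number
    b = 7 ** 900            # a 761-digit number
    product = a * b         # computed exactly, effectively instantly
    print(len(str(product)), "digits")  # -> 1238 digits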

In regard to the question of mathematical algorithms, I'd like to return to Bergson, this time to his 1907 book Creative Evolution.  In this book, he posited that there are two kinds of order.  One is spontaneous, creative, qualitative, and temporal.  It is accessed by intuition, or what Whitehead calls "presentational immediacy."  The other kind of order is geometric, pattern-seeking, quantitative, and spatial.  It is accessed through intellect.  Intellect has a certain reflective, backward-looking quality.  It looks at the products of the first type of order and abstracts them from time, turning them into a mathematical construct that can be measured and analyzed.  A whole host of philosophical problems result from confusing the second type of order with the first.  If I move my hand in a particular motion, you may be able to plot the points through which it passed and put them on a graph, but we fall into error when we assume that the motion itself is the movement from one point to another.  The motion itself was a seamless movement through time, and the mathematics describing it are a timeless abstraction.  The fact that something can be described by an algorithm does not mean that the algorithm itself is responsible for the phenomenon in question.  This is how we get into messes like Zeno's Paradox: Achilles is easily able to overtake the tortoise because his movement is through time, not a geometric movement through a recursive set of points.  The latter is the mathematical description that reflects the intuitive motion of the act itself.
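
For what it's worth, mathematics can dissolve the paradox on its own terms: in the standard textbook setup, where each remaining gap is half the previous one, the infinitely many gaps form a convergent geometric series,

    \[
    \sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1,
    \]

so the whole distance is covered in finite time.  But Bergson's point stands: even this tidy sum is a description after the fact, an abstraction from the seamless motion, not the motion itself.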

So, are our brains computers?  Perhaps, but not like any computer currently known to us.  I would say that Epstein is sloppy with the details in making his point, while Shallit gets many of the details right but misses the bigger picture.  Organisms may be computers in some sense, but they are organic computers that are best understood on their own terms, rather than in terms of the machines we have produced on a very different set of principles.
