A paper I read as part of my Intro to Cog Sci course that I found particularly intriguing was Anne Treisman’s Feature Binding, Attention and Object Perception, in which she discusses the Binding Problem and proposes quite an elegant solution to it, backed by quantitative research whose results she published in the paper. In this post I will discuss the Binding Problem and where previously proposed solutions have failed. In my next post, I will analyze Treisman’s proposed solution.
What is the Binding Problem?
The Binding Problem is quite an insidious one: something each of us solves every living moment of our lives, and yet we aren’t even aware of its existence. As Treisman notes, this in and of itself is a “testimony to how well, in general, our brains solve it”.
Think of it this way. Let’s consider our visual experience of an object as simple as an apple. What we are actually seeing is the red color, the round shape, the smooth texture, the depth, and many more characteristics put together. Of course, when we see the apple, we perceive all of these different characteristics of our visual experience immediately, so an apple is just an apple. However, the visual system is modular; different parts of our brains code for different characteristics of the visual field. This is supported by observations of certain medical conditions like cerebral achromatopsia, in which brain damage in the occipital and temporal lobes causes people to lose color vision whilst maintaining the ability to perceive all other characteristics of vision (such as depth or shape). Furthermore, fMRI data show that different parts of our brains ‘light up’ when we are asked to describe different characteristics of the same image, such as shapes, colors, and direction of motion, even though there are no direct physical (neural) interconnections between these areas.
The Binding Problem consists of finding a bridge between the two aspects of our visual experience highlighted above: our unified perception of an object and our dispersed processing of that same object. Simply put, how do our brains piece together the jigsaw puzzle of our visual experience, where each piece is a different visual characteristic?
There’s certainly a risk of mismatching characteristics and objects, yet we rarely do mismatch them. Why? Interestingly, the answer lies in the question: it can be found by studying the instances when we do mismatch characteristics and objects. This is what Treisman does through her experiments. But first, let’s consider some other proposed solutions.
Other proposed solutions to the Binding Problem
Before I discuss these other solutions, let me add a bit of specificity to the Binding Problem. The issue is that different neurons or groups of neurons (which we will call ‘units’ hereafter) specialize in different visual characteristics. When a unit that codes for the color red and a unit that codes for motion ‘fire’ simultaneously, how do we know whether we are seeing a single moving red object, or a stationary red object alongside a moving object of some other color, say yellow?
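To make that ambiguity concrete, here is a minimal toy sketch in Python (my own illustration, not something from Treisman’s paper; the feature names and scenes are purely hypothetical). It shows that the bare set of active feature units is consistent with more than one scene, which is exactly the information the brain must somehow recover.

```python
# Toy illustration (hypothetical): the set of feature units firing at one
# moment does not, by itself, say which feature belongs to which object.

# Suppose these feature detectors are currently active:
active_units = {"red", "yellow", "moving", "stationary"}

# Two very different scenes produce exactly the same set of active units:
scene_a = [{"red", "moving"}, {"yellow", "stationary"}]  # moving red object + still yellow object
scene_b = [{"red", "stationary"}, {"yellow", "moving"}]  # still red object + moving yellow object

def units_fired(scene):
    """Union of all features present in a scene -- all that the detectors report."""
    fired = set()
    for obj in scene:
        fired |= obj
    return fired

print(units_fired(scene_a) == active_units)  # True
print(units_fired(scene_b) == active_units)  # True
# Both comparisons are True: the 'which feature goes with which object'
# information is lost once we only look at which units fired.
```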
Direct Conjunction Coding
Well, what if a single unit only codes for a single object? If a single neuron only attends to one object, it can simply match together all the characteristics it perceives without any risk of mismatching. In other words, a 1-to-1 relationship between the unit and the object to be coded eliminates any possibility of mismatching. This is the first proposal.
An early issue is that we do not ‘see’ only a single object; our visual fields are large, so we see multiple objects at the same time. However, this could be resolved if the unit-object coding occurred in the earliest stages of visual processing, where receptive fields are small enough to cover only a single object.
Another issue is that a single unit would then need to code for various visual characteristics in order to code for a whole object. There is evidence suggesting this may be true; one study Treisman considers, by Tanaka (1993), shows single cells in the inferior temporal (IT) area that code for “relatively complex combinations of features”. However, this study illustrates a possibility at best; the animal subjects of the experiments were shown only single objects, so binding might already have occurred before these units in the IT area fired.
The most pertinent issue with this solution (and where it fails) lies in the finite number of neurons we have versus the practically unlimited number of possible objects (or conjunctions of features) we can perceive. “Direct conjunction coding” would put a hard limit on how many different things we can see and perceive. However, we can perceive a virtually infinite number of things: endless permutations of different characteristics coming together to form a cohesive object. If some three-headed creature flailed about on your doorstep, it wouldn’t be invisible. You probably just ‘saw’ it in your imagination right now. With this consideration in mind, we need a solution that allows unlimited permutations of characteristics to be perceived.
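A rough back-of-the-envelope count makes the problem vivid (the dimensions and value counts below are my own illustrative assumptions, not figures from the paper): even a modest feature space demands billions of dedicated conjunction units.

```python
# Back-of-the-envelope sketch (illustrative, assumed numbers): if every object
# is a conjunction of one value per feature dimension, a 'one unit per
# conjunction' scheme needs one unit for every possible combination.

from math import prod

# Hypothetical feature dimensions and how many distinguishable values each has.
dimensions = {
    "color": 30,
    "shape": 100,
    "texture": 20,
    "size": 15,
    "motion": 10,
    "location": 500,
}

conjunction_units_needed = prod(dimensions.values())
print(f"{conjunction_units_needed:,}")  # 4,500,000,000

# Billions of dedicated units for this toy feature space alone, and the count
# multiplies again with every added dimension or value -- yet novel
# conjunctions (that three-headed creature) are still perfectly perceivable.
```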
Synchronized Neural Activity
Where Direct Conjunction Coding fails, Synchronized Neural Activity tries to pick up. This theory suggests that “units that fire together would signal the same object”. Essentially, think of a particular synchrony of units as another, higher-level unit. For simplicity’s sake, let’s call these higher units ‘synchronies’. Synchronized Neural Activity offers a possible solution to the ‘limit’ problem that Direct Conjunction Coding faces, because there is an effectively unlimited number of possible ‘synchronies’: as many as there are possible permutations of synchronized units.
Moreover, ‘synchronies’ have been experimentally observed. Crick and Koch (1990) identified ‘synchronies’ in cats responding to a moving bar. Furthermore, ‘synchronies’ have even been identified consisting of units dispersed across different locations of the brain with no structural interconnections between them (Engel et al., 1990).
However, there have also been contradicting empirical results that count against this theory. For one, neural synchrony seems to be detectable only for moving stimuli and nearly absent for stationary stimuli. Yet we still successfully and accurately bind the features of stationary objects.
More notably, as Treisman notes, this theory doesn’t offer a complete answer to the Binding Problem in the first place. The issue of mismatching still exists when we consider objects that share features. Take an apple and an orange: they are different objects with different colors but the same round shape. As a result, while perceiving them simultaneously, the synchrony for the apple overlaps with that for the orange. So, although the theory offers a possible way of “binding within dimensions” (or characteristics), it does not explain how the different synchronizations are bound to a single object when two or more synchronizations are in play at once (which they typically are).
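Here is a crude sketch of that overlap (my own illustration, not from the paper; the phase labels and feature sets are assumed). If ‘which object’ is signalled only by a shared firing phase, a feature value that belongs to both objects has no single phase to adopt.

```python
# Crude illustration (assumed): tag each object with a firing phase and ask
# every feature unit to fire at the phase of 'its' object. A feature shared by
# two objects cannot settle on one phase, so the tagging becomes ambiguous.

apple  = {"red",    "round", "smooth"}
orange = {"orange", "round", "dimpled"}

object_phase = {"apple": 0.0, "orange": 0.5}  # arbitrary phase label per object

def assign_phases(objects):
    """Map each feature to the set of phases it would be asked to fire at."""
    phases = {}
    for name, features in objects.items():
        for feature in features:
            phases.setdefault(feature, set()).add(object_phase[name])
    return phases

for feature, p in sorted(assign_phases({"apple": apple, "orange": orange}).items()):
    tag = "ambiguous!" if len(p) > 1 else "fine"
    print(f"{feature:8s} -> phases {sorted(p)}  ({tag})")
# 'round' ends up needing two phases at once: synchrony alone does not say
# which roundness belongs to which fruit.
```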
So, it seems that synchrony is more a way of keeping already-bound features bound together than a way of actually binding them in the first place.
For length’s sake, I will continue the discussion of Treisman’s solution in my next post!
Till then,
Goodbye.
References:
- Treisman, A. (1998). Feature Binding, Attention and Object Perception. Philosophical Transactions: Biological Sciences, 353(1373), 1295-1306. Retrieved May 25, 2020, from www.jstor.org/stable/56884
- LaRock, E. (2006). Why Neural Synchrony Fails to Explain the Unity of Visual Consciousness. Behavior and Philosophy, 34, 39-58. Retrieved May 25, 2020, from www.jstor.org/stable/27759519