Posted by Dave on May 8, 2010
I’m here at the Vision Sciences Society 2010 convention in Naples, Florida — a fantastic meeting of vision scientists from around the world. So far the wifi is working well, so I’m going to try liveblogging a session. The session I’m at is “Motion: Perception,” moderated by Scott Stevenson.
First Speaker: Albert van den Berg, The vestibular frame for visual perception of head rotation
When you move through a scene, there are two aspects of movement: translation, and rotation.
They ran an fMRI study, simulating rotation of the head while participants lay still in the scanner.
Participants had to wear a special contact lens to see the screen because it was extremely close to the head.
The display “rotated” at varying speeds around three different axes. It looks sort of like what you see in sci-fi movies, when a ship is about to enter warp speed and the stars stream past.
They were able to identify the brain activity associated with each type of rotation. For each individual, this occurred in a slightly different region of the brain.
Second Speaker: Scott Stevenson, Suppression of retinal image motion due to fixation jitter is directionally biased
Cool, a live animation of the human retina! — Extremely jerky motion, even though the eye is fixated steadily on an object.
Question: why don’t we normally see this jitter? Our eyes are sensitive enough to perceive it.
Suggestion: somehow the visual system actively suppresses this jitter.
In their study, they tracked the motion of the eye and fed it back into the stimulus in real time: a small target moves on the screen, driven by the viewer’s own eye movements, as the viewer watches.
The stabilization isn’t perfect, and it breaks down when the eyes make dramatic movements.
When the target perfectly matches the eye movement, it fades due to the Troxler effect.
So they changed the direction of target motion relative to the eye.
Then they asked the subjects what they perceived.
As the difference between the target’s motion and the eye’s motion increased, the perceived motion decreased!
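To make the gaze-contingent manipulation concrete, here is a minimal sketch (my own illustration, not code from the talk; `retinal_slip`, the `gain` parameter, and the velocity values are all hypothetical) of how rotating the fed-back eye velocity changes the residual motion on the retina:

```python
import math

def retinal_slip(eye_vel, angle_deg, gain=1.0):
    """Rotate the eye-velocity vector by angle_deg and scale it by gain to
    get the target's screen velocity, then return the residual motion on
    the retina (target velocity minus eye velocity)."""
    ex, ey = eye_vel
    a = math.radians(angle_deg)
    # Target moves with the eye, but along a rotated direction.
    tx = gain * (ex * math.cos(a) - ey * math.sin(a))
    ty = gain * (ex * math.sin(a) + ey * math.cos(a))
    return (tx - ex, ty - ey)

# 0 degrees: target perfectly tracks the eye, so no retinal motion remains
print(retinal_slip((1.0, 0.5), 0))    # → (0.0, 0.0)
# 180 degrees: target moves opposite the eye, roughly doubling retinal motion
print(retinal_slip((1.0, 0.5), 180))
```

At 0° the retinal image is fully stabilized, which is when the Troxler fading kicks in; at other angles a residual retinal motion remains, and how visible that residual motion is, as a function of its direction relative to the eye’s own motion, is what the study measured.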
Third Speaker: Satoshi Shioiri, Comparing the static and flicker MAEs with a cancellation technique in adaptation stimuli
[Lost wifi during this session, sorry!]
Wow, really cool motion after-effect. The top part of the screen is moving in the opposite direction of the bottom.
Basically this is done by moving one part of the display (fat vertical bars) left-to-right and another part (skinny vertical bars) right-to-left. The right-to-left portion decreases in contrast from top to bottom. Then when the animation stops, there’s a really bizarre contortion. I may see if I can contact him to get the demo online.
Fourth Speaker: Deborah Apthorp, The neural correlates of motion streaks: an fMRI study
Motion streaks — those streaks added by artists to show motion in a picture — may reflect something our own visual system computes.
When you look at actual motion, your perceptual system responds in a similar way to how it responds to motion streaks in pictures.
So they placed viewers in an fMRI machine, and showed them both dots in motion and streaks (in separate blocks).
Same areas of the brain (V1-V3) show activity with fast motion and streaks. Slow motion has a different activity pattern from both.
Voxels that responded to a particular orientation of an object also responded to fast motion in the same direction.
Fifth Speaker: Peter Scarfe, Perception of motion from the combination of temporal luminance ramping and spatial luminance gradients
The researchers created a complex stimulus that led to a strong motion aftereffect, even though the stimulus itself wasn’t moving.
Very complicated stimuli, difficult to explain — but a very cool study!
Sixth Speaker: Andrew Rider, Position-variant perception of a novel ambiguous motion field
Question: we often perceive motion from limited information, and depending on what information is available, we perceive it differently. How?
Basically, it seems that when we see a bunch of different moving things, we analyze each local motion, then pick the simplest way to integrate them all.
But this could be done in a bunch of different ways — how do we actually do it?
They developed an interesting stimulus whose apparent motion depends on where you look. Hard to explain.
Room now filling up for Stuart Anstis’s presentation
Seventh Speaker: Stuart Anstis, Perceptual grouping of ambiguous motion
How do we detect motion of groups of spots?
He’s showing bunches of spots of different shapes and asking when we see the spots moving as one big group rather than as several small groups.
Overall, we tend to parse the maximum number of items into the minimum number of groups!
Very cool, very funny demos.
And now, everyone sings “happy birthday” to Stuart
He says, “I’m not looking forward to being 60.”