Neural Dynamics of Motion Integration and Segmentation within and across Apertures


dc.contributor.author Grossberg, Stephen en_US
dc.contributor.author Mingolla, Ennio en_US
dc.contributor.author Viswanathan, Lavanya en_US
dc.date.accessioned 2011-11-14T19:00:16Z
dc.date.available 2011-11-14T19:00:16Z
dc.date.issued 2000-02 en_US
dc.description.abstract A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature tracking signals. Sparse feature tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: Directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature tracking signals with the help of competitive signals. Then a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. This all happens no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature tracking signals typically win over ambiguous motion signals. Model MST cells which encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given. en_US
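The directional grouping and feedback stages summarized in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the model's actual equations: the function name, the gain on feature tracking signals, and the multiplicative feedback rule are all hypothetical stand-ins for the MST-to-MT winner-take-all capture described above.

```python
import numpy as np

# Illustrative sketch (not the published model): sparse but reliable
# feature-tracking signals from line terminators are amplified and then
# compete with ambiguous interior motion signals; the winning direction
# is fed back to boost consistent activity and suppress the rest
# ("motion capture").

N_DIRS = 8  # motion directions sampled every 45 degrees (assumed)

def directional_grouping(ambiguous, feature_tracking,
                         gain=3.0, steps=5, rate=0.5):
    """Toy winner-take-all competition over direction cells.

    ambiguous        -- activity per direction from line interiors
    feature_tracking -- sparse terminator signals, amplified by `gain`
    Returns direction-cell activity after `steps` feedback iterations.
    """
    act = np.asarray(ambiguous, float) + gain * np.asarray(feature_tracking, float)
    for _ in range(steps):
        winner = np.argmax(act)
        feedback = np.zeros(N_DIRS)
        feedback[winner] = 1.0
        # Feedback excites the winning direction and inhibits the others.
        act = act + rate * (feedback * act - (1.0 - feedback) * act)
        act = np.clip(act, 0.0, None)
    return act

# Ambiguous interior signals favour the line-normal direction (index 2),
# but an amplified terminator signal at the true direction (index 0)
# wins the competition and captures the percept.
ambiguous = np.array([0.3, 0.2, 0.6, 0.2, 0.1, 0.1, 0.1, 0.1])
tracking = np.zeros(N_DIRS)
tracking[0] = 0.4
final = directional_grouping(ambiguous, tracking)
print(int(np.argmax(final)))  # → 0
```

The point of the toy is only the qualitative claim from the abstract: once amplified, sparse feature tracking signals dominate the winner-take-all stage, and recurrent feedback then suppresses directionally inconsistent activity.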
dc.description.sponsorship Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333, IRI-94-01659); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657) en_US
dc.language.iso en_US en_US
dc.publisher Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems en_US
dc.relation.ispartofseries BU CAS/CNS Technical Reports;CAS/CNS-TR-2000-004 en_US
dc.rights Copyright 2000 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and / or special permission. en_US
dc.subject Motion integration
dc.subject Motion segmentation
dc.subject Motion capture
dc.subject Aperture problem
dc.subject Feature tracking
dc.subject MT
dc.subject MST
dc.subject Neural networks
dc.title Neural Dynamics of Motion Integration and Segmentation within and across Apertures en_US
dc.type Technical Report en_US
dc.rights.holder Boston University Trustees en_US
