
dc.contributor.author: Grossberg, Stephen (en_US)
dc.contributor.author: Mingolla, Ennio (en_US)
dc.contributor.author: Viswanathan, Lavanya (en_US)
dc.date.accessioned: 2011-11-14T19:00:16Z
dc.date.available: 2011-11-14T19:00:16Z
dc.date.issued: 2000-02 (en_US)
dc.identifier.uri: http://hdl.handle.net/2144/2252
dc.description.abstract: A neural model is developed of how motion integration and segmentation processes, both within and across apertures, compute global motion percepts. Figure-ground properties, such as occlusion, influence which motion signals determine the percept. For visible apertures, a line's terminators do not specify true line motion. For invisible apertures, a line's intrinsic terminators create veridical feature tracking signals. Sparse feature tracking signals can be amplified before they propagate across position and are integrated with ambiguous motion signals within line interiors. This integration process determines the global percept. It is the result of several processing stages: Directional transient cells respond to image transients and input to a directional short-range filter that selectively boosts feature tracking signals with the help of competitive signals. Then a long-range filter inputs to directional cells that pool signals over multiple orientations, opposite contrast polarities, and depths. This all happens no later than cortical area MT. The directional cells activate a directional grouping network, proposed to occur within cortical area MST, within which directions compete to determine a local winner. Enhanced feature tracking signals typically win over ambiguous motion signals. Model MST cells which encode the winning direction feed back to model MT cells, where they boost directionally consistent cell activities and suppress inconsistent activities over the spatial region to which they project. This feedback accomplishes directional and depthful motion capture within that region. Model simulations include the barberpole illusion, motion capture, the spotted barberpole, the triple barberpole, the occluded translating square illusion, motion transparency, and the chopsticks illusion. Qualitative explanations of illusory contours from translating terminators and plaid adaptation are also given. (en_US)
dc.description.sponsorship: Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409); National Science Foundation (IRI-97-20333, IRI-94-01659); Office of Naval Research (N00014-92-J-1309, N00014-95-1-0657) (en_US)
dc.language.iso: en_US (en_US)
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems (en_US)
dc.relation.ispartofseries: BU CAS/CNS Technical Reports; CAS/CNS-TR-2000-004 (en_US)
dc.rights: Copyright 2000 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. the copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission. (en_US)
dc.subject: Motion integration
dc.subject: Motion segmentation
dc.subject: Motion capture
dc.subject: Aperture problem
dc.subject: Feature tracking
dc.subject: MT
dc.subject: MST
dc.subject: Neural networks
dc.title: Neural Dynamics of Motion Integration and Segmentation within and across Apertures (en_US)
dc.type: Technical Report (en_US)
dc.rights.holder: Boston University Trustees (en_US)