Show simple item record

dc.contributor.author: Browning, Andrew N.
dc.contributor.author: Grossberg, Stephen
dc.contributor.author: Mingolla, Ennio
dc.date.accessioned: 2011-11-14T18:50:48Z
dc.date.available: 2011-11-14T18:50:48Z
dc.date.issued: 2008-12
dc.identifier.uri: http://hdl.handle.net/2144/2218
dc.description.abstract: Visually guided navigation through a cluttered natural scene is a challenging problem that animals and humans accomplish with ease. The ViSTARS neural model proposes how primates use motion information to segment objects and determine heading for purposes of goal approach and obstacle avoidance in response to video inputs from real and virtual environments. The model produces trajectories similar to those of human navigators. It does so by predicting how computationally complementary processes in cortical areas MT-/MSTv and MT+/MSTd compute object motion for tracking and self-motion for navigation, respectively. The model retina responds to transients in the input stream. Model V1 generates a local speed and direction estimate. This local motion estimate is ambiguous due to the neural aperture problem. Model MT+ interacts with MSTd via an attentive feedback loop to compute accurate heading estimates in MSTd that quantitatively simulate properties of human heading estimation data. Model MT- interacts with MSTv via an attentive feedback loop to compute accurate estimates of the speed, direction, and position of moving objects. This object information is combined with heading information to produce steering decisions wherein goals behave like attractors and obstacles behave like repellers. These steering decisions lead to navigational trajectories that closely match human performance.
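The abstract's steering rule, in which goals act as attractors on heading and obstacles as repellers, can be illustrated with a minimal sketch of attractor/repeller heading dynamics. This is not the ViSTARS implementation; the functional form (exponential decay of obstacle influence with angle and distance) and all parameter names here are illustrative assumptions.

```python
import math

def steering_rate(heading, goal_angle, obstacles,
                  k_goal=1.0, k_obs=2.0, c_ang=1.0, c_dist=0.5):
    """Sketch of attractor/repeller steering (hypothetical parameters).

    heading, goal_angle: angles in radians.
    obstacles: list of (angle, distance) pairs.
    Returns a turning rate: negative feedback pulls heading toward the
    goal; each obstacle pushes heading away, with influence decaying
    exponentially in angular offset and distance.
    """
    # Goal behaves like an attractor on heading
    rate = -k_goal * (heading - goal_angle)
    # Each obstacle behaves like a repeller
    for obs_angle, obs_dist in obstacles:
        delta = heading - obs_angle
        rate += (k_obs * delta
                 * math.exp(-c_ang * abs(delta))
                 * math.exp(-c_dist * obs_dist))
    return rate
```

For example, with heading aligned to the goal and no obstacles the rate is zero; an obstacle slightly to the right of the current heading produces a leftward (negative) turning rate.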
dc.description.sponsorship: National Science Foundation (SBE-0354378, BCS-0235398); Office of Naval Research (N00014-01-1-0624); National Geospatial Intelligence Agency (NMA201-01-1-2016)
dc.language.iso: en_US
dc.publisher: Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems
dc.relation.ispartofseries: BU CAS/CNS Technical Reports; CAS/CNS-TR-2008-007
dc.rights: Copyright 2008 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. the copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission.
dc.subject: Optic flow
dc.subject: Navigation
dc.subject: MT
dc.subject: MST
dc.subject: Motion segmentation
dc.subject: Object tracking
dc.title: Cortical Dynamics of Navigation and Steering in Natural Scenes: Motion-Based Object Segmentation, Heading, and Obstacle Avoidance
dc.type: Technical Report
dc.rights.holder: Boston University Trustees

