Show simple item record

dc.contributor.author  Fischl, Bruce  en_US
dc.contributor.author  Schwartz, Eric  en_US
dc.date.accessioned  2011-11-14T19:07:13Z
dc.date.available  2011-11-14T19:07:13Z
dc.date.issued  1996-11  en_US
dc.identifier.uri  http://hdl.handle.net/2144/2332
dc.description.abstract  The goal of many early visual filtering processes is to remove noise while at the same time sharpening contrast. A historical succession of approaches to this problem, starting with the use of simple derivative and smoothing operators, and the subsequent realization of the relationship between scale-space and the isotropic diffusion equation, has recently resulted in the development of "geometry-driven" diffusion. Nonlinear and anisotropic diffusion methods, as well as image-driven nonlinear filtering, have provided improved performance relative to the older isotropic and linear diffusion techniques. These techniques, which either explicitly or implicitly make use of kernels whose shape and center are functions of local image structure, are too computationally expensive for use in real-time vision applications. In this paper, we show that results which are largely equivalent to those obtained from geometry-driven diffusion can be achieved by a process which is conceptually separated into two very different functions. The first involves the construction of a vector field of "offsets", defined on a subset of the original image, at which to apply a filter. The offsets are used to displace filters away from boundaries to prevent edge blurring and destruction. The second is the (straightforward) application of the filter itself. The former function is a kind of generalized image skeletonization; the latter is conventional image filtering. This formulation leads to results which are qualitatively similar to contemporary nonlinear diffusion methods, but at computation times that are roughly two orders of magnitude faster, allowing applications of this technique to real-time imaging.
An additional advantage of this formulation is that it allows existing filter hardware and software implementations to be applied with no modification, since the offset step reduces to an image pixel permutation, or look-up table operation, after application of the filter.  en_US
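The two-stage process described in the abstract (construct an offset field, then apply a conventional filter at the displaced locations) can be sketched as follows. This is a hypothetical illustration, not the report's actual algorithm: here each pixel's offset is chosen by searching nearby windows for the most homogeneous one (lowest variance), so the filter is effectively displaced away from boundaries before an ordinary mean filter is applied. The function name and all parameters are illustrative.

```python
from statistics import mean, pvariance

def offset_filter(image, window=3, search=2):
    """Illustrative offset-then-filter sketch (assumed, not the report's method).

    image: list of rows (lists of floats). For each pixel, search offsets
    within +/- `search` pixels for the `window` x `window` patch of lowest
    variance, then return that patch's mean. Displacing the filter window
    toward the most homogeneous neighborhood keeps it off edges, so flat
    regions are smoothed without blurring boundaries.
    """
    h, w = len(image), len(image[0])
    half = window // 2

    def pixel(y, x):
        # Clamp coordinates to the border (edge padding).
        y = min(max(y, 0), h - 1)
        x = min(max(x, 0), w - 1)
        return image[y][x]

    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best = None  # (variance, mean) of the best candidate patch
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    # Patch centered at the offset location (y+dy, x+dx).
                    patch = [pixel(y + dy + j, x + dx + i)
                             for j in range(-half, half + 1)
                             for i in range(-half, half + 1)]
                    cand = (pvariance(patch), mean(patch))
                    if best is None or cand[0] < best[0]:
                        best = cand
            out[y][x] = best[1]
    return out
```

On a noise-free step edge this keeps both flat regions at their original values, whereas a plain centered mean filter would blur the transition; the offset search is the expensive part, which is why the report emphasizes that it can be precomputed as a pixel permutation or look-up table.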
dc.language.iso  en_US  en_US
dc.publisher  Boston University Center for Adaptive Systems and Department of Cognitive and Neural Systems  en_US
dc.relation.ispartofseries  BU CAS/CNS Technical Reports; BU CAS/CNS-TR-1996-033  en_US
dc.rights  Copyright 1996 Boston University. Permission to copy without fee all or part of this material is granted provided that: 1. The copies are not made or distributed for direct commercial advantage; 2. the report title, author, document number, and release date appear, and notice is given that copying is by permission of BOSTON UNIVERSITY TRUSTEES. To copy otherwise, or to republish, requires a fee and/or special permission.  en_US
dc.subject  Anisotropic diffusion  en_US
dc.subject  Nonlinear adaptive filtering  en_US
dc.subject  Image enhancement  en_US
dc.subject  Active and real-time vision  en_US
dc.title  Adaptive Nonlocal Filtering: A Fast Alternative to Anisotropic Diffusion for Image Enhancement  en_US
dc.type  Technical Report  en_US
dc.rights.holder  Boston University Trustees  en_US

