
Amalgamation of Medical Image using Wavelet Theory

Simi P Thomas

Abstract


Image fusion has become a common term in medical diagnostics and treatment. It is used when multiple patient images are registered and overlaid or merged to provide additional information. Fused images may be created from multiple images of the same imaging modality, or by combining information from multiple modalities, such as magnetic resonance imaging (MRI) and computed tomography (CT). In radiology and radiation oncology, these images serve different purposes: for example, CT images are used more often to ascertain differences in tissue density, while MRI images are typically used to diagnose brain tumors. For accurate diagnoses, radiologists must integrate information from multiple image formats. The fusion criterion is to minimize the difference between the fused image and the input images. For medical diagnosis, the edges and outlines of the objects of interest are more important than other information, and images with higher contrast contain more edge-like features. From this viewpoint, a new medical image fusion scheme is proposed based on an improved wavelet coefficient contrast, defined as the ratio of the maximum of the detail components to the local mean of the corresponding approximation component. Visual experiments and quantitative assessments demonstrate the effectiveness of this method compared to existing image fusion schemes, especially for medical diagnosis.
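The contrast-based fusion rule described in the abstract can be sketched as follows. This is an illustrative implementation, not the author's code: it assumes a single-level Haar wavelet decomposition and simplifies the "local mean of the approximation" to the per-pixel approximation magnitude; the function names (`haar_dwt2`, `fuse_contrast`) are invented for illustration.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar transform: pair columns first, then rows.
    Returns (LL, LH, HL, HH) subbands at half resolution."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # column averages
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # column differences
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    a = np.empty((2 * h, w)); d = np.empty((2 * h, w))
    a[0::2, :] = ll + lh; a[1::2, :] = ll - lh
    d[0::2, :] = hl + hh; d[1::2, :] = hl - hh
    x = np.empty((2 * h, 2 * w))
    x[:, 0::2] = a + d; x[:, 1::2] = a - d
    return x

def fuse_contrast(img_a, img_b, eps=1e-6):
    """Fuse two registered images of equal (even) size using a
    contrast-based coefficient selection rule (a sketch of the idea,
    not the paper's exact scheme)."""
    ll_a, *det_a = haar_dwt2(np.asarray(img_a, dtype=float))
    ll_b, *det_b = haar_dwt2(np.asarray(img_b, dtype=float))
    fused_det = []
    for da, db in zip(det_a, det_b):
        # contrast = detail magnitude relative to the approximation
        ca = np.abs(da) / (np.abs(ll_a) + eps)
        cb = np.abs(db) / (np.abs(ll_b) + eps)
        fused_det.append(np.where(ca >= cb, da, db))
    fused_ll = (ll_a + ll_b) / 2.0  # average the approximations
    return haar_idwt2(fused_ll, *fused_det)
```

Because the Haar pair (`haar_dwt2`, `haar_idwt2`) reconstructs perfectly, fusing an image with itself returns that image unchanged, which is a convenient sanity check for any fusion rule of this form.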

 

Cite this Article:

Thomas Simi P. Amalgamation of Medical Image using Wavelet Theory. Research & Reviews: Discrete Mathematical Structures. 2015; 2(1): 9–14p.


Keywords


Modality, magnetic resonance image, computed tomography



