Analysis of Image Fusion Techniques based on Quality Assessment Metrics

Affiliations

  • School of Computing Science and Engineering, VIT University, Chennai - 600127, Tamil Nadu, India

Abstract


Objective: The objective of image fusion is to combine the relevant and essential information from several source images into a single image that is more informative than any individual source, so that the fused result is better suited to human visual perception and to image processing tasks such as segmentation, feature extraction and object recognition. Methods: This paper presents the basic concepts, the various types and levels of fusion, and a literature review of non-transform and transform based image fusion techniques from the perspective of their applications, advantages and limitations. Findings: The performance of existing image fusion methods, together with the assessment metrics that determine the quality of fused images, is evaluated and theoretically analyzed. Computational complexity is found to be considerably lower in Discrete Cosine Transform (DCT) based methods. Applications: Image fusion has been effectively applied in many fields, such as remote sensing, military affairs, machine vision and medical imaging.
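
As an illustration of the kind of quality assessment discussed above, the following is a minimal Python sketch (not from the paper) of two widely used no-reference metrics for fused images: entropy and spatial frequency. The function names, the NumPy dependency and the synthetic test image are assumptions for illustration only; higher entropy and spatial frequency generally indicate a more informative, sharper fused result.

    import numpy as np

    def entropy(img):
        # Shannon entropy (bits/pixel) of an 8-bit grayscale image
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def spatial_frequency(img):
        # Spatial frequency: combined row/column gradient activity
        img = img.astype(np.float64)
        rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
        cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
        return float(np.sqrt(rf ** 2 + cf ** 2))

    # Illustrative usage on a synthetic "fused" image
    fused = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
    print("Entropy:", entropy(fused))
    print("Spatial frequency:", spatial_frequency(fused))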

Keywords

Frequency Domain, Image Fusion, Multi-Focus, Quality Assessment Metrics, Spatial Domain.

Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.