Enhancing Multi Exposure Images Using Convolution Neural Network

Authors

  • Sunitha Nandhini A, Assistant Professor, Department of Computer Science and Engineering, Sri Krishna College of Technology, Coimbatore, Tamil Nadu, India
  • Anjani A L, Department of Computer Science and Engineering, Sri Krishna College of Technology, Coimbatore, Tamil Nadu, India
  • Indhuja R, Department of Computer Science and Engineering, Sri Krishna College of Technology, Coimbatore, Tamil Nadu, India
  • Jeevitha D, Department of Computer Science and Engineering, Sri Krishna College of Technology, Coimbatore, Tamil Nadu, India

DOI:

https://doi.org/10.32628/CSEIT195242

Keywords:

Single Image Contrast Enhancement, Multi-Exposure Image Fusion, Convolutional Neural Network.

Abstract

Due to poor lighting conditions and the limited dynamic range of digital imaging devices, recorded photos are often under-/over-exposed and have low contrast. Most previous single image contrast enhancement (SICE) methods adjust the tone curve to correct the contrast of an input image. Those methods, however, often fail to reveal image details because of the limited information in a single image. On the other hand, the SICE task can be better accomplished if additional information can be learned from suitably collected training data. In this paper, we propose to use a convolutional neural network (CNN) to train a SICE enhancer. One key issue is how to construct a training data set of low-contrast and high-contrast image pairs for end-to-end CNN learning. To this end, we build a large-scale multi-exposure image data set, which contains 589 carefully selected high-resolution multi-exposure sequences with 4,413 images. Thirteen representative multi-exposure image fusion and stack-based high dynamic range imaging algorithms are used to generate the contrast-enhanced images for each sequence, and subjective experiments are conducted to screen the best-quality one as the reference image of each scene. With the constructed data set, a CNN can be easily trained as the SICE enhancer to improve the contrast of an under-/over-exposed image. Experimental results demonstrate the advantages of our method over existing SICE methods by a significant margin.
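The core of the approach described above is supervised pair construction: every distorted exposure in a sequence is paired with a screened reference image so a CNN can be trained end-to-end. The pairing step can be sketched with a toy NumPy snippet; note that `simulate_exposures` and its EV-scaling model are hypothetical illustrations, not the paper's actual pipeline, which captures real multi-exposure sequences and selects references via fusion algorithms and subjective screening.

```python
import numpy as np

def simulate_exposures(reference, evs=(-2, -1, 0, 1, 2)):
    """Simulate an exposure stack from a reference image in [0, 1].

    Hypothetical model: each exposure value (EV) scales linear
    intensity by 2**ev and clips to [0, 1], mimicking under- and
    over-exposure. The paper captures real sequences instead.
    """
    return [np.clip(reference * 2.0 ** ev, 0.0, 1.0) for ev in evs]

def make_training_pairs(reference, evs=(-2, -1, 1, 2)):
    """Pair each distorted exposure (CNN input) with the reference
    image (CNN target), as in end-to-end SICE training."""
    return [(img, reference) for img in simulate_exposures(reference, evs)]
```

Each (input, target) pair would then feed a CNN trained with a pixel-wise loss; in the paper the target is not a clean capture but the best fusion/HDR result chosen by subjective experiments.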


Published

2019-04-30

Section

Research Articles

How to Cite

[1]
Sunitha Nandhini A, Anjani A L, Indhuja R, Jeevitha D, "Enhancing Multi Exposure Images Using Convolution Neural Network," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 5, Issue 2, pp. 223-228, March-April 2019. Available at doi: https://doi.org/10.32628/CSEIT195242