A Hybrid Deep Learning Approach for Explicit Content Detection in Images on Social Media and Internet

Authors

  • Pradeep N. Fale, Research Scholar, Department of Computer Science and Engineering, Bhagwant University, Ajmer, Rajasthan, India
  • Dr. Krishan Kumar Goyal, Dean, Faculty of Computer Application, RBSMTC, Agra, India
  • Dr. Shivani, Bhagwant University, Ajmer, Rajasthan, India

Keywords

Abusive Content Detection, Natural Language Processing, Social Media, Deep Learning, Machine Learning

Abstract

The exponential growth of explicit content poses serious challenges to everyday life, especially where children and minors have unrestricted access to the internet. Manual screening of visual content is also costly: in Malaysia, for example, all films, both local and foreign, must obtain suitability approval before they can be distributed or shown to the public, which imposes a heavy censorship burden on service providers such as Unifi TV that must screen the image content of every TV channel. Motivated by this problem, this paper proposes hybrid Deep Learning (DL) models, specifically CNN+SVM and CNN+XGBoost, to improve the recognition of explicit images in visual content. Transfer learning was applied to a previously trained model to address a new binary classification problem distinguishing explicit from non-explicit images. The effectiveness of the developed models is evaluated on a recently compiled dataset of explicit and non-explicit photographs. In experiments on this dataset, the CNN+XGBoost approach achieved the best accuracy of 99.81%, compared with 76.49% for the CNN+SVM model. For a broader evaluation, we also compare the proposed systems with standard state-of-the-art classifiers, viz. NB, LDA, SVM, RF, KNN, DT, and LR.
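
The sketch below illustrates the kind of hybrid pipeline the abstract describes: a pre-trained CNN used as a frozen feature extractor via transfer learning, with an XGBoost binary classifier on top (an SVM could be substituted in the same way). The VGG16 backbone, 224x224 input size, and hyperparameters are assumptions for illustration only, not the exact configuration used by the authors.

# Minimal sketch (assumed configuration): CNN features + XGBoost classifier
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Transfer learning: reuse ImageNet weights, keep only the convolutional base.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(paths):
    """Run each image through the frozen CNN and return a feature matrix."""
    feats = []
    for p in paths:
        img = image.load_img(p, target_size=(224, 224))
        x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
        feats.append(backbone.predict(x, verbose=0).ravel())
    return np.vstack(feats)

# `paths` and `labels` (1 = explicit, 0 = non-explicit) would come from the dataset.
# X = extract_features(paths)
# X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, stratify=labels)
# clf = XGBClassifier(n_estimators=300, max_depth=6, learning_rate=0.1)
# clf.fit(X_tr, y_tr)
# print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))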

Published

2022-06-30

Section

Research Articles

How to Cite

[1]
Pradeep N. Fale, Dr. Krishan Kumar Goyal, Dr. Shivani, "A Hybrid Deep Learning Approach for Explicit Content Detection in Images on Social Media and Internet," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 8, Issue 3, pp. 126-134, May-June 2022.