Video Classification for Video Service Providers: A Survey

Authors

  • Sourav Joshi  Computer Engineering Department, SavitriBai Phule University, Maharashtra, India
  • Ameya Karhadkar  Computer Engineering Department, SavitriBai Phule University, Maharashtra, India
  • Niranjan Thatte  Computer Engineering Department, SavitriBai Phule University, Maharashtra, India
  • Kunwar Chopra  Computer Engineering Department, SavitriBai Phule University, Maharashtra, India
  • Tanaji Khadtare  Computer Engineering Department, SavitriBai Phule University, Maharashtra, India

DOI:

https://doi.org/10.32628/CSEIT2062109

Keywords:

Deep Learning, Analog Signals, Online Television, Artificial Intelligence, CNN, ResNet-50, Transfer Learning

Abstract

Video is one of the most interesting data modalities. From a dimensionality and size perspective, videos are among the most intuitive data types, enabling fast object recognition and learning. Video classification is an important task for archiving digital content for various video service providers. Video-uploading platforms such as YouTube are collecting enormous datasets, empowering deep learning research. Since videos are an important source for recognizing human activity, video classification becomes a critical job for video service providers. This survey paper studies various deep learning, transfer learning and hybrid model approaches. Video data normally occurs as a continuous, analog signal. For a computer to process this video data, the analog signal must be converted to a non-continuous, digital format. In a digital format, the video data can be stored as a series of bits on a hard disk or in computer memory. A video sequence is displayed as a series of frames. Each frame is a snapshot of a moment in time of the motion-video data and is very similar to a still image. When the frames are played back in sequence on a display device, a rendering of the original video data is created. In real-time video the playback rate is 30 frames per second, the minimum rate necessary for the human eye to blend the individual frames into a continuous, smoothly moving image. A single frame of video data can be quite large. A frame with a resolution of 512 x 482 contains 246,784 pixels; if each pixel carries 24 bits of color information, the frame requires 740,352 bytes of memory or disk space to store. At 30 frames per second, a 10-second video sequence would therefore exceed 222 megabytes. It is clear there can be no computer video without at least one efficient method of video data compression.
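
As a quick sanity check of the storage figures quoted above, the short Python sketch below (illustrative only, not part of the surveyed work; the function name and default parameters are assumptions) reproduces the arithmetic for an uncompressed 512 x 482, 24-bit, 30 fps sequence.

    # Illustrative sketch: uncompressed storage cost of raw video frames,
    # using the figures from the abstract (512 x 482, 24-bit color, 30 fps).
    def uncompressed_video_size(width=512, height=482, bits_per_pixel=24,
                                fps=30, duration_s=10):
        """Return (pixels per frame, bytes per frame, total bytes)."""
        pixels_per_frame = width * height                          # 246,784 pixels
        bytes_per_frame = pixels_per_frame * bits_per_pixel // 8   # 740,352 bytes
        total_bytes = bytes_per_frame * fps * duration_s           # ~222 MB for 10 s
        return pixels_per_frame, bytes_per_frame, total_bytes

    px, frame_bytes, total = uncompressed_video_size()
    print(f"{px:,} pixels/frame, {frame_bytes:,} bytes/frame, "
          f"{total / 1e6:.1f} MB for a 10-second clip at 30 fps")

The roughly 222 MB needed for just ten seconds of raw footage is why the abstract concludes that efficient video compression is a prerequisite for any practical video processing pipeline.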

References

  1. Andrej Karpathy, George Toderici, Sanketh Shetty, "Large-Scale Video Classification with Convolutional Neural Networks (CNN)", 2018, Stanford University
  2. Sami Abu-El-Haija, Nisarg Kothari, Joonseok Lee, Paul Natsev, George Toderici, Balakrishnan Varadarajan, Sudheendra Vijayanarasimhan, "YouTube-8M: A Large-Scale Video Classification Benchmark", 2016
  3. Joe Yue-Hei Ng, Matthew Hausknecht, Sudheendra Vijayanarasimhan, Oriol Vinyals, Rajat Monga, George Toderici, "Beyond Short Snippets: Deep Networks for Video Classification", 2016
  4. Dillon Graham, Seyed Hamed Fatemi Langroudi, Christopher Kanan, Dhireesha Kudithipudi, "Convolutional Drift Networks for Video Classification", 2017, IEEE Rebooting Computing
  5. Sorin Liviu Jurj, Flavius Opritoiu, Mircea Vladutiu, "Identification of Traditional Motifs using Convolutional Neural Networks", 2018, IEEE 24th International Symposium for Design and Technology in Electronic Packaging (SIITME)
  6. A. Sai Bharadwaj Reddy and D. Sujitha Juliet, "Transfer Learning with ResNet-50 for Malaria Cell-Image Classification", 2019, International Conference on Communication and Signal Processing
  7. Mohammad Ashraf Russo, Alexander Filonenko, Kang-Hyun Jo, "Sport Classification in Sequential Frames Using CNN and RNN", 2018, Graduate School of Electrical Engineering, University of Ulsan, Ulsan, Republic of Korea
  8. Ou Ye, Yao Li, Guimin Li, Zhanli Li, Tong Gao, Tian Ma, "Video Scene Classification with Complex Background Algorithm Based on Improved CNNs", 2018, School of Computer Science and Technology, Xi'an University of Science and Technology, Xi'an, China
  9. Ling Shao, Fan Zhu, Xuelong Li, "Transfer Learning for Visual Categorization: A Survey", 2015, IEEE
  10. H. Tian, H. Cen Zheng, S.-C. Chen, "Sequential Deep Learning for Disaster-Related Video Classification", 2018, IEEE conference
  11. Jing Li, "Parallel Two-Class 3D-CNN Classifiers for Video Classification", 2017, International Symposium on Intelligent Signal Processing and Communication Systems, Shandong Management University, Jinan, China
  12. Mohammad Ashraf Russo, Laksono Kurnianggoro, Kang-Hyun Jo, "Classification of Sports Videos with Combination of Deep Learning Models and Transfer Learning", 2019, International Conference on Electrical, Computer and Communication Engineering
  13. Mounira Hmyada, Ridha Ejbali, Mourad Zaied, "Program Classification in a TV Stream using Deep Learning", 2017, International Conference on Parallel and Distributed Computing, Applications and Technologies
  14. Jungheon Lee, Youngsan Koh, Jihoon Yang, "A Deep Learning Based Video Classification System using Multimodality Correlation Approach", 2017, Sogang University, South Korea
  15. Yuxi Hong, Chen Ling, Zuocheng Ye, "End-to-End Soccer Video Scene and Event Classification with Deep Transfer Learning", 2018, Tsinghua University, China
  16. Inad Aljarrah and Duaa Mohammad, "Video Content Analysis using CNN", 2018, Jordan University of Science and Technology
  17. Jiajun Wu, Yinan Wu, Kai Yu, "Deep Multiple Instance Learning for Image Classification and Auto-Annotation", 2015, Massachusetts Institute of Technology
  18. Ifat Abramovich, Tomer Ben-Yehuda, Rami Cohen, "Low Complexity Video Classification using RNNs", 2018, Israel Institute of Technology
  19. H. Wei, M. Laszewski, N. Kehtarnavaz, "Deep Learning Based Person Detection and Classification for Far-Field Video Surveillance", 2018, University of Texas at Dallas
  20. Aaron Chadha, Alhabib Abbas, Yiannis Andreopoulos, "Video Classification with CNNs: Using the Codec as a Spatio-Temporal Activity Sensor", 2019, IEEE Transactions on Circuits and Systems for Video Technology

Published

2020-05-30

Section

Research Articles

How to Cite

[1]
Sourav Joshi, Ameya Karhadkar, Niranjan Thatte, Kunwar Chopra, Tanaji Khadtare, "Video Classification for Video Service Providers: A Survey", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 6, Issue 3, pp. 242-248, May-June 2020. Available at doi: https://doi.org/10.32628/CSEIT2062109