AI and RFID Based Multipurpose Authentication and Surveillance System

Authors

  • S. I. Ranjitha, Department of Computer Science and Engineering, Excel Engineering College, Komarapalayam, Namakkal, Tamilnadu, India
  • E. Deepankumar, M.E., Department of Computer Science and Engineering, Excel Engineering College, Komarapalayam, Namakkal, Tamilnadu, India

DOI:

https://doi.org/10.32628/CSEIT2410273

Keywords:

Face Recognition, Deep Learning, Attendance System, Feature Extraction, Notification

Abstract

A central goal of face recognition in surveillance settings is to identify a person captured on camera or in an image, which requires matching faces between still photographs and video clips. Automatic face identification from high-quality still images can perform satisfactorily, but video-based face recognition is difficult to achieve at a comparable level. Video sequences have several drawbacks compared with still-image face recognition. First, CCTV cameras usually produce low-quality imagery: there is more background noise, and moving or out-of-focus subjects cause blur. Second, video sequences often have lower resolution; the actual facial region may be as small as 64 × 64 pixels when the subject is far from the camera. Finally, variations in the facial image, including lighting, expression, pose, occlusion, and motion, are more pronounced in video. By constructing multiple "bridges" linking still images and video frames, the proposed method can effectively handle the mismatched distributions between still photos and videos. In this research, the Grassmann algorithm is combined with RFID technology to develop a still-to-video matching strategy that matches photos against videos and identifies unknown faces. Feature vectors are extracted with deep learning techniques and compared using the Grassmann method. When an unknown face is detected, the system sends an SMS and email notification, and it subsequently generates reports for the attendance system.
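A minimal sketch of how the still-to-video matching step described in the abstract could look, assuming deep feature vectors have already been extracted for both the gallery still photos and the detected video face track. This is an illustration, not the authors' implementation: `subspace_basis`, `grassmann_distance`, `match_track`, `GALLERY`, `send_alert`, and `mark_attendance` are hypothetical names, and the subspace rank and decision threshold are placeholder values.

```python
import numpy as np

def subspace_basis(feature_vectors, rank=5):
    # feature_vectors: (num_samples, feature_dim) deep features for one identity
    # or one video face track. The SVD of the transpose yields orthonormal
    # columns spanning the subspace of those samples.
    u, _, _ = np.linalg.svd(np.asarray(feature_vectors, dtype=float).T,
                            full_matrices=False)
    return u[:, :rank]

def grassmann_distance(basis_a, basis_b):
    # Singular values of A^T B are the cosines of the principal angles between
    # the two subspaces; the norm of those angles is the Grassmann geodesic distance.
    cosines = np.clip(np.linalg.svd(basis_a.T @ basis_b, compute_uv=False), -1.0, 1.0)
    return float(np.linalg.norm(np.arccos(cosines)))

def match_track(track_features, gallery, threshold=0.8):
    # gallery: dict mapping person_id -> basis built from that person's still photos.
    # Returns the closest enrolled identity, or None when the face looks unknown.
    track_basis = subspace_basis(track_features)
    distances = {pid: grassmann_distance(track_basis, basis)
                 for pid, basis in gallery.items()}
    best_id = min(distances, key=distances.get)
    return best_id if distances[best_id] < threshold else None

# Hypothetical end-to-end flow:
#   person = match_track(cnn_features_of_video_track, GALLERY)
#   if person is None:
#       send_alert(via_sms=True, via_email=True)   # unknown face -> SMS/email notice
#   else:
#       mark_attendance(person)                    # known face -> attendance report
```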

Published

19-07-2024

Issue

Vol. 10 No. 2 (2024)

Section

Research Articles

How to Cite

[1] S. I. Ranjitha and E. Deepankumar, “AI and RFID Based Multipurpose Authentication and Surveillance System”, Int. J. Sci. Res. Comput. Sci. Eng. Inf. Technol, vol. 10, no. 2, pp. 589–601, Jul. 2024, doi: 10.32628/CSEIT2410273.
