Long Short-Term Memory Recurrent Neural Network Architectures

Authors

  • S. Kavitha  Assistant Professor, Department of Computer Science, Tamil University (Established by the Govt. of Tamilnadu), Thanjavur, Tamil Nadu, India
  • A. Senthil Kumar  Research Scholar, Department of Computer Science, Tamil University, Thanjavur, Tamil Nadu, India

Keywords

Prime factoring, Cryptography, PI, DNA, Cipher Text, Plain Text, Key, Encryption, Decryption.

Abstract

Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture, combining recent advances in computer vision and machine translation, that is used to generate natural sentences describing an image. We study LSTM RNN architectures for large-scale acoustic modeling in speech recognition. We recently showed that LSTM RNNs are more effective than DNNs and conventional RNNs for acoustic modeling, considering moderately sized models trained on a single machine. Here, we introduce the first distributed training of LSTM RNNs, using asynchronous stochastic gradient descent (ASGD) optimization on a large cluster of machines. The empirical mode decomposition (EMD) approach is applied to decompose the electric load into several intrinsic mode functions (IMFs) and a residual. Separate LSTM neural networks are then used to forecast each IMF and the residual, and the forecast values from the individual LSTM models are finally reconstructed into the overall prediction. Experimental testing demonstrates that the SD-EMD-LSTM approach can effectively forecast the electric load.
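
Since the paper centers on the LSTM cell itself, a minimal forward pass may help make the gating structure concrete. The sketch below is an illustrative NumPy implementation of a single LSTM step under our own assumptions (the function name lstm_step and the stacked input/forget/candidate/output gate layout are ours), not the authors' implementation.

    import numpy as np

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    def lstm_step(x, h_prev, c_prev, W, U, b):
        # One forward step of a standard LSTM cell.
        # W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) biases,
        # with gates stacked in the order input, forget, candidate, output.
        H = h_prev.shape[0]
        z = W @ x + U @ h_prev + b   # all four gate pre-activations at once
        i = sigmoid(z[0:H])          # input gate
        f = sigmoid(z[H:2*H])        # forget gate
        g = np.tanh(z[2*H:3*H])      # candidate cell update
        o = sigmoid(z[3*H:4*H])      # output gate
        c = f * c_prev + i * g       # new cell state
        h = o * np.tanh(c)           # new hidden state
        return h, c

    # Run a length-5 random input sequence through the cell.
    D, H = 3, 4
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4 * H, D))
    U = rng.normal(size=(4 * H, H))
    b = np.zeros(4 * H)
    h, c = np.zeros(H), np.zeros(H)
    for x in rng.normal(size=(5, D)):
        h, c = lstm_step(x, h, c, W, U, b)

The forget gate f controls how much of the previous cell state survives each step, which is what lets the cell carry information across long sequences without the vanishing gradients that plague standard RNNs.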
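
The distributed training described above relies on asynchronous SGD; the lock-free, shared-memory variant of this idea is Hogwild (reference 7 below). The toy sketch that follows imitates the pattern in a single process, with threads writing unsynchronized updates into a shared parameter vector for a least-squares problem. It is a hedged illustration of the update scheme only, not the paper's multi-machine setup, and all names in it are ours.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    # Toy least-squares problem: recover w_true from y = X @ w_true.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 10))
    w_true = rng.normal(size=10)
    y = X @ w_true

    w = np.zeros(10)  # shared parameter vector, updated without any lock

    def sgd_worker(w, indices, lr=0.01, epochs=5):
        # Each worker sweeps its own slice of the data and writes updates
        # directly into the shared vector (Hogwild-style, unsynchronized).
        for _ in range(epochs):
            for i in indices:
                grad = (X[i] @ w - y[i]) * X[i]  # per-example squared-error gradient
                w -= lr * grad                   # in-place update on the shared array

    with ThreadPoolExecutor(max_workers=4) as pool:
        for chunk in np.array_split(np.arange(len(X)), 4):
            pool.submit(sgd_worker, w, chunk)

    print("parameter error:", np.linalg.norm(w - w_true))

In CPython the interpreter lock serializes most of these updates, so the sketch demonstrates the access pattern rather than a real speedup; the point of the lock-free scheme is that such unsynchronized updates still converge in practice.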
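
The SD-EMD-LSTM pipeline forecasts each IMF and the residual independently and then sums the component forecasts to reconstruct the load forecast. Below is a minimal sketch of that reconstruction step, assuming the decomposition has already been produced by an EMD implementation (e.g., the PyEMD package), and with a hypothetical persistence forecaster standing in for each trained per-component LSTM.

    import numpy as np

    def persistence_forecast(series, horizon):
        # Hypothetical stand-in for a trained per-component LSTM forecaster:
        # simply repeats the last observed value for `horizon` steps.
        return np.full(horizon, series[-1])

    def emd_lstm_forecast(components, horizon, forecaster=persistence_forecast):
        # `components` holds the IMFs plus the final residual from EMD; their
        # element-wise sum reconstructs the original load series, so summing
        # the per-component forecasts reconstructs the load forecast.
        return np.sum([forecaster(c, horizon) for c in components], axis=0)

    # Demo with a synthetic two-component "decomposition": fast IMF + slow trend.
    t = np.linspace(0.0, 10.0, 200)
    components = [np.sin(2 * np.pi * t), 0.05 * t]
    print(emd_lstm_forecast(components, horizon=3))

Forecasting each component separately is the design choice that motivates the decomposition: each IMF is smoother and narrower in frequency than the raw load series, which makes it an easier target for an individual LSTM.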

References

  1. Nazar, Nasrin Banu, and Radha Senthilkumar. “An online approach for feature selection for classification in big data.” Turkish Journal of Electrical Engineering & Computer Sciences 25.1 (2017): 163-171.
  2. Abraham, A. “EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation.” Proceedings of the 17th IEEE International Symposium on Intelligent Control. IEEE Press, 2002. 327-332.
  3. Suresh, Harini, et al. “Clinical Intervention Prediction and Understanding using Deep Networks.” arXiv preprint arXiv:1705.08498 (2017).
  4. Pascanu, Razvan, Tomas Mikolov, and Yoshua Bengio. “On the difficulty of training recurrent neural networks.” International Conference on Machine Learning. 2013.
  5. Zhu, Maohua, et al. “Training Long Short-Term Memory With Sparsified Stochastic Gradient Descent.” (2016).
  6. Ruder, Sebastian. “An overview of gradient descent optimization algorithms.” arXiv preprint arXiv:1609.04747 (2016).
  7. Recht, Benjamin, et al. “Hogwild: A lock-free approach to parallelizing stochastic gradient descent.” Advances in neural information processing systems. 2011.
  8. Ding, Y., Zhao, P., Hoi, S. C., and Ong, Y. S. “An Adaptive Gradient Method for Online AUC Maximization.” AAAI (2015): 2568-2574.
  9. Zhao, P., Hoi, S. C., Wang, J., and Li, B. “Online transfer learning.” Artificial Intelligence 216 (2014): 76-102.
  10. Altinbas, H., and Biskin, O. T. “Selecting macroeconomic influencers on stock markets by using feature selection algorithms.” Procedia Economics and Finance 30 (2015): 22-29.

Published

2019-06-30

Section

Research Articles

How to Cite

[1]
S. Kavitha and A. Senthil Kumar, "Long Short-Term Memory Recurrent Neural Network Architectures", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 5, Issue 3, pp. 390-394, May-June 2019.