Comparison of Back Propagation Algorithms and Fusion Methodology Using Dempster-Shafer Rule in Medical Application

Author : B. Sumathi

This paper presents various Back Propagation algorithms for the diagnosis of hypertension. Parameters such as the learning rate and the momentum coefficient are used to improve the rate of convergence and to control the feedback loop of the Back Propagation algorithm. The learning rate and momentum factors are varied, rather than held at fixed values, to make learning more effective during the training process. The primary classifiers used in this paper are the Quasi-Newton (QN), Gradient Descent (GD) and Levenberg-Marquardt (LM) Back Propagation training algorithms, each using a different learning function. The Dempster-Shafer rule is adopted to combine the outputs of these three Back Propagation neural networks into a single result to enhance the target classification. The experimental results show that the fusion method provides significantly higher accuracy for the diagnosis of hypertension.
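The fusion step described above follows Dempster's rule of combination, which merges the belief masses produced by independent classifiers while discounting conflicting evidence. The sketch below illustrates that rule for a two-class (hypertensive vs. normal) decision; the mass values and classifier names are illustrative assumptions, not figures from the paper.

```python
# Minimal sketch of Dempster's rule of combination for fusing the
# outputs of several classifiers. Hypotheses are frozensets so that
# intersections of composite hypotheses can be formed if needed.

def combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset hypotheses to masses) using Dempster's rule."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass assigned to disjoint hypotheses
    k = 1.0 - conflict                       # normalization factor
    if k == 0.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / k for h, m in combined.items()}

H = frozenset({"hypertensive"})
N = frozenset({"normal"})

# Illustrative masses from three hypothetical classifiers (QN, GD, LM):
m_qn = {H: 0.7, N: 0.3}
m_gd = {H: 0.6, N: 0.4}
m_lm = {H: 0.8, N: 0.2}

# Dempster's rule is associative, so the three assignments can be
# combined pairwise in any order.
fused = combine(combine(m_qn, m_gd), m_lm)
```

Because the rule renormalizes away conflicting mass, agreement among the three networks reinforces the shared hypothesis, which is why the fused output can exceed the accuracy of any single classifier.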

Authors and Affiliations

B. Sumathi
Department of Computer Science, CMS College of Science and Commerce, Coimbatore, Tamil Nadu, India

Keywords : Back Propagation, Dempster-Shafer, Accuracy, Learning Rate, Momentum



Publication Details

Published in : Volume 3 | Issue 7 | September-October 2018
Date of Publication : 2018-09-30
License:  This work is licensed under a Creative Commons Attribution 4.0 International License.
Page(s) : 165-171
Manuscript Number : CSEIT183727
Publisher : Technoscience Academy

ISSN : 2456-3307

Cite This Article :

B. Sumathi, "Comparison of Back Propagation Algorithms and Fusion Methodology Using Dempster-Shafer Rule in Medical Application", International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN : 2456-3307, Volume 3, Issue 7, pp.165-171, September-October-2018.
Journal URL :
