Mapping Bug Reports to Relevant Files: A Ranking Model, a Fine-Grained Benchmark, and Feature Evaluation

Authors

  • Yenumala Sankara Rao  Department of MCA , St. Mary's Group of Institutions, Guntur, Andhra Pradesh, India
  • Tripuraneni Balakrishna  Department of MCA , St. Mary's Group of Institutions, Guntur, Andhra Pradesh, India

Keywords:

Bug Reports, Software Maintenance, Learning to Rank.

Abstract

When a new bug report is received, developers typically need to reproduce the bug and perform code reviews to find its cause, a process that can be tedious and time consuming. A tool that ranks all of the source files by how likely they are to contain the cause of the bug would enable developers to narrow down their search and improve productivity. This paper introduces an adaptive ranking approach that leverages project knowledge through functional decomposition of source code, API descriptions of library components, the bug-fixing history, the code change history, and the file dependency graph. Given a bug report, the ranking score of each source file is computed as a weighted combination of an array of features, where the weights are trained automatically on previously solved bug reports using a learning-to-rank technique. We evaluate the ranking system on six large-scale open source Java projects, using the before-fix version of the project for every bug report. The experimental results show that the learning-to-rank approach outperforms three recent state-of-the-art methods. In particular, our method makes correct recommendations within the top ten ranked source files for over 70% of the bug reports in the Eclipse Platform and Tomcat projects.
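The scoring scheme described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: the feature names, file paths, and weight values are all invented. It only shows the core idea that each source file receives a score equal to a weighted sum of its features, with the weights assumed to have been learned from previously fixed bug reports.

```python
def rank_files(feature_vectors, weights):
    """Rank candidate source files by weighted feature score, descending.

    feature_vectors: dict mapping file path -> list of feature values
                     relating a bug report to that file.
    weights:         per-feature weights (trained via learning-to-rank
                     in the paper; fixed constants here for illustration).
    """
    scored = {
        path: sum(w * f for w, f in zip(weights, feats))
        for path, feats in feature_vectors.items()
    }
    return sorted(scored, key=scored.get, reverse=True)


# Hypothetical features per file, in order: [lexical similarity to the
# bug report, API-description similarity, bug-fixing recency, change
# frequency, dependency-graph proximity] -- all values are made up.
features = {
    "ui/Editor.java":      [0.82, 0.40, 0.90, 0.55, 0.30],
    "core/Parser.java":    [0.10, 0.05, 0.20, 0.35, 0.10],
    "net/HttpClient.java": [0.45, 0.70, 0.10, 0.25, 0.60],
}
weights = [1.0, 0.6, 0.8, 0.4, 0.5]

ranking = rank_files(features, weights)
print(ranking)  # highest-scoring file first
```

A developer would then inspect the top-ranked files first; the paper's evaluation criterion (a correct recommendation within the top ten ranked files) corresponds to the fixed file appearing among the first ten entries of such a ranking.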


Published

2017-08-31

Section

Research Articles

How to Cite

[1]
Yenumala Sankara Rao, Tripuraneni Balakrishna, "Mapping Bug Reports to Relevant Files: A Ranking Model, a Fine-Grained Benchmark, and Feature Evaluation," International Journal of Scientific Research in Computer Science, Engineering and Information Technology (IJSRCSEIT), ISSN: 2456-3307, Volume 2, Issue 4, pp. 447-450, July-August 2017.