Multi-Objective Optimization Approach to Generate String Test Data
Keywords:
Random Testing (RT), Benford distribution, String Test Cases

Abstract
String test cases are required by applications to detect defects and security issues; however, the effectiveness of randomly generated string test cases is not satisfactory. In this paper, black-box string test case generation techniques are investigated, and two objective functions are introduced to produce effective test cases. The first objective is the diversity of the test cases, which can be measured through string distance functions. The second objective is steering the string length distribution toward a Benford distribution, which implies that shorter strings have, in general, a higher chance of detecting failures. When the two objectives are combined via a multi-objective optimization algorithm, superior string test sets are produced.
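To make the two objectives concrete, the following Python sketch is a minimal, hypothetical illustration rather than the paper's actual implementation. It assumes Levenshtein edit distance as the string distance function behind the diversity objective, and it measures the Benford objective as the deviation of the leading digits of string lengths from Benford's law; the paper does not prescribe these exact choices.

```python
# Sketch of the two objective functions described in the abstract.
# Assumptions (not specified by the paper): Levenshtein distance as the
# string distance function, and a leading-digit Benford fit on string
# lengths scored by the sum of absolute deviations.

import math
from itertools import combinations


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]


def diversity(test_set):
    """Objective 1: average pairwise string distance (higher is better)."""
    pairs = list(combinations(test_set, 2))
    if not pairs:
        return 0.0
    return sum(levenshtein(a, b) for a, b in pairs) / len(pairs)


# Benford's law: P(d) = log10(1 + 1/d) for leading digit d in 1..9.
BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}


def benford_deviation(test_set):
    """Objective 2: deviation of the leading digits of string lengths
    from Benford's law (lower is better)."""
    if not test_set:
        return float("inf")
    digits = [int(str(max(len(s), 1))[0]) for s in test_set]
    return sum(abs(digits.count(d) / len(digits) - p)
               for d, p in BENFORD.items())


if __name__ == "__main__":
    candidate = ["a", "ab", "hello", "x" * 12, "test input", "qqq"]
    print("diversity:", diversity(candidate))
    print("Benford deviation:", benford_deviation(candidate))
```

In practice, these two fitness values could be handed to a multi-objective search algorithm such as NSGA-II, which would evolve candidate test sets that trade off high string diversity against a close fit to the Benford length distribution.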
License
Copyright (c) IJSRCSEIT

This work is licensed under a Creative Commons Attribution 4.0 International License.