Research on Software Test Case Generation Methods Based on Model Comparison
Abstract
As the scope of computer applications keeps expanding, computers are used in more and more critical domains, which places higher demands on software quality and reliability. As the most important technical means of assuring quality and reliability, software testing plays an increasingly important role in software development.
     Advances in software engineering technology continually pose new requirements for research on software testing. Their influence shows in three respects: the software development process model determines the software testing process model; the software architecture determines how testing is divided into levels; and the software model determines the test case generation method.
     Test case generation is the core problem of software testing, so selecting good testing criteria that raise the fault-detecting efficiency of the generated test suite is of great importance. To compare different testing criteria, researchers introduced partition testing and judge criteria by the probability of detecting at least one fault. Partition-testing theory alone, however, cannot explain the advantage of the test case generation methods commonly used in practice, such as equivalence-class partitioning and path coverage. These methods outperform random testing and other partition testing schemes because they express the tester's expectations about where the software under test is likely to fail: the partitions they impose on the input domain make inputs within the same subdomain highly correlated for testing purposes and inputs in different subdomains only weakly correlated. The resulting test cases therefore come closer to ideal testing in both their distribution and their fault-detecting effect, which improves testing efficiency.
     Since most software faults can be traced to discrepancies between the requirements and the implementation produced during development, a model-comparison-based testing method is proposed. The method transforms the software requirements and the software implementation into models described in the same modeling language: the model derived from the requirements is called the specification model, the model derived from the implementation is called the implementation model, and the modeling language used to describe both is called the base model. Comparing the specification model with the implementation model exposes the differences between requirements and implementation, and test cases are generated from these differences.
     Different base models give rise to different model-comparison-based testing methods. When equivalence classes are used as the base model, the specification model must be built manually by the tester, while the implementation model can be derived automatically: symbolic execution is applied to the source code of the program under test to obtain the path constraint and processing function of each execution path, and these are then converted into an equivalence-class model. Comparing the specification model with the implementation model yields a difference model from which test cases can be generated.
     When the EFSM (Extended Finite State Machine) model is used as the base model, the specification model is obtained by converting UML (Unified Modeling Language) statecharts into an EFSM. The implementation model is built by first generating the state transition pairs of every execution path of every member method, and then deriving the class's state set and state transition set; these two sets form the raw statechart. The raw statechart is then optimized to ensure that the final implementation model is deterministic and consistent.
     Experimental results show that both the equivalence-class-based and the EFSM-based model-comparison testing methods are feasible, and that they achieve higher testing efficiency than the corresponding test case generation methods.
The scope of computer applications keeps expanding, and more and more applications run in critical domains, so higher software quality and reliability are required. As the most important technology for assuring quality, software testing is becoming ever more important in software development.
     With the development of related software engineering technologies, new requirements are constantly placed on software testing, which in turn drive its improvement. Software engineering technologies affect software testing in three respects: the development process models decide the process models of software testing; the software architectures determine the testing levels; and the software models, especially the dynamic models, determine the methods of test suite generation.
     Test suite generation is the key issue of software testing, and selecting testing criteria that make the generated test suite more efficient is important. Partition testing is the theory used to compare the efficiency of testing criteria, with the probability of finding at least one fault as the measure. This theory alone, however, cannot explain the advantage of the partition testing methods used pervasively in practice, such as equivalence testing and path-based testing. In fact, their advantage over random testing lies in that they express the tester's anticipation of the errors in the software under test more exactly, and in that they partition the input domain into subdomains whose inputs are closely related with respect to testing. This brings the distribution and fault-finding efficiency of the test suite close to those of ideal testing.
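     As a minimal sketch of this comparison criterion (not taken from the thesis; the failure rates, subdomain proportions and test counts below are invented for illustration), the following Python snippet computes the probability of finding at least one fault for random testing and for partition testing:

# Probability of detecting at least one fault -- the measure used in
# partition-testing analyses.
def p_random(theta, n):
    # n inputs drawn uniformly from the whole domain with failure rate theta
    return 1.0 - (1.0 - theta) ** n

def p_partition(thetas, counts):
    # counts[i] inputs drawn from subdomain i with failure rate thetas[i]
    miss = 1.0
    for theta_i, n_i in zip(thetas, counts):
        miss *= (1.0 - theta_i) ** n_i
    return 1.0 - miss

# Hypothetical numbers: subdomain 1 covers 10% of the inputs and fails with
# rate 0.10, subdomain 2 never fails, so the overall failure rate is 0.01.
print(p_random(0.01, 10))                 # about 0.10
print(p_partition([0.10, 0.0], [5, 5]))   # about 0.41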
     Taking this advantage of partition testing into account, we present a software testing method based on model comparison. The method transforms the specification and the implementation into two models expressed in the same modeling language, which we call the base-model language. The model transformed from the specification is the specification model; the other, transformed from the implementation, is the implementation model. We compute the difference between the two models and then generate a test suite from this difference; the resulting test suite is considerably more efficient.
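     The overall workflow can be outlined as follows (a sketch under assumed representations, not the thesis's implementation; the string-encoded model elements and the validation example are hypothetical):

# Both the specification and the implementation are mapped into the same
# base-model representation, here simply a set of "guard -> effect" labels;
# the two models are compared and test obligations come from the difference.
def compare_models(spec_model, impl_model):
    return (spec_model - impl_model) | (impl_model - spec_model)

def generate_tests(differences):
    return ["exercise behaviour: " + d for d in sorted(differences)]

spec_model = {"x < 0 -> reject", "0 <= x <= 100 -> accept", "x > 100 -> reject"}
impl_model = {"x < 0 -> reject", "x >= 0 -> accept"}      # missing upper bound
for t in generate_tests(compare_models(spec_model, impl_model)):
    print(t)   # the difference points at inputs with x > 100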
     Different base-model languages give rise to different model-comparison-based testing methods.
     When the equivalence model is used as the base model, the specification model is created manually by the tester, while the implementation model can be created automatically: the path constraints and calculation functions of every execution path of each member method are first obtained by symbolic execution and analysis of the source code of the software under test, and these path constraints and calculation functions are then transformed into an equivalence model.
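     A hedged sketch of this step is shown below; the program under test, the constants and the hand-written specification model are invented for illustration, and a real implementation would obtain the path constraints from a symbolic execution engine rather than write them out by hand:

# Program under test (hypothetical): shipping fee for an order of weight w.
def fee(w):
    if w <= 0:
        return -1                  # invalid weight
    if w <= 10:
        return 5
    return 5 + (w - 10) * 2

# Implementation model: one (path constraint, calculation function) pair per
# execution path, as symbolic execution of fee() would report them.
impl_model = {
    ("w <= 0",      "-1"),
    ("0 < w <= 10", "5"),
    ("w > 10",      "5 + (w - 10) * 2"),
}

# Specification model written manually by the tester; the requirements are
# assumed to cap the fee at 45 for orders heavier than 30.
spec_model = {
    ("w <= 0",       "-1"),
    ("0 < w <= 10",  "5"),
    ("10 < w <= 30", "5 + (w - 10) * 2"),
    ("w > 30",       "45"),
}

# The difference model exposes the missing fee cap; one test case is then
# picked from each differing subdomain, e.g. w = 40.
difference = (spec_model - impl_model) | (impl_model - spec_model)
print(sorted(difference))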
     In the testing method that uses the EFSM (Extended Finite State Machine) model as the base model, the specification model is created by transforming UML (Unified Modeling Language) statecharts into an EFSM. To create the implementation model, the path constraints and calculation functions of the methods are first extracted from the source code by symbolic execution and transformed into a set of state transition pairs, from which all possible state sets and state transition sets are generated. These sets constitute the raw EFSM, which is finally optimized to ensure that the resulting implementation model is deterministic and consistent.
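     The EFSM used as the base model here can be sketched with a small data structure (the bounded-stack example, the field names and the conservative consistency check are assumptions for illustration, not the thesis's notation):

from dataclasses import dataclass

@dataclass(frozen=True)
class Transition:
    source: str   # abstract state before the call
    event: str    # member method that fires the transition
    guard: str    # predicate over parameters and state variables
    action: str   # update of the extended state variables
    target: str   # abstract state after the call

# Raw EFSM assembled from the state transition pairs of the execution paths
# of the member methods of a bounded stack class (hypothetical).
raw_efsm = [
    Transition("Empty",   "push", "size == 0",       "size += 1", "Partial"),
    Transition("Partial", "push", "size < MAX - 1",  "size += 1", "Partial"),
    Transition("Partial", "push", "size == MAX - 1", "size += 1", "Full"),
    Transition("Partial", "pop",  "size > 1",        "size -= 1", "Partial"),
    Transition("Partial", "pop",  "size == 1",       "size -= 1", "Empty"),
]

# Optimization pass (sketch): transitions with the same source, event and
# guard must agree on target and action; deciding whether two *different*
# guards overlap would additionally need a constraint solver.
def determinism_clashes(efsm):
    seen, clashes = {}, []
    for t in efsm:
        key = (t.source, t.event, t.guard)
        if key in seen and (seen[key].target, seen[key].action) != (t.target, t.action):
            clashes.append((seen[key], t))
        seen[key] = t
    return clashes

print(determinism_clashes(raw_efsm))   # [] -- this raw EFSM is deterministic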
     The experimental results show that the testing methods based on model comparison are feasible with either the equivalence model or the EFSM model as the base-model language, and that they are more efficient than the corresponding test suite generation methods.
