This, in turn, presents development teams with additional problems during the testing process.
Lee J. White, in his study Domain Testing and Several Outstanding Research Problems in Program Testing, indicates that several significant problems in the area of program testing remain to be addressed.
White indicates that these problems include the following:
The determination of a scientifically sound basis for the selection of test data.
The development of program specifications that can be used both to generate test data and to ascertain the correctness of program output.
The development of relationships between program testing and formal verification.
Jorgensen and Erickson (1999), on the other hand, focused on software integration as a source of problems in software testing. Integration presents problems because new test relationships must be established between the integrated modules. Unlike per-module testing, which limits a test to the specific functionalities of a single module, testing integrated modules requires establishing and cataloguing the relationships among the different modules.
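The distinction can be illustrated with a minimal sketch. The modules, names, and behavior below are hypothetical and not drawn from Jorgensen and Erickson's study; the point is only that unit tests exercise one module in isolation, while an integration test must exercise the relationship between modules.

```python
# Hypothetical modules: a parser and a calculator that consumes its output.
def parse_expression(text):
    """Module 1: split a string such as '2+3' into integer operands."""
    left, right = text.split("+")
    return int(left), int(right)

def add(a, b):
    """Module 2: add two integers."""
    return a + b

# Per-module (unit) tests: each module is verified in isolation.
assert parse_expression("2+3") == (2, 3)
assert add(2, 3) == 5

# Integration test: verifies the *relationship* between the modules,
# i.e. that the parser's output is a valid input for the calculator.
operands = parse_expression("2+3")
assert add(*operands) == 5
```

The unit tests alone cannot catch a mismatch between the two interfaces (for example, the parser returning strings rather than integers); only the integration test establishes that relationship.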
Moreover, Jorgensen and Erickson (1999) indicated that although object-oriented programming has been the prevailing coding style for some years, software testing is often still carried out in a traditional manner. Their study examined traditional styles of testing and suggested several enhancements and modifications. According to Jorgensen and Erickson (1999), the object-oriented style of programming aims to improve the structure of code and data resources, introducing a condensed yet efficient code structure. However, if a development team applies the traditional style of testing to such code, the software product may fail to achieve its objectives, because customers judge software by its behavior: how it functions, and how it automates their work. As Jorgensen and Erickson (1999) stated, "Software testing is fundamentally concerned with behavior (what it does), and not structure (what it is). Customers understand software in terms of its behavior, not its structure."
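This behavior-versus-structure point can be made concrete with a small, hypothetical sketch: a behavioral (black-box) test checks only what the code does at its interface, and therefore remains valid across structural rewrites that would invalidate a test coupled to internal structure.

```python
# Two structurally different implementations with identical behavior.
def factorial_iterative(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    return 1 if n <= 1 else n * factorial_recursive(n - 1)

# A behavioral test is written against observable input/output pairs,
# so the very same test validates either implementation unchanged.
for implementation in (factorial_iterative, factorial_recursive):
    assert implementation(0) == 1
    assert implementation(5) == 120
```

A test that instead inspected the loop variable or the recursion depth would break the moment the structure changed, even though the software's behavior, which is what the customer sees, is identical.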
Software testing involves the development of test cases that must match the structure of a software product to ensure that every functionality and component is tested. Hence, it is important that the architecture and structure of the software be clear and well organized. However, during software development there are times when developers find it hard to solve certain coding problems. To resolve such difficulties quickly, developers often code solutions in whatever style allows the software to run without errors, even if the solution leaves the structure of the software formless and degraded. Such a spoiled software architecture makes it hard to design test cases that ensure the reliability and performance of the software product. With this view of software testing, Richard McDonell (2003) indicates that "Designing efficient test systems requires a modular software architecture and development tools optimized for test. To develop test systems faster and more cost effectively, it is critical that you evaluate your test software architecture to maximize code reuse. Understanding the importance of modular test-software architectures and how to develop your tests as modules rather than building stand-alone applications will significantly improve test-software reuse."
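McDonell's recommendation of developing tests as reusable modules rather than stand-alone applications can be sketched as follows. The device readings and limits below are invented for illustration; the idea is that one parameterized test module is reused across different test sequences instead of duplicating a monolithic test script for each product.

```python
# A reusable test module: one parameterized check shared across products,
# rather than a separate stand-alone test application per product.
def check_within_limits(measure, lower, upper):
    """Reusable test step: take a reading and verify it is in range."""
    value = measure()
    assert lower <= value <= upper, f"{value} outside [{lower}, {upper}]"
    return value

# Hypothetical instrument readings (stand-ins for real device drivers).
def read_voltage():
    return 4.98

def read_temperature():
    return 36.5

# The same module is reused to build two different test sequences.
check_within_limits(read_voltage, 4.75, 5.25)
check_within_limits(read_temperature, 20.0, 45.0)
```

Adding a new product then means composing existing test modules with new parameters, which is the code reuse McDonell argues reduces test-development time and cost.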
In relation to the importance of a fine-tuned software architecture, one element of the software development lifecycle on which software testing depends is the technical documentation of the software product. However, as with other problems in software testing, this component is usually taken for granted, which hinders the development of reliable test cases. Before test cases are developed, test developers need to refer to technical documentation, because it provides information on the structure, architecture, definitions of functionalities, and program flow of the software. Consider how test engineers could create test and validation procedures without an accurate picture of the "ins" and "outs" of the software they are testing. Hailpern and Santhanam (2002) describe this problem in their study Software Debugging, Testing, and Verification:
Consequently, in most software organizations, neither the requirements nor the resulting specifications are documented in any formal manner. Even if written once, the documents are not kept up-to-date as the software evolves (the required manual effort is too burdensome in an already busy schedule). Specifications captured in natural languages are not easily amenable to machine processing.
Finally, the most common obstacle in software testing is the long process, and the long period of time, that testing may demand. Software testing is not a one-time sequence of developing test cases, applying them to the software, and finding solutions should problems exist. Rather, it involves repeating these processes, and other processes within them. For instance, after a test case finds an error, the software is returned to the developers to resolve the problem; after modification, it is delivered again for testing. This loop usually continues until the defect is resolved. Consider how many such repetitions occur over the entire testing phase, given that testing involves many different test cases and that debugging may take considerable time. As a consequence of this obstacle, the overall cost of developing a software product rises, due to the long man-hours involved and the delayed deployment of the product in the market. In support of this view, Rankin (2002) indicates that "Software testing is an integral, costly, and time-consuming activity in the software development life cycle."
Butcher, M., Munro, H., Kratschmer, T. (2002). Improving Software Testing Via ODC: Three Case Studies. IBM Systems Journal, Vol. 41, Issue 1, pp. 31-34.
Fujiwara, T., Yamada, S. (2002). C0 Coverage-Measure and Testing-Domain Metrics Based on a Software Reliability Growth Model. International Journal of Reliability, Quality and Safety Engineering, Vol. 9, No. 4, pp. 329-340.
Hailpern, B., Santhanam, P. (2002). Software Debugging, Testing, and Verification. IBM Systems Journal, Vol. 41, Issue 1, pp. 4-12.
Jorgensen, P., Erickson, C. (1999). Object-oriented Integration Testing. Communications of the ACM, Vol. 38, pp. 30-38.
Lang, M. (2004). Software Testing: Beta Wars. Design News, Vol. 59, Issue 7, pp. 44-46.
McDonell, R. (2003). Maximizing Code Reuse Speeds Test Development. Design News, Vol. 58, Issue 17, p. 80.
Poston, R. (1999). Automated Testing from Object Models. Communications of the ACM, Vol. 38, pp. 48-58.
Rankin, C. (2002). The Software Testing Automation Framework. IBM Systems Journal, Vol. 41, Issue 1, pp. 126-139.
(2002). Accessing Industry's Needs to Improve Software Testing. Advanced Manufacturing Technology, Vol. 23, Issue 8, pp. 5-6.