Two criteria are generally used to decide when it is acceptable to stop testing: (1) when reliability has reached a defined threshold; and (2) when the cost of further testing is no longer justified by the reliability gains it would produce.
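These two criteria can be expressed as a simple decision rule. The sketch below is purely illustrative; the threshold and the cost/value figures are invented example parameters, not values from the literature:

```python
def should_stop_testing(reliability: float,
                        reliability_gain: float,
                        cost_of_next_cycle: float,
                        reliability_threshold: float = 0.99,
                        value_per_gain_unit: float = 100_000.0) -> bool:
    """Illustrative stop-testing rule combining the two criteria.

    Criterion 1: stop once estimated reliability reaches the threshold.
    Criterion 2: stop when the projected reliability gain of another
    test cycle is no longer worth that cycle's cost.
    """
    threshold_reached = reliability >= reliability_threshold
    gain_not_worth_cost = reliability_gain * value_per_gain_unit < cost_of_next_cycle
    return threshold_reached or gain_not_worth_cost

# Reliability already at 99.5%: criterion 1 says stop.
print(should_stop_testing(0.995, reliability_gain=0.001, cost_of_next_cycle=50_000.0))
```

In practice the reliability estimate would come from a reliability-growth model rather than being passed in directly; the point here is only that both criteria reduce to explicit, checkable conditions.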
V. Test Automation Overview
The work of Carl J. Nagle states: "When developing our test strategy, we must minimize the impact caused by changes in the applications we are testing, and changes in the tools we use to test them." (n.d.) Nagle additionally relates in the work "Test Automation Frameworks" that in the present environment "...of plummeting cycle times, test automation becomes an increasingly critical and strategic necessity." (n.d.) Even assuming that historical levels of testing were deemed sufficient, and rarely has this actually been the case, the question is presented as to how the pace of new development and deployment can be maintained while simultaneously reducing risk and retaining satisfactory testing results.
In the past, test automation has not been as successful as it potentially might have been, owing to the early death of test automation development efforts. Also limiting the potential of test automation is the fact that "otherwise fully capable testers are seldom given the time required to gain the appropriate software development skills. For the most part, testers have been testers, not programmers. Consequently, the 'simple' commercial solutions have been far too complex to implement and maintain; and they become shelfware." The statement of Nagle that is critically important to comprehend is that "Test automation MUST be approached as a full-blown software development effort in its own right. Without this it is most likely destined to failure in the long-term." (Nagle, n.d.)
Among the lessons learned relating to test automation are the following principles, which guide the test development strategy:
Test automation is a fulltime effort, not a sideline.
The test design and the test framework are totally separate entities.
The test framework should be application-independent.
The test framework must be easy to expand, maintain, and perpetuate.
The test strategy/design vocabulary should be framework independent.
The test strategy/design should remove most testers from the complexities of the test framework. (Nagle, n.d.)
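One common way to realize these principles is a keyword-driven design, in which the test design is plain data expressed in a framework-independent vocabulary, and the framework is a generic engine. The sketch below is a minimal illustration only; the keywords and the sample "application" state are invented for the example:

```python
class KeywordEngine:
    """Application-independent test framework: executes keyword tables."""

    def __init__(self):
        self._keywords = {}

    def register(self, name, func):
        """Bind a design-vocabulary keyword to framework code."""
        self._keywords[name] = func

    def run(self, test_table):
        """Run a test expressed purely as (keyword, *args) rows."""
        for keyword, *args in test_table:
            self._keywords[keyword](*args)

# Application-specific bindings live outside the generic engine.
state = {}

def set_value(key, value):
    state[key] = value

def verify(key, expected):
    assert state.get(key) == expected, f"{key}: expected {expected!r}"

engine = KeywordEngine()
engine.register("set", set_value)
engine.register("verify", verify)

# Test design: a framework-independent vocabulary a non-programmer can read.
login_test = [
    ("set", "user", "alice"),
    ("verify", "user", "alice"),
]
engine.run(login_test)
```

Because the test tables contain no framework code, changes to the application or the tools affect only the keyword bindings, which is precisely the impact-minimization Nagle describes.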
VI. Product Life Cycle Test Automation
The work of Dave Kelly entitled "Software Test Automation and the Product Life Cycle: Implementing Software Test in the Product Life Cycle" states that the product life cycle (PLC) comprises the stages of development through which a product transitions. Kelly affirms other reports concerning the debate surrounding the usefulness of automated testing, stating that this is where the PLC comes in, because the effectiveness of one's use of automation will generally be "dependent upon...programming resources and the length of the PLC." (Kelly, 2007)
VII. Product Life Cycle
The first phase of the product life cycle (PLC) is the 'Design Phase,' the phase for planning and the formulation of ideas for the product. The second phase is the 'Code Complete Phase,' in which the code has likely been written but the bugs in the system have not yet been fixed. One important aspect of the product life cycle is the 'Automation Checklist': an affirmative answer to the following questions indicates that automation of the test deserves serious consideration:
Can the test sequence of actions be defined?
Is it necessary to repeat the sequence of actions many times?
Is it possible to automate the sequence of actions?
Is it possible to "semi-automate" a test?
Is the behavior of the software under test the same with automation as without?
Are you testing non-UI aspects of the program?
Do you need to run the same tests on multiple hardware configurations? (Kelly, 2007)
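Kelly's checklist lends itself to a simple screening helper. The sketch below is a hypothetical illustration (the scoring scheme and cut-off are invented, not part of Kelly's work) that treats each affirmative answer as one point toward automating a test:

```python
# Kelly's (2007) automation checklist questions, in order.
CHECKLIST = (
    "Can the test sequence of actions be defined?",
    "Is it necessary to repeat the sequence of actions many times?",
    "Is it possible to automate the sequence of actions?",
    "Is it possible to 'semi-automate' a test?",
    "Is the behavior of the software under test the same with automation as without?",
    "Are you testing non-UI aspects of the program?",
    "Do you need to run the same tests on multiple hardware configurations?",
)

def automation_score(answers):
    """Count 'yes' answers; more affirmatives -> stronger automation candidate."""
    if len(answers) != len(CHECKLIST):
        raise ValueError("one answer per checklist question is required")
    return sum(answers)

def worth_automating(answers, threshold=5):
    """Hypothetical cut-off: most questions answered 'yes'."""
    return automation_score(answers) >= threshold

print(worth_automating([True] * 7))   # all affirmative: a strong candidate
```

A weighted scheme (e.g., giving "can the sequence be defined?" veto power) would be a natural refinement, since a test whose actions cannot be defined cannot be automated at all.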
The next phase, the 'Alpha Phase,' identifies the precise time when the product is judged stable and complete by the development and quality assurance teams. In the Alpha Phase the product has transitioned from merely 'basically functional' to 'a finely tuned product.' To attain Alpha status, the compatibility, interoperability, and performance tests must all be complete and automated to the greatest extent possible. The next phase is the 'Beta Phase,' a stage characterized by the system being, for the most part, 'bug-free.' Kelly states that a bug not yet fixed at this point, in the 'Beta Phase,' will "almost definitely be a slip in the shipping schedule." (Kelly, 2007)
The 'Zero Defect Build Phase' is a period "of stability where no new serious defects are discovered," with the product stabilized and ready for shipment. The potential of automation during this stage includes regression tests, which serve to verify that system bugs have been corrected. The 'Green Master Phase' is the final inspection prior to shipping; automation during this phase includes "...running general acceptance tests, running regression tests" (Kelly, 2007), saving time. The work entitled "Software Performance and Testing" published by LioNBRIDGE states that quality may be improved throughout the entire application development lifecycle.
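A regression test of the kind mentioned for the zero-defect-build stage pins a fixed bug in place so it cannot silently return. The sketch below is a generic illustration; the function under test, the bug, and the defect number are all invented for the example:

```python
import unittest

def parse_port(value: str) -> int:
    """Function under test. A hypothetical earlier bug crashed on
    surrounding whitespace; the fix strips it before converting."""
    return int(value.strip())

class TestParsePortRegression(unittest.TestCase):
    def test_whitespace_regression(self):
        # Regression test for hypothetical defect #1234: " 8080 " used to raise.
        self.assertEqual(parse_port(" 8080 "), 8080)

    def test_plain_value_still_works(self):
        # Guard against the fix breaking the original behavior.
        self.assertEqual(parse_port("443"), 443)

# Run with: python -m unittest <module>
```

Once automated, such tests cost almost nothing to re-run against every build, which is why regression suites fit the zero-defect-build and Green Master stages so well.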
The work of Ram Chillarege entitled "Software Testing Best Practices" identifies 28 'best practices' that contribute to improvement in the testing of software. Listed among the 'best practices' are the following:
Reviews and Inspection
Formal entry and exit criteria;
Automated Test Execution;
Beta Programs; and
Nightly Builds. (1999)
Functional specifications are considered 'key' components in the development process. Inspections and reviews "are one of the most efficient methods of debugging code." Formal entry and exit criteria express a process built on the idea that "...every process step, be it inspection, functional test, or software design, has a precise entry and precise exit criteria." (Chillarege, 1999)
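Chillarege's formal entry and exit criteria can be modeled as explicit gates around each process step. The skeleton below is an illustrative sketch; the step name and the criteria themselves are invented for the example:

```python
class ProcessStep:
    """A process step guarded by precise entry and exit criteria."""

    def __init__(self, name, entry_criteria, exit_criteria):
        self.name = name
        self.entry_criteria = entry_criteria  # list of (label, predicate)
        self.exit_criteria = exit_criteria

    def run(self, state, action):
        self._check(self.entry_criteria, state, "entry")
        action(state)
        self._check(self.exit_criteria, state, "exit")

    def _check(self, criteria, state, kind):
        failed = [label for label, pred in criteria if not pred(state)]
        if failed:
            raise RuntimeError(f"{self.name}: {kind} criteria not met: {failed}")

# Example: a functional-test step (criteria invented for illustration).
functional_test = ProcessStep(
    "functional test",
    entry_criteria=[("code complete", lambda s: s["code_complete"])],
    exit_criteria=[("no open severity-1 defects", lambda s: s["sev1_open"] == 0)],
)

state = {"code_complete": True, "sev1_open": 3}

def run_tests(s):
    s["sev1_open"] = 0  # defects found, fixed, and verified during the step

functional_test.run(state, run_tests)
```

Making the criteria executable removes ambiguity about when a step may begin or be declared done, which is the point of Chillarege's practice.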
VIII. Application Test Tools
Application test tools are inclusive of the following examples in Figure 2.
Examples of Application Test Tools
Coverage, static, and dynamic software testing for Ada applications
C++ analysis tool
Logiscope DMS: source code analysis tools
PolySpace Suite (PolySpace): detects run-time errors and non-deterministic constructs in applications at compilation time
TCMON (Testwell): test coverage and execution profiling tool for Ada code
Source: Software QA Testing and Test Tool Resources (2007)
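Coverage tools of the kind listed above exist in most language ecosystems. As an illustrative analog (not one of the tools in Figure 2), Python's standard-library trace module provides basic line-coverage counting:

```python
import trace

def classify(n):
    """Toy function whose coverage we want to observe."""
    if n < 0:
        return "negative"
    return "non-negative"

# Count executed lines without printing a live trace.
tracer = trace.Trace(count=True, trace=False)
tracer.runfunc(classify, 5)  # only the non-negative branch runs here

# counts maps (filename, line_number) -> execution count; the line
# returning "negative" never appears, revealing the untested branch.
counts = tracer.results().counts
```

Dedicated tools add reporting, branch coverage, and build integration, but the underlying idea is the same: record which lines execute under the test suite and flag what was never exercised.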
IX. Classic Testing Mistakes
The work of Brian Marick (1997) entitled: "Classic Testing Mistakes" states that the mistakes of testing "cluster usefully into five groups" which are those of:
The Role of Testing: who does the testing team serve, and how does it do that?
Planning the Testing Effort: how should the whole team's work be organized?
Personnel Issues: who should test?
The Tester at Work: designing, writing, and maintaining individual tests (Marick, 1997)
X. Developing a Team of Testers is Key in Software Development and Testing
The team responsible for testing must possess certain qualities and characteristics if the maximum potential of the testing phase of software development is to be realized. Regarding the testing team staff, Marick states that a good tester has the following qualities and characteristics:
A good tester is "methodical and systematic";
A good tester is "tactful and diplomatic";
A good tester is "skeptical, especially about assumptions, and wants to see concrete evidence";
A good tester is able to notice and pursue odd details;
A good tester has "good written and verbal skills (for explaining bugs clearly and concisely)";
A good tester has "a knack for anticipating what others are likely to misunderstand"; and
A good tester has "a willingness to get one's hands dirty, to experiment, to try something to see what happens." (Marick, 1997)
Marick states that one should be particularly careful "to avoid the trap of testers who are not domain experts." (1997) The work of Ben-Yaacov and Gazlay entitled "Real World Software Testing at a Silicon Valley High-Tech Software Company" states that "Silicon Valley high-tech software products teams face a troubling paradox on a daily basis - How to introduce new technology and features faster than ever, while simultaneously improving product quality and responsiveness to customer quality issues." (2001) Additionally stated are the following goals of the Silicon Valley high-tech software team:
Minimize time-to-market by delivering new technology as soon as possible
Maximize customer satisfaction by delivering a specific set of features
Minimize the number of known defects in shipped product releases ("built-in quality")
Maximize level of support to customer quality issues. (Ben-Yaacov and Gazlay, 2001)
Developed from customer input concerning satisfaction criteria are the following factors, which are stated to "...drive the development and quality assurance strategies":
The ultra-rapidly evolving technology market. This…