
Current Trends in Software Testing

The continued growth of Cloud Computing, Software-as-a-Service (SaaS) and virtualization technologies, in conjunction with continual improvement in the automation of Deming's Plan-Do-Check-Act (PDCA) processes, is defining the future of software testing and quality assurance. The most rapidly evolving trends in software testing include the definition of PDCA-based automated testing networks and Testing-as-a-Service platforms predicated on Cloud Computing (Nakagawa, Ferrari, Sasaki, Maldonado, 2011). The legacy software testing and quality assurance processes defined by Six Sigma methodologies are being automated into Web Services-based and SaaS platforms for quicker deployment and greater accuracy of results (Jones, Parast, Adams, 2010). Software-as-a-Service (SaaS) shows potential to reduce software testing cycles and improve quality to a level not seen in previous testing and quality management approaches (Watson, DeYong, 2010). The intent of this analysis is to examine the current trends in software testing and how they impact the overall development of enterprise software.

Background

Relying solely on manual software testing processes that depend heavily on a single quality assurance methodology provides only limited results and breadth of validation (Mattiello-Francisco, Martins, Cavalli, Yano, 2012). One of these quality assurance methodologies, the Deming PDCA Model, continues to be used for streamlining quality testing for enterprise-wide, broadly distributed software applications (Jones, Parast, Adams, 2010). Manual methodologies, however, are proving only partially able to scale to the emerging global application development needs of software companies and of enterprises building their own software internally (Yang, Onita, Zhang, Dhaliwal, 2010). To overcome this limitation, many enterprises are working towards the development of SaaS-based testing and quality assurance platforms, including the fine-tuning and quantification of testing and verification sequences (Yang, Onita, Zhang, Dhaliwal, 2010).

Of the several approaches to integrating software testing and quality assurance into development methodologies, the move from manual to automated testing (Ivanovic, Majstorovic, 2006; Yilmaz, Chatterjee, 1997) in the context of a hosted testing service is showing the potential to increase the accuracy and productivity of application testing for globally-deployed Web applications (Yang, Onita, Zhang, Dhaliwal, 2010). The time and cost constraints on the development of Web-based, global applications are straining the existing manual approaches to software quality management, analysis and release criteria (Watson, DeYong, 2010). The same constraints are also slowing delivery cycles, in many instances reducing enterprise revenue because software does not exit testing fast enough to meet market needs. To counter and overcome these limitations, software testing is now becoming more tightly integrated with the PDCA framework to create faster time-to-market (Nakagawa, Ferrari, Sasaki, Maldonado, 2011). What is fueling the development of automated software quality testing and the use of SaaS platforms is the integration of PDCA models and frameworks across global platforms (Nakagawa, Ferrari, Sasaki, Maldonado, 2011). This is a very significant trend, and it is analyzed in the next section as it sets the pace and direction of Testing-as-a-Service.
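To make the idea of an automated PDCA testing loop concrete, the sketch below models each phase as a function in a continuous testing cycle. It is a minimal illustration under stated assumptions, not any cited platform's implementation; every name in it (TestPlan, the stubbed test runner, the 95% quality target) is hypothetical.

```python
# Minimal sketch of an automated Plan-Do-Check-Act (PDCA) testing loop.
# All names here are hypothetical; they illustrate the cycle's structure,
# not a specific product's API.

from dataclasses import dataclass, field

QUALITY_TARGET = 0.95  # assumed pass-rate threshold for exiting the loop
fixed: set = set()     # stub: cases whose defects have been "fixed"


@dataclass
class TestPlan:
    """Plan: the test cases selected for this iteration."""
    cases: list = field(default_factory=list)


def plan(previous_failures: list) -> TestPlan:
    # Plan: prioritize cases that failed last cycle, then the full suite.
    baseline = [f"case_{i}" for i in range(10)]
    return TestPlan(cases=previous_failures +
                    [c for c in baseline if c not in previous_failures])


def do(plan_: TestPlan) -> dict:
    # Do: run the suite. Stub runner: "case_7" fails until its defect
    # report has been filed; everything else passes.
    return {case: case != "case_7" or case in fixed for case in plan_.cases}


def check(results: dict) -> tuple:
    # Check: compute the pass rate and collect failing cases.
    passed = sum(results.values())
    failures = [c for c, ok in results.items() if not ok]
    return passed / len(results), failures


def act(failures: list) -> None:
    # Act: feed failures back to development; the stub pretends the fix
    # lands before the next iteration.
    for case in failures:
        print(f"defect report filed for {case}")
        fixed.add(case)


def pdca_cycle(max_iterations: int = 5) -> None:
    failures: list = []
    for i in range(max_iterations):
        results = do(plan(failures))
        pass_rate, failures = check(results)
        act(failures)
        if pass_rate >= QUALITY_TARGET:
            print(f"iteration {i}: quality target met ({pass_rate:.0%})")
            return
    print("quality target not met within iteration budget")


pdca_cycle()
```

Each pass through the loop carries the Check phase's failures into the next Plan phase, which is the feedback behavior the automated PDCA networks described above are meant to provide at scale.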

The Integration of PDCA Modeling as a Foundation of Testing-as-a-Service

The transition from manual testing and quality assurance methodologies to automated ones continues to dominate the trends in this area. Relying increasingly on Cloud-based and SaaS platforms is giving development teams greater flexibility in anticipating and responding to market and organizational changes over time (Watson, DeYong, 2010). A PDCA cycle automated as part of a software testing lifecycle has also been able to align departments and divisions of companies and create greater levels of collaboration across them. This is one of the more powerful catalysts driving the trend of SaaS-based software testing and quality assurance. The need for coordination across a broad range of product and service divisions, continually strengthened by new insight and intelligence, is proving highly effective in driving up the quality level of software (Nakagawa, Ferrari, Sasaki, Maldonado, 2011). Collaboration across departments is critical during the planning phase of any software testing project to ensure that engineering, development, quality assurance, product management, planning and services are all kept informed as to the direction and progress of software quality management. Software testing becomes the fuel or catalyst that keeps a software project continually moving forward from development to launch and eventual use. Software testing and quality assurance in software development is not often seen as the unifying factor in ensuring cross-functional team performance, however (Yang, Onita, Zhang, Dhaliwal, 2010). The continued development of software testing on the SaaS platform...

...

Metrics and KPIs are used for managing the entire breadth of the development cycle, with specific focus on measuring the effectiveness and speed of collaboration across functional departments. The question of how to create the greatest level of shared accountability in software testing and quality assurance is also an evolving best practice, enabled by Cloud-based architectures (Nakagawa, Ferrari, Sasaki, Maldonado, 2011). Traditional software quality assurance and testing methodologies have metrics and KPIs that depend on the code alone, without measuring the contribution of each code base from specific departments and development groups. What is evolving is a shared accountability model of performance, in which an aggregate quality score is defined and the quality of specific code assessed, with particular focus on software stability, reliability and performance over a wide variety of use scenarios (Watson, DeYong, 2010). The role of analytics as a foundation for measuring software productivity has long been a best practice, and the integration of real-time data in the form of SaaS-based platform analytics is growing rapidly because it is inexpensive to capture and quick to use for analyzing and correcting the direction of software projects over time. The real-time role of analytics in software testing is evaluated next in this paper.
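As a rough illustration of such a shared accountability model, the sketch below rolls per-department scores for stability, reliability and performance into one weighted aggregate quality score. The departments, weights and scores are invented for the example; no cited source prescribes this particular scheme.

```python
# Hypothetical sketch of a shared-accountability quality score: each
# department's code base is scored on stability, reliability, and
# performance, then weighted by its share of the code under test.

DIMENSIONS = ("stability", "reliability", "performance")

# Assumed per-department scores on a 0..1 scale.
department_scores = {
    "platform": {"stability": 0.97, "reliability": 0.94, "performance": 0.91},
    "web_ui":   {"stability": 0.90, "reliability": 0.88, "performance": 0.95},
    "services": {"stability": 0.93, "reliability": 0.96, "performance": 0.89},
}
# Assumed share of the total code base contributed by each department.
code_share = {"platform": 0.5, "web_ui": 0.2, "services": 0.3}


def department_score(scores: dict) -> float:
    """Average a department's scores across the quality dimensions."""
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)


def aggregate_quality(dept_scores: dict, weights: dict) -> float:
    """Weight each department's score by its share of the code base."""
    return sum(department_score(s) * weights[name]
               for name, s in dept_scores.items())


print(f"aggregate quality score: "
      f"{aggregate_quality(department_scores, code_share):.3f}")
```

Because every department's contribution is visible in the aggregate, no single group can treat quality as another team's problem, which is the accountability effect the paragraph above describes.
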
The Real-Time Role of Analytics in Software Testing

In conjunction with the trend of Cloud Computing platforms being used as the foundation for testing services, creating the emerging area of Testing-as-a-Service applications and platforms, there is the additional benefit that analytics are more easily tracked and managed on digital platforms. The use of Cloud Computing and SaaS-based platforms is further accelerating the development of real-time analytics, dashboards, KPIs and metrics, making shared software quality performance measurable and quantifiable (Jones, Parast, Adams, 2010). SaaS-based architectures also allow software testing to become more measurable over longer periods of time using a common data set. Using longitudinal analysis and Six Sigma approaches to measuring variation in data, development teams are able to gain greater insights into how their software development methodologies are impacting overall software quality (Mattiello-Francisco, Martins, Cavalli, Yano, 2012). The reliance on these application development approaches and their integration with Cloud-based Testing-as-a-Service initiatives are also making the shared accountability and responsibility metrics so critical to overall program success more visible across an organization. Not only is the real-time development of these metrics changing software quality; the change management initiatives made possible by having so much data available are also forcing significant change in company cultures.
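A minimal sketch of this kind of longitudinal variation analysis appears below: a conventional 3-sigma proportion (p) chart flags weeks whose defect rate drifts outside control limits. The weekly defect counts are fabricated sample data, not results from any cited study.

```python
# Sketch of a longitudinal Six Sigma-style check on software defect data:
# a p-chart flags weeks whose defect rate varies beyond control limits.
# The counts below are fabricated; week 5 is seeded as an outlier.

import math

tests_per_week = 400
defects_per_week = [12, 9, 14, 11, 30, 10, 13, 8]

rates = [d / tests_per_week for d in defects_per_week]
p_bar = sum(rates) / len(rates)  # long-run average defect rate

# Conventional 3-sigma control limits for a proportion (p) chart.
sigma = math.sqrt(p_bar * (1 - p_bar) / tests_per_week)
upper, lower = p_bar + 3 * sigma, max(0.0, p_bar - 3 * sigma)

for week, rate in enumerate(rates, start=1):
    flag = "ok" if lower <= rate <= upper else "OUT OF CONTROL"
    print(f"week {week}: defect rate {rate:.3f} [{flag}]")
```

Run over a common data set accumulated on a SaaS platform, this kind of chart separates ordinary week-to-week variation from genuine shifts in process quality, which is what makes the longitudinal view more informative than any single test run.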

Of the many methodologies used for capturing, analyzing and continually improving the quality of software, one of the more promising is a series of tests and suite of metrics called TESTQUAL (Yang, Onita, Zhang, Dhaliwal, 2010). The goal of TESTQUAL is to measure the functionality, reliability, usability, efficiency, maintainability and portability of software quality assurance and testing programs (Yang, Onita, Zhang, Dhaliwal, 2010). TESTQUAL has shown initial positive results in streamlining PDCA-based methodologies that have been automated to take into account globally-based software development projects and programs. The approach has also shown significant potential for streamlining and simplifying dozens of metrics and KPIs into a single manageable set. The use of PDCA cycle-based measurements in the many variations of Agile development approaches has also shown significant performance value over time. Testing-as-a-Service platforms today all include some form of analytics and KPI measurement, each promising to deliver insights that shorten development timeframes. The use of SERVQUAL, however, has been instrumental in initial tests to improve the cycle times of large-scale enterprise software using Agile development methodologies to further guide development (Yang, Onita, Zhang, Dhaliwal, 2010). The future of analytics in software testing and quality assurance will certainly continue to accelerate, with the benefits of Agile-based development methodologies being quantified and failure analysis used for measuring points of failure throughout the development process. Only by quantifying each phase of the software development process can overall quality be improved and the speed of development itself increased.
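TESTQUAL itself is a survey-based instrument; the sketch below illustrates only how its six dimensions might be collapsed into a single manageable index, in the spirit of the consolidation trend described above. The weights and scores are assumptions invented for the example, not values from Yang, Onita, Zhang and Dhaliwal's work.

```python
# Illustrative only: collapsing TESTQUAL's six quality dimensions into one
# index. TESTQUAL is a survey instrument (Yang et al., 2010); the weights
# and scores here are invented to show the aggregation, not the method.

TESTQUAL_DIMENSIONS = ("functionality", "reliability", "usability",
                       "efficiency", "maintainability", "portability")

# Assumed equal weights; a real program would calibrate these empirically.
weights = {d: 1 / len(TESTQUAL_DIMENSIONS) for d in TESTQUAL_DIMENSIONS}

scores = {  # hypothetical 1..7 Likert-style averages from a survey
    "functionality": 6.1, "reliability": 5.4, "usability": 5.9,
    "efficiency": 5.2, "maintainability": 4.8, "portability": 5.6,
}

index = sum(weights[d] * scores[d] for d in TESTQUAL_DIMENSIONS)
print(f"single TESTQUAL-style index: {index:.2f} / 7")
```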

Conclusion

Software testing has rapidly progressed from manual, script-based approaches to ascertaining quality, to today being focused…


References

Jones, E., Parast, M., & Adams, S. (2010). A framework for effective Six Sigma implementation. Total Quality Management & Business Excellence, 21(4), 415.

Mattiello-Francisco, F., Martins, E., Cavalli, A., & Yano, E. (2012). InRob: An approach for testing interoperability and robustness of real-time embedded software. The Journal of Systems and Software, 85(1), 3.

Nakagawa, E., Ferrari, F., Sasaki, M., & Maldonado, J. (2011). An aspect-oriented reference architecture for Software Engineering Environments. The Journal of Systems and Software, 84(10), 1670.

Watson, G. H., & DeYong, C. F. (2010). Design for Six Sigma: caveat emptor. International Journal of Lean Six Sigma, 1(1), 66-84.

