Introduction
This testing plan will support implementation of the MPC system platform, which will enable:
- Removal of the legacy office systems slated for upgrade
- Processing of online customer orders
- Capture and retention of customer contacts without location constraints
- Capture of transactions for other processing systems
- A new web-based system
- Future rewards for loyal customers, who will be able to buy discounted spare parts
The MPC system will result in significant changes to the entire system, and customers will be able to place their orders from anywhere. The functionality will be delivered on a phased basis.
Phase 1 will incorporate the following facilities:
- Replacement of the legacy System A
- New Reconciliation System
- Outsourcing system for departments in different countries.
- New/Revised Audit Trail & Query Facilities
Purpose of this Test Plan:
Preparation for this test consists of three major stages:
- The Test Approach sets the scope of system testing, the overall strategy to be adopted, the activities to be completed, the general resources required, and the methods and processes to be used to test the release.
- Test Planning details the activities, dependencies and effort required to conduct the System Test.
- Test Conditions/Cases documents the tests to be applied, the data to be processed, the automated testing coverage and the expected results.
Description and objectives
At a high level, this System Test intends to prove that:
- The functionality delivered by the development team is as specified by the business in the Business Design Specification Document and the Requirements Documentation.
- The software is of high quality: it will replace/support the intended business functions and achieve the standards required by the company for the development of new systems.
- The software delivered interfaces correctly with existing systems, including Windows 98.
Criteria for Pass/Fail:
A test run will be considered passed if the system demonstrates that it has met at least 85% of the functional and non-functional requirements stated in the user and system specification documents. Below that 85% level the run will be considered failed, i.e. the system is not performing correctly. This threshold excludes requirements classified as critical under the prioritization and associated risk assessment of these items: all critical requirements must meet a 90% level, otherwise the system will be deemed unfit for its purpose.
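The dual-threshold rule above can be sketched as a small evaluation function. This is an illustrative sketch only; the function name, the result representation, and the example figures are assumptions, not part of the plan.

```python
# Hypothetical sketch of the pass/fail rule: non-critical requirements
# must reach an 85% pass level, critical requirements a 90% level.
def evaluate_release(results):
    """results: list of (is_critical, passed) tuples for executed tests."""
    def pass_rate(subset):
        return sum(1 for _, p in subset if p) / len(subset) if subset else 1.0

    critical = [r for r in results if r[0]]
    normal = [r for r in results if not r[0]]
    return pass_rate(critical) >= 0.90 and pass_rate(normal) >= 0.85

# Example: 9/10 critical tests pass (90%), 17/20 non-critical pass (85%)
results = ([(True, True)] * 9 + [(True, False)]
           + [(False, True)] * 17 + [(False, False)] * 3)
print(evaluate_release(results))  # True: both thresholds are met exactly
```

A single failed critical test in a small critical suite can sink the release even when the overall pass rate looks healthy, which is the intended effect of scoring the two groups separately.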
Approach used- Bottom-up approach:
The team proposes to use a bottom-up approach, in which the common infrastructural system framework is developed and components are added to it as required (Sommerville, 2007, p.541), or in which sub-units are first tested and later integrated with their main/parent units. The various components of the system will be created based on functionality (e.g. the order component and the modify-order component) and will be individually tested before being combined by function; each combination will then be tested until the entire system has been built and satisfies all test criteria. It is believed that this methodology will lead to easier creation of test cases and observation of the output, as well as allowing errors to be discovered faster, especially in critical components (Suresh, 2010). All of the modules/functions mentioned earlier in the plan will be created and tested according to this procedure.
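The bottom-up order can be illustrated in miniature: leaf components are unit-tested first, then the combination is tested once they are integrated. The component names (`validate_order`, `price_order`, `process_order`) are hypothetical stand-ins, not modules from the plan.

```python
# Leaf components, each unit-tested before integration.
def validate_order(order):
    return bool(order.get("items")) and order.get("customer") is not None

def price_order(order):
    return sum(qty * price for _, qty, price in order["items"])

# 1) Unit tests on the leaf components.
assert validate_order({"customer": "c1", "items": [("bolt", 2, 1.5)]})
assert price_order({"items": [("bolt", 2, 1.5)]}) == 3.0

# 2) Integration test on the combined order-processing function.
def process_order(order):
    if not validate_order(order):
        raise ValueError("invalid order")
    return price_order(order)

assert process_order({"customer": "c1", "items": [("bolt", 2, 1.5)]}) == 3.0
print("bottom-up tests passed")
```

Because each leaf is proven before combination, a failure in the integration step points at the interface between units rather than at the units themselves, which is the error-localization benefit the paragraph claims.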
Tasks to be performed:
The following tasks will need to be performed to complete the testing phase of the development:
- Design test cases
- Create criteria list for software inspection
- Perform software inspection
- Prepare test data (for black-box testing)
- Build and design test procedures (including error management, status reporting and data tables for automated tools) (Bazman, 1997)
- Create test environment (secure testing area; source/set up the required hardware and software, assign personnel resources etc.)
- Run Program with test data (Component based testing & Integration testing)
- Compare results to test cases for evaluation
- Document results
- Make necessary modifications
- Re-run tests after modifications
- Sign off tests (all tests must be signed off as acceptable by the designated test controller from the team and an MPC Information Technology representative before a test can be considered fully completed)
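The run/compare/document/modify/re-run cycle in the task list above can be sketched as a small harness. Everything here is illustrative: the test table, the mutable `state`, and the `fix` step stand in for real components and real defect repair.

```python
# Hypothetical system state and component under test.
state = {"discount": 0.0}

def apply_discount():
    return round(100 * (1 - state["discount"]), 2)

# Test table: name -> (callable, expected result from the test case).
tests = {"loyalty_discount": (apply_discount, 90.0)}

def fix(name):
    state["discount"] = 0.10  # developer corrects the discount rate

def run_cycle(tests, fix):
    """Run tests, compare to expected results, re-run failures after fixes."""
    results = {}
    for name, (case, expected) in tests.items():
        results[name] = (case() == expected)      # compare result to test case
    for name in [n for n, ok in results.items() if not ok]:
        fix(name)                                 # make necessary modifications
        case, expected = tests[name]
        results[name] = (case() == expected)      # re-run test after the fix
    return all(results.values())                  # ready for sign-off?

print(run_cycle(tests, fix))  # True once the failed test passes on re-run
```

In the real process the "fix" step is of course manual development work and the sign-off gate involves the test controller and an MPC IT representative; the sketch only shows the control flow of the cycle.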
The tests to be performed will include:
Functional testing – these tests will verify that the system conforms to the requirements stated in the functional specification document.
Component testing – testing of individual components/units/modules/functions to ensure that they perform as expected and that they meet specification criteria (user, quality).
Integration testing – testing the interfaces between units to ensure a smooth data flow, i.e. that the units interact satisfactorily. Once all units have been connected this test will show that the system as a whole functions correctly.
Regression testing – after a modification has been made to previously tested components or sub-systems, previous tests are re-run on those components or sub-units to verify that the modification has not broken existing functionality. Further, tests will be repeated with each new component added to a sub-system to ensure that the addition does not negatively affect the sub-system. If a new addition does affect the sub-system, error detection will begin with the newly added component.
Performance testing – these tests will be done to prove that the system meets the response time outlined in the non-functional requirements section of the design document.
User Acceptance testing – high-level tests to ensure that the user is satisfied with the operation of the system before release. The user will be given the opportunity to interact with the system and explore its features. These tests will begin at the prototyping stage of the process.
Load testing – this will involve tests to prove the stability of the system, in terms of functionality and speed, when multiple users access it for the various services it offers.
The test conditions and expected results will be developed by the test team. All test plans and conditions will be based on the system specification document. Errors will be ranked as high or low priority based on a prioritization and risk assessment schedule developed by the testing team, the quality team and the client with respect to system functionality. High-priority errors must be rectified by the team before tests are signed off. Low-priority errors may or may not be fixed, depending on the level of risk assessed to be associated with the error with respect to system quality; the non-rectification of such errors must be justified before sign-off occurs (Bazman, 1997).
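The sign-off gate described above reduces to a simple rule: high-priority errors block sign-off until fixed, while unfixed low-priority errors need a recorded justification. The field names in this sketch are assumptions, not a defined error-log format.

```python
# Hypothetical error records: priority, fixed flag, optional justification.
def can_sign_off(errors):
    for err in errors:
        if err["priority"] == "high" and not err["fixed"]:
            return False                      # high priority must be fixed
        if (err["priority"] == "low" and not err["fixed"]
                and not err.get("justification")):
            return False                      # unfixed low needs a reason
    return True

errors = [
    {"priority": "high", "fixed": True},
    {"priority": "low", "fixed": False,
     "justification": "cosmetic; assessed as low risk to system quality"},
]
print(can_sign_off(errors))  # True
```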
To further expand on the testing techniques, both static and dynamic testing methods, i.e. software technical reviews and software testing, will be applied. The software quality team will perform software inspections during the development process, beginning at the user requirements stage, using checklists with defined criteria for quality standards, user and system requirements and user satisfaction. It is important to note that software inspection includes document inspection as well. Static analysis will also be conducted by designated testers as part of the verification process for developed code. Tools such as Execution Flow Summarizers, Program Verifiers and Test Data Generators (University of Liverpool/Laureate Online Education, 2010) will be employed to assist the process.
Schedule:
The following screenshots show the schedule for completing the tasks in the testing stage of the software development. Some tasks, namely software inspection and dynamic program testing, are recurring, as they will be repeated as necessary to achieve the desired standard of the product. The process will occur in phases, with several rounds of acceptance testing performed as the system is pieced together. Regression testing will also follow each addition, as previously explained.
Metrics and Measurement
Response Time – MPC will measure the time the system takes to process a request after receiving one. Using a simple stopwatch technique, the clock is started when the process begins and stopped when it returns. The duration of a single call may be quite small, so the preferred practice is to run the process in a sequential loop, around 1,000 times, to obtain a measurement that can be tracked and compared reliably.
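The stopwatch-plus-loop technique above can be sketched with `time.perf_counter`. The `process_request` function is a hypothetical stand-in for a real system call.

```python
import time

def process_request():
    return sum(i * i for i in range(100))   # stand-in for real work

def mean_response_time(fn, runs=1000):
    """Stopwatch technique: loop many times, report the mean per call."""
    start = time.perf_counter()             # start the clock
    for _ in range(runs):
        fn()
    elapsed = time.perf_counter() - start   # stop after the loop returns
    return elapsed / runs                   # mean per-call duration

print(f"{mean_response_time(process_request):.2e} s per call")
```

Averaging over ~1,000 iterations smooths out timer granularity and scheduler noise, which is exactly why the single-call measurement is avoided.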
Latency – MPC will measure the remote response time for all its web services and web pages using network simulation tools. This will help avoid data-centre management issues arising from latency caused by customers' distance from the hosting data centre.
Throughput – MPC will measure the transactions per second the application can handle (the motivation for, and result of, load testing). A typical enterprise application has many users performing many different transactions, so the application must be shown to meet the required enterprise capacity before it reaches production; load testing is the means of doing so. The strategy is to pick a mix of transactions (frequent, critical and intensive) and see how many complete successfully within the acceptable time frame governed by the SLAs. The testing will be made as close as possible to real-world conditions, using live data.
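The transactions-per-second measurement can be sketched as below: run a mix of transactions for a fixed window and divide the completed count by the window length. The transaction stubs and window size are assumptions for illustration.

```python
import time

def measure_throughput(transactions, window_s=0.2):
    """Cycle through the transaction mix until the window closes;
    return completed transactions per second."""
    count = 0
    start = time.perf_counter()
    while time.perf_counter() - start < window_s:
        transactions[count % len(transactions)]()   # next tx in the mix
        count += 1
    return count / window_s

mix = [lambda: sum(range(50)),      # frequent, lightweight transaction
       lambda: sum(range(500))]     # intensive transaction
print(f"{measure_throughput(mix):.0f} tx/s")
```

A production measurement would additionally check each transaction against its SLA deadline and count only those that finish in time; the sketch reports raw completion rate.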
Scalability – MPC will measure how the system responds when additional hardware is added, which is important given the growth projections of the business. This can be done using a load-balancing tool with software and hardware simulations, to ensure that the system can take on new loads without issue.
Stress testing – MPC will run the related load tests for more than 24 hours to conduct a detailed stress test. This will help us understand how easily the system can recover from an overloaded (stressed) condition, and thus test robustness as a measurable attribute.
REFERENCES
Bazman (1997) Test plan [Online]. Available from: http://bazman.tripod.com/testplan/chapter2.html (Accessed: 25 May 2010).
Bazman (1997) Test plan [Online]. Available from: http://bazman.tripod.com/planframe.html (Accessed: 25 May 2010).
Sommerville, I. (2007) Software Engineering. 8th ed. London: Addison-Wesley.
Suresh (2010) Software Testing Integration and Bottom-Up Approach [Online]. Available from: http://ezinearticles.com/?Software-Testing-Integration-and-Bottom-Up-Approach&id=1226498 (Accessed: 25 May 2010).
University of Liverpool/Laureate Online Education (2010) Software Engineering Module, Seminar 8 [Online]. Available from: University of Liverpool/Laureate Online Education VLE (Accessed: 21 May 2010).