Some software development models, such as Agile, integrate testing into the development phase itself, doing away with the need for a separate process. The more common waterfall model, a sequential design process in which progress flows steadily downward through the Conception, Initiation, Analysis, Design, Construction, Testing, Implementation, and Maintenance phases, treats testing as a separate phase that follows the completion of coding. Regardless of where software testing takes place in the product life cycle, it is an important step in the software development process and is indispensable for ensuring that the developed code meets its purpose.
Black Box Testing
Black box testing is a data- and input/output-driven approach that checks for compliance with the functional specifications without regard to the program's internal structure. The broad method is to feed various inputs to the program and compare the resulting outputs against the functional specification to validate correctness.
One sample black box testing procedure is boundary value analysis, whose goal is to check whether the inputs and outputs match the given specifications. To undertake such testing:
- Extract the expected input and output values from the specifications
- Group the extracted values into sets with identifiable boundaries, each set containing values that are supposedly processed the same way. The boundary between two groups is the value at which behavior changes, and it is either the minimum or the maximum value of a group.
- Generate inputs and outputs that fall on either side of each boundary value to create test cases, and probe the reliability of the code by varying those test case values by the smallest increment possible.
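The steps above can be sketched in Python. The range limits and the `is_valid_age` function below are hypothetical stand-ins for values that would come from a real specification:

```python
def boundary_values(low, high, step=1):
    """Generate the classic boundary-value test inputs for a valid range [low, high]:
    values just below, on, and just above each boundary."""
    return [low - step, low, low + step, high - step, high, high + step]

# Hypothetical function under test: the spec says ages 18 through 65 are valid.
def is_valid_age(age):
    return 18 <= age <= 65

# Expected results come straight from the specification, not from the code.
for value in boundary_values(18, 65):
    expected = 18 <= value <= 65
    actual = is_valid_age(value)
    assert actual == expected, f"age {value}: expected {expected}, got {actual}"
```

A common off-by-one bug (writing `<` where the spec means `<=`) changes behavior only at a boundary, which is exactly where these generated cases probe.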
Follow some best practices to enhance the procedure:
- Execute the test cases on all supported platforms (multi-platform testing)
- Launch a beta version, either in-house or with a small group of selected customers, to gather feedback from actual use and fix errors before launching the full-blown version
- Keep track of upgrades so you can revert to a previous state if regression occurs owing to recent changes
Robustness testing entails checking for problems such as machine crashes, process hangs, and abnormal termination. The software testing procedures under this type of testing include:
- Stress testing, or load testing: subjecting the software to stressors such as resource exhaustion, sudden activity bursts, and sustained high loads, to test for resilience
- Security testing to identify bugs that compromise computer security. The procedures for doing so are many. For instance, the Department of Homeland Security, in collaboration with the MITRE Corporation, offers bug-squashing tools along with a list of the 25 most common software errors, which helps businesses identify common bugs and vulnerabilities in their software.
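A minimal load-test sketch, assuming a hypothetical `handle_request` function as the unit under stress; a real stress test would also vary payloads, sustain the load over time, and deliberately exhaust resources:

```python
import concurrent.futures
import time

# Hypothetical function under test: pretend it services one request.
def handle_request(payload):
    return {"ok": True, "size": len(payload)}

def load_test(func, payload, workers=50, calls=1000):
    """Fire `calls` invocations of `func` across `workers` threads (a sudden
    activity burst) and count how many raise exceptions."""
    failures = 0
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(func, payload) for _ in range(calls)]
        for f in concurrent.futures.as_completed(futures):
            try:
                f.result()
            except Exception:
                failures += 1
    elapsed = time.perf_counter() - start
    return failures, elapsed

failures, elapsed = load_test(handle_request, "x" * 1024)
print(f"{failures} failures in {elapsed:.2f}s")
```

Tracking the failure count and elapsed time across increasing worker counts shows where throughput degrades or the code starts crashing or hanging.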
White Box Testing
The white box, or glass box, testing method analyzes the structure and flow of the software to unearth design problems such as excessive bandwidth and CPU usage, delayed stimulus-response times, growing queue lengths, and more. The elements considered are the programming language, logic, and style rather than input and output values.
Some procedures to undertake white-box testing include:
- Control-flow testing, loop testing, and data-flow testing, all of which entail mapping the corresponding flow structure of the software onto a directed graph. Test cases derived from the program structure cover every path at least once, and running such tests unearths "dead" (never-executed, redundant) code.
- Mutation testing, which deliberately introduces faults to demonstrate the adequacy of the test cases. Copy the original code many times and perturb each copy to create mutants, each containing exactly one fault, such as a changed operator or some other small error. Apply the test cases to both the original and each mutant, and compare the results. If a mutant returns the same results as the original for every test case, the test suite is inadequate: it failed to detect the seeded fault. If some test case produces a different output on the mutant than on the original, the mutant is "killed," and the suite is adequate with respect to that fault.
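The mutation procedure can be illustrated with a toy example. The `discount` function and its hand-seeded mutant below are hypothetical; real mutation tools generate the mutants automatically:

```python
# Original program and a hand-seeded mutant (">" perturbed to ">=").
def discount(qty):
    return 0.9 if qty > 10 else 1.0

def discount_mutant(qty):
    return 0.9 if qty >= 10 else 1.0   # the single injected fault

# A test suite is a list of (input, expected output) pairs.
def kills(suite, program):
    """A suite 'kills' a mutant if some case yields a wrong output."""
    return any(program(arg) != expected for arg, expected in suite)

weak_suite = [(5, 1.0), (20, 0.9)]        # never probes qty == 10
strong_suite = weak_suite + [(10, 1.0)]   # exercises the boundary

assert not kills(weak_suite, discount)           # original passes both suites
assert not kills(weak_suite, discount_mutant)    # mutant survives: suite inadequate
assert kills(strong_suite, discount_mutant)      # mutant killed: suite adequate here
```

The surviving mutant tells the tester exactly which input region the weak suite never exercised, tying mutation testing back to boundary value analysis.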
Software testing is a costly process, and the method adopted depends on a trade-off between budget, time, and required quality levels. Automated procedures help save time and money, but automation is difficult to achieve because testing tools lack generic applicability and scalability. Testing stops when reliability meets the requirement, or when the benefit from further testing no longer justifies the testing cost.
A good test reveals the usability, robustness, reliability, and overall quality of the product to the stakeholders, and the extent to which it fulfills the project charter. It generates data for an estimation model of the software's present reliability and a prediction of its future reliability. Developers and project managers use such tests as a basis for further improvements or to determine the quality level of the software, and security specialists use them to identify possible vulnerabilities when preparing a security master plan.
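As a rough sketch of such an estimation, one simple input-domain approach estimates reliability as the fraction of test runs (drawn from the expected usage profile) that pass; the run data below is invented for illustration:

```python
def reliability_estimate(results):
    """Point estimate R = passes / runs from a list of pass/fail outcomes,
    where each run is sampled from the expected operational profile."""
    if not results:
        raise ValueError("no test results")
    return sum(results) / len(results)

# Invented data: 98 passing runs and 2 failures out of 100.
runs = [True] * 98 + [False] * 2
print(reliability_estimate(runs))  # 0.98
```

Tracking this estimate across successive builds gives the trend data that reliability-growth models extrapolate into a prediction of future reliability.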