Notes composed while reading Nick Jenkins's A Software Testing Primer.
Software was originally developed using the Waterfall Model, which borrowed its principles from engineering. The model's five steps are analysing the requirements of the project, designing a solution to meet those requirements, implementing the design, verifying the product against the design and requirements, and finally maintaining the product as necessary.
In reality these steps may be repeated numerous times, even at every phase of the project. On larger projects with longer development cycles, a great deal of time can pass between when the requirements are defined and when the implementation is under way, and during that time the requirements may change.
Jenkins references newer models that have made improvements, but never delivered the “quantum improvements they have promised.” In the Iterative or Rapid Development model (e.g. Rapid Application Development), smaller chunks of functionality are delivered in a step-by-step manner. The Incremental Development model takes a modular approach, in which units are developed separately and combined at the conclusion of the production phase.
Even newer models have had some success, but none have reached an order-of-magnitude improvement in productivity, reliability or simplicity. In 1986 Fred Brooks argued for a disciplined, ordered model that delivered step-wise improvements to the process.
Unlike planning, design and QA, testing reduces the number of defects within a product. Testing reduces risk and achieves this goal best when implemented throughout the development cycle. This is achieved with unit, integration and system testing.
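As a minimal sketch of what the lowest level of this looks like, a unit test in Python might resemble the following; the calculate_total function and its behaviour are invented for illustration, not taken from the primer.

```python
import unittest

def calculate_total(prices, tax_rate):
    """Unit under test: sums the prices and applies a flat tax rate."""
    return round(sum(prices) * (1 + tax_rate), 2)

class CalculateTotalTest(unittest.TestCase):
    def test_applies_tax_to_sum(self):
        # Unit testing exercises one small piece of logic in isolation.
        self.assertEqual(calculate_total([10.00, 5.00], 0.10), 16.50)

    def test_empty_order_costs_nothing(self):
        self.assertEqual(calculate_total([], 0.10), 0.00)

if __name__ == "__main__":
    unittest.main()
```

Integration and system testing follow the same pattern but exercise progressively larger assemblies of units rather than a single function.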
Testing can also be used to measure the progress of a project.
A good tester assumes that the product they are approaching is already flawed and it is their job to illuminate the flaws by asking questions.
Developers often disregard ambiguities in order to expedite the development cycle, but the assumptions they make may not meet the end user’s needs.
The cost of fixing a bug increases as the development cycle progresses, so more passes of testing at each phase decrease the total cost. Each pass represents a retest, while testing parallel areas affected by a change is known as regression testing. Developers typically cannot specify the exact scope and effects of their changes, which is why regression testing is needed.
Black box testing verifies the input and output of a module without knowledge of the internal process. This is exemplified by User Acceptance Testing and Systems Testing.
White box testing verifies the internal logic and the code itself. This is exemplified by technical testing and code reviews within the scope of unit testing.
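The contrast can be illustrated with a small sketch (the shipping_cost function and its pricing rules are invented for illustration): the black box test checks only the documented input and output, while the white box tests deliberately target each branch the tester knows is in the code.

```python
import unittest

def shipping_cost(weight_kg):
    # Internal logic: a validation path plus two pricing branches.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:
        return 5.00
    return 5.00 + (weight_kg - 2) * 1.50

class BlackBoxTest(unittest.TestCase):
    # Black box: only the specified input/output behaviour is checked.
    def test_small_parcel_flat_rate(self):
        self.assertEqual(shipping_cost(1.5), 5.00)

class WhiteBoxTest(unittest.TestCase):
    # White box: the tester knows the code and targets every branch,
    # including the boundary at 2 kg and the error path.
    def test_boundary_at_two_kg(self):
        self.assertEqual(shipping_cost(2), 5.00)

    def test_heavy_parcel_branch(self):
        self.assertEqual(shipping_cost(4), 8.00)

    def test_invalid_weight_raises(self):
        with self.assertRaises(ValueError):
            shipping_cost(0)

if __name__ == "__main__":
    unittest.main()
```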
The majority of testing is focused on verifying the product against the requirements, but verification alone does not ensure that those requirements meet the users' goals. Validation uses external sources to confirm that the requirements meet the users' goals. This is exemplified by usability testing.
Functional testing (a form of black box testing) compares the product to the requirements outlined during the requirements phase of the cycle and to those of the end user. Discrepancies may arise when the product has additional, unspecified functionality.
The Alpha development milestone may include numerous bugs and an incomplete interface, but it allows for end-to-end system testing. Once a product has reached the Beta milestone, the focus turns to performance, defect elimination, and cosmetic work. The first end-user tests take place at this stage.
White Box testing uses static analysis (e.g. code inspection), which includes human reviews of uncompiled code and automated checks for semantic and logical errors. It also uses dynamic analysis, which examines the compiled code as it runs, for example by testing its performance.
During unit and integration testing, sub- and super-units can be replaced with simulations. A simulated sub-unit is called a stub, while a simulated super-unit is referred to as a driver. Once the product is assembled, system testing checks all possible input conditions and exception handling.
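A rough sketch of how a stub and a driver might look with Python's unittest, assuming a hypothetical order_total unit that depends on a price_service sub-unit that has not been built yet; the names are illustrative only.

```python
import unittest
from unittest import mock

def order_total(order_id, price_service):
    """Unit under test: depends on a lower-level price_service sub-unit."""
    prices = price_service.prices_for(order_id)
    return sum(prices)

class OrderTotalTest(unittest.TestCase):
    def test_total_with_stubbed_price_service(self):
        # Stub: a simulated sub-unit standing in for the real price service.
        stub_service = mock.Mock()
        stub_service.prices_for.return_value = [10.0, 2.5]
        # This test method acts as the driver: a simulated super-unit that
        # calls order_total before any real caller exists.
        self.assertEqual(order_total("A-1", stub_service), 12.5)
        stub_service.prices_for.assert_called_once_with("A-1")

if __name__ == "__main__":
    unittest.main()
```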
Acceptance testing is an optional final phase of testing that is more common on larger projects. The client often uses end-users to carry out User Acceptance Testing (UAT). Acceptance testing may cover documentation, process changes, training material, operational procedures, and operational performance measures.
Test automation provides an important set of tools, but it will not find more bugs than an experienced tester, it will not fix problems with the development process, and it may not even be faster. It carries a substantial fixed cost as well as high marginal costs, and requirements that change over time may make automation impractical.
Stable interfaces reduce the substantial hidden cost of maintaining automated tests, but interface stability is often impossible to control.
Automated testing excels at load and performance testing, smoke testing as a baseline check of a successful build, regression testing to make sure there have been no unintended changes in functionality, generating input/output data or test conditions, and repetitive testing.
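As a sketch, a smoke test and a small data-driven regression suite might look like this in Python; the parse_quantity function and its cases are purely illustrative.

```python
import unittest

def parse_quantity(text):
    """Function covered by an automated suite."""
    value = int(text.strip())
    if value < 0:
        raise ValueError("quantity cannot be negative")
    return value

class SmokeTest(unittest.TestCase):
    def test_basic_path_works_at_all(self):
        # Smoke test: one cheap check that the build is usable at all.
        self.assertEqual(parse_quantity(" 3 "), 3)

class RegressionTest(unittest.TestCase):
    def test_previously_reported_inputs(self):
        # Regression/repetitive testing: replay a table of inputs, including
        # ones that caused past defects, to catch unintended changes.
        cases = [("0", 0), ("42", 42), ("007", 7)]
        for text, expected in cases:
            with self.subTest(text=text):
                self.assertEqual(parse_quantity(text), expected)

if __name__ == "__main__":
    unittest.main()
```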
Automation testing code should follow the same cycle as normal development, but with more rigorous coding standards.
The project requirements, design, and technical specifications can also be tested, just as the implementation is tested. This process consists of verifying their accuracy and internal consistency, validating them against real user expectations, and checking their consistency with earlier and later phases.
These tests should make sure the terminology is specific, measurable, and testable. The prose should be consistent with other requirements, clear and concise, and exclusive by stating what should not be done.
Requirements focus on what to do, while design documents focus on how to accomplish the requirements.
Usability testing uses a prototype to evaluate the product against the end-users' needs. User input can help select between candidate solutions, but users should not define the solutions themselves; a problem uncovered here is almost always a design problem, not a user problem. End-users should be carefully selected to be representative of the user base. Since usability testing is a design tool, it should be conducted early in the development cycle, alongside the design process.
Performance testing is ideally done on live systems instead of test systems, but that is more expensive and riskier if the product fails. Automated tools such as capture-and-playback utilities should exercise as much variation as possible. Special focus should be given to weak points (i.e. bottlenecks).
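A crude load-measurement sketch follows; the lookup function stands in for whatever bottleneck is under scrutiny, and the numbers are arbitrary. Real performance testing would use dedicated tooling, but the idea of hammering a weak point with varied input and recording throughput is the same.

```python
import time

def lookup(records, key):
    """Hypothetical hot path (bottleneck) to be exercised under load."""
    return records.get(key)

def measure_load(iterations=100_000):
    # Hammer the bottleneck with varied keys and report throughput,
    # so trends can be compared from one run to the next.
    records = {f"user-{i}": i for i in range(1_000)}
    start = time.perf_counter()
    for i in range(iterations):
        lookup(records, f"user-{i % 1_000}")
    elapsed = time.perf_counter() - start
    print(f"{iterations / elapsed:,.0f} lookups/second")

if __name__ == "__main__":
    measure_load()
```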
Tests should be planned because complete coverage of all aspects of a product is impossible. Similarly, not all issues can be resolved before a product should be released. The users will uncover the last issues at a lower cost and over less time than a dedicated team.
Start planning by outlining all categories and sub-categories that may impact the development. Examples include functionality, code structure, user interface, internal interfaces, external interfaces, possible inputs, possible outputs, physical components, data storage, environment (e.g. operating system), hardware platform, user configuration, and use-case scenarios.
Test cases may validate a single part of one category or multiple parts of various categories. The test cases should cover as many categories as possible, but be prioritised by the level of risk they address.
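One possible way to record such test cases, sketched in Python with invented examples: each case lists the categories it touches and a risk level, and the suite is ordered so the highest-risk cases run first.

```python
from dataclasses import dataclass

@dataclass
class PlannedTestCase:
    name: str
    categories: set   # e.g. {"user interface", "possible inputs"}
    risk: int         # 1 = low, 3 = high

cases = [
    PlannedTestCase("reject malformed upload", {"possible inputs", "external interfaces"}, 3),
    PlannedTestCase("resize main window", {"user interface"}, 1),
    PlannedTestCase("save under low disk space", {"data storage", "environment"}, 2),
]

# Order the suite so that cases addressing the highest risks come first.
for case in sorted(cases, key=lambda c: c.risk, reverse=True):
    print(case.risk, case.name, sorted(case.categories))
```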
Focus should be given to tests that are successful at finding bugs. Often bugs lurk in common areas such as unnecessarily complex code, areas with a history of defects, behaviour under heavy load, and reused code.
There are two general ways to approach test scripting. Risk-averse industries, such as defence and finance, focus on verifiable test preparation, while other commercial software development teams that employ an experienced tester may simply give that person the freedom to track down bugs as he or she sees fit, an approach known as exploratory testing.
There should be at least one test case for each requirement to check the positive outcome with correct input. Checking the negative outcome with incorrect input is harder to cover because of the often massive number of potential negative test cases.
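For example, given a hypothetical requirement that usernames be 3 to 20 alphanumeric characters, one positive case confirms the expected outcome while a handful of representative negative cases stand in for the many possible invalid inputs.

```python
import unittest

def valid_username(name):
    """Hypothetical requirement: usernames are 3-20 alphanumeric characters."""
    return name.isalnum() and 3 <= len(name) <= 20

class UsernameRequirementTest(unittest.TestCase):
    def test_positive_case_correct_input(self):
        # At least one test per requirement confirming the expected outcome.
        self.assertTrue(valid_username("alice42"))

    def test_negative_cases_incorrect_input(self):
        # Negative cases are open-ended; only representative ones are scripted.
        for bad in ["ab", "", "a" * 21, "bad name!"]:
            with self.subTest(bad=bad):
                self.assertFalse(valid_username(bad))

if __name__ == "__main__":
    unittest.main()
```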
Test cases can be managed with a simple spreadsheet, but larger projects will likely require a specialized management tool.
A tester should be prepared to modify the testing procedure if unexpected events occur, for example if the scripts fail to find any bugs after several weeks of development.
The best way to deal with a time crunch is to prioritise your tasks at the beginning of the process.
What are the most common bug tracking systems in commercial and open source environments? What makes a good bug report? Reports should be objective, specific, concise, reproducible, and persuasive. Only one report should be written for each bug, and only one bug should be mentioned in each report. The bug should be isolated and then generalized.
Status flags for a bug include: new, assigned, rejected, fixed, ready for test, failed retest, and closed.
A classification system for the root cause of bugs allows a tester to find more general problems with the project. Categories may include: requirements, design error, code error, testing error, configuration, and existing bug.
Comparing categories with flags and other variables, such as time, allows for improved analysis of the project.
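A small sketch of how the status flags and root-cause categories above might be recorded so they can be cross-tabulated later; the record structure and field names are illustrative only.

```python
from enum import Enum

class Status(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    REJECTED = "rejected"
    FIXED = "fixed"
    READY_FOR_TEST = "ready for test"
    FAILED_RETEST = "failed retest"
    CLOSED = "closed"

class RootCause(Enum):
    REQUIREMENTS = "requirements"
    DESIGN_ERROR = "design error"
    CODE_ERROR = "code error"
    TESTING_ERROR = "testing error"
    CONFIGURATION = "configuration"
    EXISTING_BUG = "existing bug"

# Carrying both fields on each bug record makes it easy to cross-tabulate,
# e.g. counting how many closed bugs trace back to requirements problems.
bug = {"id": 101, "status": Status.CLOSED, "cause": RootCause.REQUIREMENTS}
print(bug["status"].value, bug["cause"].value)
```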