December 2, 2011 | By Rey Villar
The best-executed system testing happens prior to the project ‘test phase.’ If that sounds like a catch-22, you’re right: how can system testing happen prior to the completion of development, before the ‘test phase’ even begins? In fact, if you read my last six blogs and followed my lead, you already have the prerequisites in the form of carefully crafted system test sets built for rapid execution and validation. For more guidelines, read on.
Move the code into the test environment after it has been thoroughly system tested.
I’ve never understood why projects rush to get code into the test environment. Think of the constraints on our working environments. Development is owned and controlled by the developers. Test, on the other hand, is locked down, typically by the QA team (and by all rights it should be). Working through functional canary tests in development is quicker, easier, and cheaper.
Execute the vast majority of system tests on the development integration platform.
A common practice is to migrate work from developer “sandboxes” to a development “integration” area, where code is assembled into a BI solution. The integration area is the ideal test platform. That move is at least an order of magnitude easier than moving the code out of your freedom-loving development environment into the locked-down test environment.
Make system testing a formality.
Development of system test materials, scripts, and the test harness needs to coincide with the arrival of unit code into the development integration environment. Require the development team to demonstrate successful system testing within the development integration area BEFORE moving the code into the test area. The goal is to make system testing a formality. (Mindful readers will note that I stated just the opposite in part #5. That rule only applies if you complete the majority of system testing in development.)
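To illustrate, the gate can be as simple as a script that refuses promotion until the full canary suite passes in the development integration area. Here is a minimal Python sketch; the function names and the one-callable-per-test shape are my own placeholders, not a reference to any particular test runner:

```python
def run_canary_suite(tests):
    """Run every test in the functional canary suite and collect failures.
    Each 'test' is a zero-argument callable that raises AssertionError on
    failure -- a stand-in for whatever test framework the project uses."""
    failures = []
    for test in tests:
        try:
            test()
        except AssertionError as exc:
            failures.append((test.__name__, str(exc)))
    return failures

def promotion_allowed(tests):
    # Code moves to the test environment only when the ENTIRE canary
    # suite passes in the development integration area -- making the
    # formal system test phase a formality.
    return not run_canary_suite(tests)
```

Wire this into whatever promotion checklist or script your team already uses, and a failed canary run simply blocks the move to test.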
Regression test by design.
Too many projects focus on passing failed tests while ignoring previously passed tests. The fallacy is that testing is complete when code fixes pass a retest. The reality is that code fixes can impact code that ran successfully in the past, causing test failure. Every set of code fixes mandates a test rerun. Fortunately, our automated, fast-running functional canary test datasets mean you can afford lots of test cycles (see part #6).
Every incremental code change requires the addition of a small set of tests to the functional canary test dataset. The entire test dataset runs together to verify that the incremental change and all prior code perform as expected. Each functional canary test set run validates the entire code base. This ‘black box’ treatment of the code (see part #2) ensures that every logic path – old and new – is retested each time the code is changed.
This is regression testing by design.
System testing expands to encompass new rules as the code base matures. Our functional canary data sets and scripts grow in scope and precision in step with it. The code base and the system tests mature in lockstep, all outside the confines of the test platform.
Rerun your test scripts on the system test environment to validate code promotion.
Typical project plans deliver system test scripts and data sets just in time for system testing. The initial migration from development to test uncovers defects related to the migration itself. Errors surface in the setup of the system test data sets and test scripts, even when the code is working perfectly. This happens because neither the migration process, the test setup, nor the code base is isolated. Discovering the source of each issue takes time, and yet more migrations. This is an expensive way to validate testing and migration.
Let’s say the code passes all system tests prior to migration to the system test environment. You will know (in development) how well the code is working, and the test scripts will be debugged before they reach the test environment. After promotion, repoint the test harness to the new environment and the right datasets, and you’re ready to rerun system testing on the test box. The system tests will now focus on the validity of the migration and the setup of the test environment; any differences can be attributed to those issues, not bugs in the code or test scripts. This can cut the time it takes to validate that the migration process works correctly from weeks to days. It is a cheap and expeditious way to validate migration.
Success is in the timing and coordination.
System testing done this way requires some TLC and more attention from management. Development of the test scripts and the test harness needs to be timed with the development and integration of the solution components. More coordination is required between the developers writing code and those creating test cases and test data.
Rethinking system testing
The testing process described in the seven parts of this blog may seem counter-intuitive, given that it proposes completing most system testing before the code ever touches the test box. You’ll need to rethink how your team conducts testing, consider the content and timing of test scripts and data sets, offer guidance to the development and testing teams, and find a skilled developer to tie the tests and validations together with appropriate automation (see part #6). None of this is beyond the skills of an experienced development team working with a good project manager.
This approach to system testing can help you hit project objectives and deliver the project sooner, earning kudos from management and the end-user community.
Good luck with your system testing efforts.
Please share your thoughts about this blog, or relate your own experiences.