No matter what type of testing you are doing, it helps to have a scripted list of steps as part of your process. Once identified, that list helps ensure consistency between test runs and, if automated, gets you back into a position to test again as quickly as possible.
The following template is one that I've used for several years with success. You get the most mileage out of it by automating all, or most, of it with whatever scripting technologies work for you. My current personal favorite is NANT as the execution engine, driving a suite of VBScript or .BAT files that actually execute each specific step.
NANT's foreach construct makes it very easy to loop over a server list, getting consistent results across the board.
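For example, a minimal NAnt fragment along those lines might look like the following. The server list and the CollectLogs.vbs script are placeholders for whatever machines and per-server steps exist in your environment:

    <!-- Assumed comma-delimited list of test servers; adjust to your environment. -->
    <property name="servers" value="WEB01,WEB02,APP01" />

    <target name="run-step-on-all-servers">
      <foreach item="String" in="${servers}" delim="," property="server">
        <echo message="Running step against ${server}" />
        <!-- Hypothetical per-server step script invoked once per pass of the loop. -->
        <exec program="cscript" commandline="//nologo CollectLogs.vbs ${server}" failonerror="false" />
      </foreach>
    </target>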
PRE-Test
- Initialize Repository
- Create folder(s) that will be the repository for information from this test run. The name will probably be something like a test label plus a run number. If you're doing a lot of different types of testing, it will make sense to use more meaningful labels; otherwise, 'TESTRUN1' will likely work just fine. The important takeaway is that you need a friendly way to refer back to all the data that you will be collecting.
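As a rough sketch, the repository setup in NAnt might look like this, with the property names and the D:\TestRuns root being nothing more than illustrative defaults:

    <property name="test.label" value="TESTRUN1" overwrite="false" />
    <property name="run.number" value="1" overwrite="false" />
    <!-- e.g. D:\TestRuns\TESTRUN1\run-1 becomes the repository for this run. -->
    <property name="repository.dir" value="D:\TestRuns\${test.label}\run-${run.number}" />

    <target name="init-repository">
      <mkdir dir="${repository.dir}" />
    </target>

You can then bump the run number per invocation from the command line, e.g. nant -D:run.number=2 init-repository.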
- Capture Configuration
- Copy all configuration files to the repository. The purpose of capturing the configuration is that you may need to correlate test results back to the specific configuration that was in effect during a particular test run. Examples of configuration files are machine.config, web.config, app.config, and .INI files. Consider exporting any registry settings to .REG files. If you're using non-.NET applications and services, then look for those files as well; Apache and WebSphere are two very file-centric server applications.
- Clear, or delete, any log files, so that any event notifications you receive are from this particular test run.
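Here is a sketch of both the configuration capture and the log clearing in NAnt. The application folder, registry key, and file patterns are assumptions you would swap for your application's real locations:

    <target name="capture-configuration">
      <!-- Copy configuration files from an assumed application folder into the repository. -->
      <copy todir="${repository.dir}\config">
        <fileset basedir="D:\Apps\MyApp">
          <include name="**/*.config" />
          <include name="**/*.ini" />
        </fileset>
      </copy>
      <!-- Export an assumed registry key to a .REG file alongside the configuration files. -->
      <exec program="regedit" commandline="/e ${repository.dir}\config\MyApp.reg HKEY_LOCAL_MACHINE\SOFTWARE\MyApp" />
    </target>

    <target name="clear-logs">
      <!-- Delete prior log files so anything collected later belongs to this run. -->
      <delete failonerror="false">
        <fileset basedir="D:\Apps\MyApp\Logs">
          <include name="*.log" />
        </fileset>
      </delete>
    </target>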
- Start Services
- Identify the order in which the services need to be started, then start them in preparation for the test run. These services are anything your application needs to function: IIS, BizTalk, ERP, or WMS services.
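A sketch of the start-up target using net start and sc.exe through NAnt's exec task; the service names and the order shown are examples only, not a recommendation for your environment:

    <target name="start-services">
      <!-- Start dependencies first; service names here are examples only. -->
      <exec program="net" commandline="start MSDTC" failonerror="false" />
      <exec program="net" commandline="start W3SVC" failonerror="false" />
      <!-- Example BizTalk host instance started on a remote box via sc.exe. -->
      <exec program="sc" commandline="\\APP01 start BTSSvc$BizTalkServerApplication" failonerror="false" />
    </target>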
- Start Monitoring
- Make sure that your monitoring scripts fire up as well and that you are collecting data from the run. Unless you're specifically measuring startup activity, you may wish to discard, or ignore, the 'warm-up' period of your application.
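One way to script the monitoring start is logman against a Perfmon counter log you have already defined on the box; 'TestRunCounters' below is a placeholder name:

    <target name="start-monitoring">
      <!-- Start a pre-defined Perfmon counter log; the collection name is a placeholder. -->
      <exec program="logman" commandline="start TestRunCounters" failonerror="false" />
    </target>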
Test
- Begin any test scripts that will be driving the load for your applications. These could be automated load tools such as SilkPerformer, LoadRunner, or LoadGen, or even jobs kicked off by the Windows Task Scheduler.
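The template doesn't care which load tool you use; the execution engine just needs to kick it off and wait for it to finish. The RunLoadTest.cmd wrapper below is hypothetical, standing in for your tool's actual command line:

    <target name="run-load-test">
      <!-- Hypothetical wrapper around the load tool's command line (SilkPerformer, LoadRunner, LoadGen, ...). -->
      <exec program="cmd" commandline="/c RunLoadTest.cmd ${test.label} ${run.number}" />
    </target>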
POST-Test
- Stop Services
- If there is an order in which the services needed to be started, then you will most likely want to reverse that order for shutdown. Many times you will need to stop services in order to execute the data collection.
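The shutdown target is just the mirror image of the start-up target, again with example service names:

    <target name="stop-services">
      <!-- Reverse of the order used in start-services; service names are examples only. -->
      <exec program="sc" commandline="\\APP01 stop BTSSvc$BizTalkServerApplication" failonerror="false" />
      <exec program="net" commandline="stop W3SVC" failonerror="false" />
      <exec program="net" commandline="stop MSDTC" failonerror="false" />
    </target>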
- Data Collection
- Copy all log files to the test repository (e.g. application logs and server logs, such as those from IIS)
- Copy all Windows event logs to the repository
- Copy all performance (Perfmon) logs to the repository
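A sketch of the collection step is below. The IIS and Perfmon paths are the usual Windows Server 2003 defaults, but treat them, and the straight file copy of the .evt event logs, as assumptions to verify against your servers:

    <target name="collect-data">
      <!-- Application and IIS logs. -->
      <copy todir="${repository.dir}\logs\iis">
        <fileset basedir="C:\WINDOWS\system32\LogFiles\W3SVC1">
          <include name="*.log" />
        </fileset>
      </copy>
      <!-- Windows event logs; the copy may fail if the Event Log service has the .evt files locked. -->
      <copy todir="${repository.dir}\logs\eventlogs" failonerror="false">
        <fileset basedir="C:\WINDOWS\system32\config">
          <include name="*.evt" />
        </fileset>
      </copy>
      <!-- Stop the Perfmon counter log started earlier, then pull in its output. -->
      <exec program="logman" commandline="stop TestRunCounters" failonerror="false" />
      <copy todir="${repository.dir}\logs\perfmon">
        <fileset basedir="C:\PerfLogs">
          <include name="*.blg" />
        </fileset>
      </copy>
    </target>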
- Clean up
- You probably have lots of 'data bits' floating around the test environment. Examples of these might be files in folder locations, MSMQ messages, rows in database queues, etc.
- Reset your data tier to a known good state. If your data lends itself to just flipping a status bit to get back to a known good state, then execute the SQL to do this. If not, then you will want to restore your application's 'golden' backup.
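For the 'flip a status bit' case, here is a small sketch using osql; the database name and the reset_status.sql script are placeholders, and the alternative is simply a RESTORE DATABASE from your golden backup:

    <target name="reset-data">
      <!-- Purge leftover files and MSMQ messages with your own scripts, then reset the database. -->
      <!-- reset_status.sql is a placeholder, e.g. UPDATE Orders SET Status = 'NEW' WHERE Status = 'PROCESSED'. -->
      <exec program="osql" commandline="-E -d MyAppDb -i reset_status.sql" />
    </target>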