Saturday, August 25, 2007

Test Process Template

No matter what type of testing you might be doing, it helps to have, as part of your process, a list of scripted steps to follow.  Once identified, the list of steps helps ensure consistency between test runs and, if automated, gets you back into a position to test again as soon as possible.

The following template is one that I've used for several years with success.  What really helps get the extra mileage out of it is automating all, or most, of it using whatever scripting technologies work for you.  My current personal favorite is NAnt as the execution engine, with a suite of VBScript, or even .BAT, files that actually execute each specific step.

NAnt's foreach task makes it very easy to loop over a server list, getting consistent results across the board.
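As a hedged sketch of what that loop might look like in a NAnt build file (the target name, servers.txt, and collect.vbs are illustrative names, not part of the original post):

```xml
<!-- Hedged NAnt sketch: run a per-server collection script for each entry
     in a server list.  servers.txt and collect.vbs are illustrative names. -->
<target name="collect-logs">
  <foreach item="Line" in="servers.txt" property="server">
    <exec program="cscript.exe">
      <arg value="//nologo"/>
      <arg value="collect.vbs"/>
      <arg value="${server}"/>
    </exec>
  </foreach>
</target>
```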


  1. Initialize Repository
    1. Create the folder(s) that will be the repository for information from this test run.  The name will probably be something like a test label plus a run number.  If you're doing a lot of different types of testing, then more meaningful labels will make sense.  Otherwise, 'TESTRUN1' will likely work just fine.  The important takeaway is that you need a friendly way to refer back to all the data that you will be collecting.
  2. Capture Configuration
    1. Copy all configuration files to the repository.  The purpose of capturing the configuration is that you may need to correlate test results back to the specific configuration that was in effect during a particular test run.  Examples of configuration files are: machine.config, web.config, app.config, .INI files.  Consider exporting any registry settings to .REG files.  If you're using non-.NET applications and services, then look for those files as well.  Apache and WebSphere are two very file-centric server applications.
    2. Clear, or delete, any log files so that any event notifications received are from this particular test run.
  3. Start Services
    1. Identify the order in which the services need to be started, then start them in preparation for the test run.  These services are anything your application needs to function: IIS, BizTalk, ERP or WMS services.
  4. Start Monitoring
    1. Make sure that your monitoring scripts fire up as well and that you are collecting data from the run.  Unless you're specifically measuring startup activity, you may wish to discard, or ignore, the 'warm-up' period of your application.


  5. Execute Tests
    1. Begin any test scripts that will be driving the load for your applications.  This could be automated testing such as SilkPerformer, LoadRunner, LoadGen, or even Windows Task Scheduler.
  6. Stop Services
    1. Most likely, if there is an order in which the services needed to be started, then you will want to reverse that order for shutdown.  Many times you will need to stop services in order to execute the data collection.
  7. Data Collection
    1. Copy all log files to the test repository (e.g. application and server logs, such as IIS logs)
    2. Copy all Windows event logs to the repository
    3. Copy all performance (Perfmon) logs to the repository
  8. Clean Up
    1. You probably have lots of 'data bits' floating around the test environment.  Examples might be files in folder locations, MSMQ messages, rows in database queues, etc.
    2. Reset your data tier to a known good state.  If your data lends itself to just flipping a status bit to get back to a known good state, then execute the SQL to do this.  If not, then you will want to restore your application's 'golden' backup.
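A minimal shell sketch of the first two phases (Initialize Repository, Capture Configuration) might look like the following; the paths and the 'TESTRUN1' label are illustrative, and the same logic translates directly to the .BAT or VBScript files mentioned above:

```shell
# Hedged sketch of 'Initialize Repository' and 'Capture Configuration'.
# /tmp/app and the TESTRUN1 label are illustrative; adjust for your environment.
REPO=/tmp/testruns/TESTRUN1
mkdir -p "$REPO/config" "$REPO/logs"

# Simulate an application config to capture (in practice: machine.config,
# web.config, app.config, .INI files, exported .REG files).
mkdir -p /tmp/app
echo "<configuration/>" > /tmp/app/app.config

# Capture configuration into the repository for later correlation.
cp /tmp/app/*.config "$REPO/config/"

# Clear application logs so all events belong to this test run.
: > /tmp/app/app.log

ls "$REPO/config"
```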

Friday, August 24, 2007

Identifying BizTalk Hosts

Nick Heppleston has a post on how to identify the BizTalk host by correlating the process identifier (PID) to the results of executing tasklist.  Check out his post for the details.

I'd like to offer up an alternate method that is pretty much the method our team uses: using Process Explorer found in the newly christened Microsoft SysInternals Suite.

From an operational perspective, it's pretty easy to fire up Process Explorer in place of Task Manager to get a wealth of information about the processes running on your system.  Check out the Windows SysInternals Suite site for more information and downloads.

Each BizTalk host is started with a set of command line parameters passed to it.  One of these parameters will be the name of the BizTalk Host that the process is an instance of.  You can view these command line parameters via Process Explorer and identify which hosts are consuming the most CPU, Memory, etc.
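If you prefer a quick command-line check, WMIC can show the same command-line column (assuming the default BizTalk host service executable name, BTSNTSvc.exe):

```
wmic process where "name='BTSNTSvc.exe'" get ProcessId,CommandLine
```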

Once you're running Process Explorer, add the command line column (right-click the column header -> check the command line option), and that view will be saved as part of your user preferences.

Here is a snapshot:

[Screenshot: BizTalk host instances in Process Explorer]

Notice that you can see the two BizTalk host instances 'Tracking' and 'MessagingReceive' that are running on this server.

Thursday, August 23, 2007

BizTalk Testing - Part 2: Mocking an Endpoint

Last time, we talked about a couple of different methods to create the load for your BizTalk solutions.  Check out BizTalk Testing - Part 1: Generating Load for some ideas.

BizTalk Server is an integration platform at its core, so it makes sense that you will have requirements to test the integration between two, or more, systems.  When I say test, I'm also referring to a volume test, not a functional or validation test.

Unless you've lucked out and are only dealing with the simplest of application integration scenarios, that of one-way delivery to some sort of durable storage (File, MSMQ, SQL), you're likely to have to correlate your request message to some sort of application-level response.  I'm not talking about a protocol acknowledgement (TCP/IP, HTTP, or MSMQ).  I'm talking about an application-level acknowledgement where the target system acknowledges the successful receipt, and sometimes processing, of a message.  This is pretty standard behavior with many key enterprise systems (CRM, ERP, WMS).

For instance, suppose you have an orchestration that subscribes to the Item message and executes the following itinerary:

  1. Receive Item
  2. Transform Item to a format the WMS system can understand, e.g. WMS Item
  3. Send WMS Item
  4. Receive Correlated Application Acknowledgement
  5. On Error, call a common error-handling strategy

As long as the target system responds, your process gracefully finishes executing.  However, if it never responds with the acknowledgement, then the BizTalk process will continue to have messages stack up in the Message Box and you'll have a bunch of half-completed orchestrations.  What are we supposed to do then?

What we need is a way for BizTalk to get the messages it needs to keep the heart of your messaging bus in a healthy pumping state.

This is where mocking the endpoint becomes crucial to being able to test the integration tier.  Wikipedia gives a reason to use a mock: "mock objects can simulate the behavior of complex, real (non-mock) objects and are therefore useful when a real object is difficult or impossible to incorporate into a unit [volume, or load] test. If an object has any of the following characteristics, it may be useful to use a mock object in its place: ...when the real object does not yet exist or may change behavior."

While I think that developing a WMS system would be a grand challenge, that's usually not what I'm thinking about when I'm trying to test integration between an ERP and a WMS system.  I'm less concerned with writing the code to actually receive and put away inventory than I am with being able to generate the appropriate response to the originating message request.  Sometimes the messages from the endpoint system actually originate there, e.g. inventory transactions notifying the ERP of inventory changes (breakage, spillage, quality holds, etc.); these are not actually a response to a request, but they are still something to consider when 'mocking' an endpoint.

Using Xml, XSLT, a little custom code and Windows Task Scheduler, you can quickly build a 'responder' that allows you to focus on testing what's important to you: the integration.  It's actually easier than you might think.  Of course, if it is your job to actually test all those WMS processes, then you've got your work cut out for you.

Been There, Done That

Yes, you can skin this cat a couple of different ways.  The latest incarnation that we've gotten some mileage out of is what we call 'TCopy', short for transport copy.  It allows us to copy files from one transport medium to another, with message inspection to apply transforms if required (most notably for application acknowledgements).  It supports FileToFile, FileToMSMQ, MSMQToMSMQ, MSMQToFile, and I believe we recently added FileToHTTP (one way), though I haven't used it.  Its problem?  It only has coarse control of the arrival rate for messages generated by the endpoint.  It's simply a command-line program (C#) that takes input from some sort of scheduler (Windows Task Scheduler), which is how we control the arrival rate (coupled with a batch size).

Here is a sample of just some of the command line options:

// copy from specific file to msmq
// in:file://c:\temp\input.xml out:msmq:\\DIRECT=OS:server01\private$\shipment

// copy from specific file to a file folder
// in:file://c:\temp\input.xml out:file://c:\temp

// copy the first 10 files from the input folder to the output folder
// in:file://c:\temp\inputFolder out:file://c:\temp count:10

// copy the top 10 messages from the input queue to the output folder
// in:msmq:\\DIRECT=OS:server01\private$\orders out:file://c:\temp count:10

// copy the top message from the input queue to the output queue
// in:msmq:\\DIRECT=OS:server01\private$\orders out:msmq:\\DIRECT=OS:server01\private$\orders_success
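The file-to-file mode with a count is simple to approximate.  Here is a hedged shell sketch of what the 'copy the first N files' behavior amounts to; the folder names are illustrative, and this is an illustration of the idea, not the team's actual C# tool:

```shell
# Hedged approximation of TCopy's file-to-file mode with count:N.
# Folder names are illustrative; the real tool also supports MSMQ endpoints.
IN=/tmp/tcopy_in
OUT=/tmp/tcopy_out
COUNT=3
mkdir -p "$IN" "$OUT"

# Create five sample messages in the input folder.
for i in 1 2 3 4 5; do
  echo "<order id=\"$i\"/>" > "$IN/msg$i.xml"
done

# Copy the first $COUNT files (by name order) to the output folder.
for f in $(ls "$IN" | head -n "$COUNT"); do
  cp "$IN/$f" "$OUT/$f"
done

ls "$OUT" | wc -l   # -> 3
```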

Note, the application configuration file supports 'registering' a message type so that an XSLT transform is applied prior to delivering the message to the destination. For example:

<add key="xsl/" value="c:\test\xslt\order to order success.xslt"/>

In the above, any time the transport engine identifies a message with a root node of 'order', and an xmlns of '', it will apply the XSLT transform identified by the configuration prior to delivering it to the target endpoint.  Anyhow, it can be an incredibly useful piece of code when trying to 'mock' an application response that might not be available.  I've seen too many testing efforts stall because an application wasn't available (for a myriad of reasons) when the integration team was available for testing.  Or, worse, I've seen them come to a complete stop because of the mentality of 'we can't test because we don't have system x available'.  If you break the problem down into data and protocol, you might find that you can use a method like the one outlined above to get some level of testing done.
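As a sketch of what such a registered transform might look like (the element names order, order_success, id and status are illustrative, not the real schemas):

```xml
<!-- Hedged sketch of an 'order to order success' acknowledgement transform.
     Element names (order, order_success, id, status) are illustrative. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/order">
    <order_success>
      <!-- echo the order id back so the caller can correlate the response -->
      <id><xsl:value-of select="id"/></id>
      <status>OK</status>
    </order_success>
  </xsl:template>
</xsl:stylesheet>
```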

Sunday, August 19, 2007

BizTalk Testing - Part 1: Generating Load

Why does everyone (or is it just me) refer to this tool as "BizTalk" LoadGen?  There is nothing BizTalk-ish about it.  You're right, it's not a part of BizTalk, but all applications of it seem to be associated with BizTalk.  Why is that?  You've spent months devising plans on how to take over the world using BizTalk, but hey, guess what?  If no one pokes your BizTalk implementation with a stick (external stimuli, aka a message), what does your implementation do?  Nothing.  BizTalk can be an insatiable, slavering beast when it comes to message consumption, but if you're not feeding it, it will slumber away in the darkness waiting for that sharp poke.

Let's pretend that your organization integrates Item lifecycle events (add, update, retire) with several different applications throughout the enterprise.  You've built all of your integration architecture on top of BizTalk and now you're ready to test those BizTalk artifacts.  Unfortunately, the application that sources these events to your messaging bus is unable to be used to generate the load for your testing.  This used to mean you had to go write some sort of test harness to generate that load for you.

I've seen quite a few simple (but slick!) methods of generating this load.  Some of these methods are:

  • The Grass Roots Method: Having a file directory full of the data files that will be copied to a File receive location, using a batch file (or vbscript, jscript, etc) that is scheduled to execute using Windows Task Scheduler.
  • The Roll Your Own Method: Writing custom code (test harness) to do the file copies, similar to above, but using code.  Maybe here threading has been introduced to have multiple threads doing copies.  We probably also had the need to push data using multiple protocols (MSMQ, HTTP, etc).  Of course, now we've introduced complexity that we shouldn't be spending time on (but can be fun nonetheless). 
  • The Complete Package:  a full blown automated testing solution such as Silk Performer, LoadRunner, or Microsoft Team Edition for Software Testers.  Now, unless your development staff cross trains as Test Engineers, or you have them available, this can be even more complex.

These testing solutions will all work to varying degrees.  However, you may need to consider a 4th method that sits between the Roll Your Own Method and the Complete Package:

  •  The Pareto Method: Microsoft's free LoadGen.  It is more robust than the Grass Roots Method, removes most of the need for maintaining an often buggy Roll Your Own Method, and is much less expensive than a Complete Package.  With a fresh install, it will cover 80% of what you probably need to do when testing some basic application protocols: File, HTTP, MSMQ, SOAP, WSE.  The key is that all the dirty details of controlling arrival rates are taken care of for you in the core product.  You can also extend the tool if necessary in some very key areas, namely Transport, Message Creation and Monitoring.  There is only minimal documentation, so if you need to figure out how to write a new Transport for LoadGen, fire up Reflector and you'll be writing the transport for TCP/IP, JMS, or whatever else you might need.


Real World Example

Let's take another look at our Item integration briefly described above.  Given that one Item looks much like another at the messaging layer, and that we are only interested in the load placed upon the system, we could conceivably replace the source system with LoadGen.  We would need a single instance of an Item message for LoadGen to work with, but then we could have it generate as many messages as we want to test with.  It will be repeatable and we won't have a dependency on the source application.

A specific test case that we needed to execute recently was generating 30K of these Item messages that would be sent across our WAN to a remote data center.  We had the primary goal of monitoring the impact of the messages on the network, but a secondary goal of monitoring our BizTalk implementation.  In this case, the load agent was a machine running LoadGen publishing Item messages to a local BizTalk server group.  BizTalk then sent the messages across the WAN to a web service gateway in a remote data center.  There is another BizTalk server group waiting to broker those messages in the remote data center, but for the purposes of our current test, we are done. 

In the grand scheme of things, 30K messages flowing through BizTalk is nothing (ok, it depends on the infrastructure backing your solution); however, as with all but the simplest BizTalk implementations, there are many moving parts (Pipeline Components, Functoids, Orchestrations, Custom Libraries) that you will want to see under load.  LoadGen gives you the opportunity to monitor your solutions (application and infrastructure) under a variety of loads, without having source systems available.  This means you can be more proactive (vs. reactive) in identifying those bottlenecks, and other problems, before you've got other team members waiting on you.  Is the view 100% accurate with respect to how it will behave under 'true' production loads?  No.  But you may be surprised at how far that 80% will get you.

Web Service Testing with LoadGen

Yes, you may have heard of this tool referred to as 'BizTalk LoadGen', but in reality, it works very well for generating load against most types of endpoints.  Don't let the BizTalk name fool you.

Maybe you have a web service that you want to drive some load against to verify performance or stability?  Borland Silk Performer and Mercury LoadRunner are excellent testing tools (I've used both); however, in many cases they can be a little heavy-handed.  If you're unfamiliar with those toolsets, then trying to learn their ins and outs just to drive a little load against a single web service is overkill.

However, with a little patience, and the documentation, you can be up and running driving load against your web service in no time!  No more hand coding a test harness to drive the load, or monkeying with those pesky thread pools.  

Here is a sample web service implementation to illustrate the point:

using System.Diagnostics;
using System.Web.Services;

namespace shobu.samples
{
    /// <summary>
    /// Summary description for MessageService
    /// </summary>
    [WebService(Namespace = "")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class MessageService : System.Web.Services.WebService
    {
        [WebMethod]
        public string Echo(string message)
        {
            Debug.WriteLine("Echo: " + message);
            return message;
        }
    }
}

Now, with a simple LoadGen configuration file, we can drive different workloads against the service.  Here is an excerpt of a sample configuration (elided sections are marked with '...'); refer to the LoadGen documentation for the details.

<StopMode Mode="Files">
  ...
</StopMode>

<Transport Name="SOAP">
  ...
</Transport>

<Section Name="SoapSection">
  <SOAPHeader>SOAPAction: ""</SOAPHeader>
  <SOAPPrefixEnv>&lt;soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"&gt;&lt;soap:Body&gt;</SOAPPrefixEnv>
  <!-- US-ASCII Or UTF-8 Or UTF-16BE Or UTF-16LE or UNICODE -->
  ...
</Section>
Some of the things to note:

  • NumThreadsPerSection - the number of threads handling each workload.  I believe this applies to all sections, so there is no way, if I had two workloads (SOAP and File) in one configuration file, to assign a different thread count to each section's thread pool.  Don't quote me on that; I've not tried it.
  • SOAPHeader - I don't believe you can apply multiple SOAP headers.  Here, we are really just applying the necessary SOAPAction.

Control the arrival rate by modifying the NumThreadsPerSection, SleepInterval and LotSizePerInterval elements.  Modify the workload by adding additional sections, e.g. if you have multiple services you want to test on your server.
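As a hedged back-of-envelope calculation (assuming each thread submits LotSizePerInterval messages and then sleeps SleepInterval milliseconds, and ignoring the actual send time), the approximate arrival rate works out as:

```shell
# Hedged back-of-envelope arrival rate, assuming each thread sends
# LotSizePerInterval messages and then sleeps SleepInterval milliseconds.
NUM_THREADS=2
LOT_SIZE=10
SLEEP_INTERVAL_MS=1000

# Approximate messages/second across all threads (ignores actual send time).
echo $(( NUM_THREADS * LOT_SIZE * 1000 / SLEEP_INTERVAL_MS ))   # -> 20
```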

Change from a load test to more of a stability test by changing the StopMode to "Time" and allowing it to run overnight, or longer, while recording performance data of your service.

Here are the contents of the file (soap.txt) referenced by the SrcFilePath element:

<Echo xmlns="">
<message>hello, world</message>

LoadGen also has the ability to use MessageCreators to make your messages more dynamic.  For example, I could vary the string passed to the service on each call.  I'll look into that a little further soon.

Download LoadGen from Microsoft and give it a try.