Thursday, August 23, 2007

BizTalk Testing - Part 2: Mocking an Endpoint

Last time, we talked about a couple of different methods to create the load for your BizTalk solutions.  Check out BizTalk Testing - Part 1: Generating Load for some ideas.

BizTalk Server is an integration platform at its core, so it makes sense that you will have requirements to test the integration between two or more systems.  When I say test, I'm referring to a volume test, not a functional or validation test.

Unless you've lucked out and are only dealing with the simplest of application integration scenarios, one-way delivery to some sort of durable storage (File, MSMQ, SQL), you're likely to have to correlate your request message to some sort of application-level response.  I'm not talking about a protocol acknowledgement (TCP/IP, HTTP, or MSMQ).  I'm talking about an application-level acknowledgement where the target system acknowledges the successful receipt, and sometimes processing, of a message.  This is pretty standard behavior with many key enterprise systems (CRM, ERP, WMS).

For instance, suppose you have an orchestration that subscribes to the Item message and executes the following itinerary:

  1. Receive Item
  2. Transform Item to a format the WMS system can understand, e.g. WMS Item
  3. Send WMS Item
  4. Receive Correlated Application Acknowledgement
  5. On Error call a common error handling strategy

As long as the target system responds, your process gracefully finishes executing.  However, if it never responds with the acknowledgement, messages will continue to stack up in the Message Box and you'll have a bunch of half-completed orchestrations.  What are we supposed to do then?

What we need is a way for BizTalk to get the messages it needs to keep the heart of your messaging bus in a healthy pumping state.
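To keep those orchestrations completing, the mock endpoint has to hand back an acknowledgement carrying the same correlation token as the original request.  As a minimal sketch (in Python rather than the C# used elsewhere in this post, and assuming a hypothetical Item schema where an ItemId element is the correlation token), building the ack is just a matter of echoing that token back:

```python
import xml.etree.ElementTree as ET

def build_ack(request_xml: str) -> str:
    """Build an application-level acknowledgement for an Item request,
    echoing the correlation token (ItemId) so the correlated receive
    can route it back to the waiting orchestration instance."""
    request = ET.fromstring(request_xml)
    item_id = request.findtext("ItemId")           # the field correlation is initialized on
    ack = ET.Element("ItemAck")
    ET.SubElement(ack, "ItemId").text = item_id    # echo the token back
    ET.SubElement(ack, "Status").text = "Accepted"
    return ET.tostring(ack, encoding="unicode")

print(build_ack("<Item><ItemId>42</ItemId><Sku>ABC-1</Sku></Item>"))
```

The element names here are invented for illustration; the only thing that actually matters is that whatever field the orchestration's correlation set is initialized on comes back unchanged.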

This is where mocking the endpoint becomes crucial to being able to test the integration tier.  Wikipedia defines a reason to use a mock as "mock objects can simulate the behavior of complex, real (non-mock) objects and are therefore useful when a real object is difficult or impossible to incorporate into a unit volume, or load, test. If an object has any of the following characteristics, it may be useful to use a mock object in its place: ...when the real object does not yet exist or may change behavior."

While I think that developing a WMS system would be a grand challenge, that's usually not what I'm thinking about when I'm trying to test integration between an ERP and WMS system.  I'm less concerned with writing the code to actually receive and put away inventory than I am with generating the appropriate response to the originating message request.  Sometimes the messages from the endpoint system actually originate there, e.g. inventory transactions notifying ERP of inventory changes (breakage, spillage, quality holds, etc.).  These are not actually responses to a request, but they're still something to consider when 'mocking' an endpoint.

Using XML, XSLT, a little custom code, and Windows Task Scheduler you can quickly build a 'responder' that lets you focus on testing what's important to you: the integration.  It's actually easier than you might think.  Of course, if it is your job to actually test all those WMS processes, then you've got your work cut out for you.

Been There, Done That

Yes, you can skin this cat a couple of different ways.  The latest incarnation that we've gotten some mileage out of is what we call 'TCopy', short for transport copy.  It allows us to copy files from one transport medium to another, with message inspection to apply transforms if required (most notably for application acknowledgements).  It supports FileToFile, FileToMSMQ, MSMQToMSMQ, and MSMQToFile, and I believe that we recently added FileToHTTP (one way), though I haven't used it.  Its problem?  It only has coarse control over the arrival rate of messages generated by the endpoint.  It's simply a command line program (C#) that takes input from some sort of scheduler (Windows Task Scheduler), which is how we control arrival rate (coupled with a batch size).

Here is a sample of just some of the command line options:

// copy from specific file to msmq            
// in:file://c:\temp\input.xml out:msmq:\\DIRECT=OS:server01\private$\shipment

// copy from specific file to a file folder
// in:file://c:\temp\input.xml out:file://c:\temp

// copy the first 10 files from the input folder to the output folder
// in:file://c:\temp\inputFolder out:file://c:\temp count:10

// copy the top 10 messages from the input queue to the output folder
// in:msmq:\\DIRECT=OS:server01\private$\orders out:file://c:\temp count:10

// copy the top message from the input queue to the output queue
// in:msmq:\\DIRECT=OS:server01\private$\orders out:msmq:\\DIRECT=OS:server01\private$\orders_success
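For illustration, here's how options like the ones above might be parsed, sketched in Python (the actual TCopy is C#, and this scheme/address split is my assumption about its internals, not its real implementation):

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    scheme: str    # 'file' or 'msmq'
    address: str   # folder/file path, or MSMQ direct format name

def parse_options(args):
    """Parse TCopy-style options of the form in:<uri> out:<uri> count:<n>."""
    opts = {"count": 1}                    # default: copy a single message
    for arg in args:
        key, _, value = arg.partition(":")
        if key in ("in", "out"):
            if "://" in value:             # file://c:\temp\input.xml
                scheme, address = value.split("://", 1)
            else:                          # msmq:\\DIRECT=OS:server01\private$\orders
                scheme, _, address = value.partition(":")
            opts[key] = Endpoint(scheme, address)
        elif key == "count":
            opts["count"] = int(value)
    return opts

opts = parse_options([r"in:file://c:\temp\inputFolder",
                      r"out:msmq:\\DIRECT=OS:server01\private$\shipment",
                      "count:10"])
print(opts)
```

Note the two URI styles: file endpoints use the `scheme://` form while MSMQ endpoints use a direct format name after a single colon, so the parser has to handle both.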

Note, the application configuration file supports 'registering' a message type so that an XSLT transform is applied prior to delivering the message to the destination. For example:

<add key="xsl/order/" value="c:\test\xslt\order to order success.xslt"/>

In the above, any time the transport engine identifies a message with a root node of 'order' and an xmlns of '', it will apply the XSLT transform identified by the configuration prior to delivering it to the target endpoint.  Anyhow, it can be an incredibly useful piece of code when trying to 'mock' an application response that might not be available.  I've seen too many testing efforts stall because an application wasn't available (for a myriad of reasons) when the integration team was available for testing.  Or, worse, seen them come to a complete stop because of the 'we can't test because we don't have system x available' mentality.  If you break the problem down into data and protocol, you might find that a method like the one outlined above gets you some level of testing.
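As a sketch of that lookup (Python for brevity; the 'xsl/<root>/<xmlns>' key format is my assumption about how the registration key is composed), identifying the message type is just a matter of splitting the root element's namespace from its local name:

```python
import xml.etree.ElementTree as ET

def find_transform(config: dict, message_xml: str):
    """Look up a registered stylesheet for a message, keyed on the
    root node's local name and namespace -- mirroring an assumed
    'xsl/<root>/<xmlns>' key convention.  Returns None when no
    transform is registered (message passes through untouched)."""
    root = ET.fromstring(message_xml)
    if root.tag.startswith("{"):            # ElementTree's '{ns}local' form
        ns, local = root.tag[1:].split("}", 1)
    else:
        ns, local = "", root.tag
    return config.get(f"xsl/{local}/{ns}")

config = {"xsl/order/": r"c:\test\xslt\order to order success.xslt"}
print(find_transform(config, "<order><id>7</id></order>"))
```

Only the type identification and lookup are shown; actually applying the stylesheet would take an XSLT processor (System.Xml.Xsl in the C# original, or a third-party library such as lxml in Python).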
