Friday, November 30, 2007

Mary Kay in Kunming, China

These were forwarded to me by a co-worker.  I thought they were really nice pictures and wanted to share.

 

Check out the billboard in this one!  Ok...not the elegant Mary Kay billboard on the left...I'm talking about the one on the right. 

 

This picture was taken almost a year earlier, but in Shanghai.

 

 

I think I'm seeing a trend here...

Tuesday, November 13, 2007

SOS "Son of Strike" by Mark Smith

I'm a casual user of WinDbg, and usually appreciate the challenge of using it to diagnose issues.  Crazy, I know.  There are some really great resources out there on how to get started with it, but I had a pleasant surprise in this month's (December 2007) MSDN Magazine.  Check out the DevelopMentor Winter 2008 course schedule; there you will find this article by Mark Smith.  It's worth a read.

HTTPERR Connections_Refused

Symptom

IIS 6.0 website stops responding to requests.  Attempting to connect to the website's port with telnet fails.
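As an aside, the check telnet performs here is easy to script; a rough sketch (Python purely for illustration - point it at whatever host and port your site is bound to):

```python
import socket

def check_port(host, port, timeout=3.0):
    """Attempt a TCP connect, roughly what 'telnet host port' does.
    Returns 'open', 'refused', or 'timeout'."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "refused"
    except socket.timeout:
        return "timeout"
    finally:
        s.close()

print(check_port("localhost", 80))
```

An immediate refusal, rather than a slow timeout, is the telling symptom in this case.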

Regardless of whether you start at the "top" and work your way down the application stack, or start at the "bottom" and work your way up, you eventually find the HTTPERR log files for http.sys, and they lead you to the answer: Google "Connections_Refused" and you hit the number one ranked result, by David Wang - HOWTO: Diagnose IIS6 Failing to Accept Connections Due to CONNECTIONS_REFUSED.  The hint to start at the "bottom" rather than the "top" is that there is no application-level response - no HTTP 500, or anything else for that matter.

If this is the problem you're experiencing, stop now, read the article, and see if that's the end of your journey.  What follows is just my account of backing up the information found in the article.

As I never seemed to make a connection to the web service, I looked in the HTTPERR logs.  Sure enough, there were the tattletale entries of "Connections_Refused", which led me directly to David's article.

Sample HTTPERR logs

[..snip..]

2007-11-13 20:01:44 - - - - - - - - - 3_Connections_Refused -
2007-11-13 20:01:49 - - - - - - - - - 4_Connections_Refused -
2007-11-13 20:01:54 - - - - - - - - - 3_Connections_Refused -
2007-11-13 20:01:59 - - - - - - - - - 4_Connections_Refused -
2007-11-13 20:02:04 - - - - - - - - - 3_Connections_Refused -
2007-11-13 20:02:09 - - - - - - - - - 4_Connections_Refused -

[..snip..]

 

Since I've never actually experienced running out of non-paged pool memory, I continued to follow the steps in the article to validate (I'm curious that way).  The first thing I noticed was that the server's non-paged pool usage was a little more than 109MB, as seen below.

 

 

Normally you have 256MB of non-paged pool memory available on an x86 Windows 2003 server; however, we were running with the /3GB switch enabled in the boot.ini, which cut that non-paged pool in half, to 128MB.  There was no requirement for the server to run with the /3GB switch, so we'll remove it at the earliest opportunity - but why the high memory utilization?

Running poolmon -b showed the kernel memory allocations (paged and non-paged pool) sorted with the highest allocations first.  Turns out, there was a driver that had allocated over 69MB of non-paged pool memory all to itself, as shown below.

 

Http.sys behavior is to stop accepting new connections when available non-paged pool memory falls below 20MB.  128MB - 109MB = 19MB available non-paged pool memory, if I read that correctly, which translates to "Connections_Refused". 
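To make that arithmetic explicit, here's a sketch (the 20MB floor is the http.sys behavior described in David Wang's article; 128MB is our /3GB-reduced non-paged pool):

```python
HTTP_SYS_FLOOR_MB = 20  # http.sys stops accepting connections below this

def available_nonpaged_mb(total_mb, used_mb):
    return total_mb - used_mb

def http_sys_accepting(available_mb):
    return available_mb >= HTTP_SYS_FLOOR_MB

available = available_nonpaged_mb(128, 109)
print(available)                      # 19
print(http_sys_accepting(available))  # False -> "Connections_Refused"
```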

This driver belonged to our virus scanning software, so I punted to our infrastructure group with a "What's up with this?" email - basically the contents of this post.  They flipped it to the vendor, who confirmed that yes, it is a problem, and yes, it's already been fixed.  As a matter of fact, the fix had been delivered to us the day before; we were just waiting for a regularly scheduled change window to apply it.

References

Saturday, November 03, 2007

Microsoft SOA and Business Process Conference - Day 4

Time Session
7:30am-8:30am Breakfast
8:30am-9:30am Modeling and Composition of Applications
10:00am-11:00am Advanced Orchestration Concepts and Best Practices
11:00am-1:00pm Lunch
1:00pm-2:00pm Robust Error Handling for BizTalk Solutions
2:00pm-2:45pm Taxi to Airport
5:25pm (PST)-12:00am (CST) Flight back to DFW and drive home!

 

As much as I like the dynamic duo of Clemens Vasters and Steve Swartz, the best presentation of today was Stephen W. Thomas on his "Advanced Orchestration Concepts and Best Practices".  Turns out, Stephen and I lived literally within 2 miles of each other, until he picked up and moved!

Microsoft SOA and Business Process Conference - Day 3

Time Session
7:30am-8:30am Breakfast
8:30am-9:30am Software Plus Services
10:00am-11:00am Visio as a Business Process Analysis Tool
11:00am-11:30am Lunch
11:30am-12:30pm Enterprise SOA: How SOA will Alter Corporate Software Buying Decisions
1:00pm-2:00pm Building Workflow Powered Services in .NET 3.5
2:30pm-3:30pm Reliable Messaging on the Microsoft Connected Systems Platform
4:00pm-5:00pm Using Visual Studio Team System to Deliver the Right Services at the Right Time

 

Best presentations today were Building Workflow Powered Services in .NET 3.5 and Reliable Messaging on the Microsoft Connected Systems Platform.

Finally got around to reviewing the DVD that was given out on the first day of the conference.  They delivered something called "Microsoft BizTalk Server 2006 R2 Training Classroom-in-a-Box".  This thing actually looks pretty good!  Of course, you've got your requisite VPC image and then a large number of targeted "How To" lab exercises.  I still need to review more of it, but it's 3 DVDs...!

Alan Smith presented the Reliable Messaging session and demoed a resequencer as a BizTalk 2006 orchestration.  Definitely worth a deeper look, as he is releasing the source code on his blog.  He also mentioned using an atomic transaction as a performance trick to keep the orchestration from dehydrating during the action - I need to mull over whether that would add value in one of our situations.

Went out and had the best beer I've had in a long time at the Rock Bottom Brewery right behind the Bellevue Westin.  Something with blueberries and raspberries...it was hmmmm good!

Thursday, November 01, 2007

Microsoft SOA and Business Process Conference - Day 2

Time Session
7:30am-8:30am Breakfast
8:30am-9:30am BizTalk Adapters for WCF: Deep Dive
10:00am-11:00am Service Lifecycle Management: Addressing the Challenges of the Provider/Consumer Relationship
11:00am-11:30am Lunch
11:30am-12:30pm Best Practices for Service Oriented Architecture
1:00pm-2:00pm Service Virtualization with .NET and BizTalk Server
2:30pm-3:30pm How Everyone Should Test Their BizTalk Server Based Solutions
4:00pm-5:00pm Building a Service Monitoring Infrastructure with BizTalk Business Activity Monitoring

 

Skonnard on the WCF BizTalk adapters was great!  He really communicated the flexibility of WCF, including performance gains over the older WSE.

Managed Service Endpoint was discussed in both the SLM and Virtualization talks.  MSE appears to allow providers to protect consumers from service change through the use of endpoint resolution and XSLT to transform message requests to newer formats. 

Best Practices for Service Oriented Architecture was a good review of the Microsoft Enterprise Service Bus.  A new code drop is on the way; don't bother with the current code on CodePlex.

Darren Jafford's talk on BizTalk testing reinforced that functional testing is feasible, but it would really - really - be great if it were more of a first class citizen in the IDE.  He also mentioned using the Visual Studio profiler OFF the desktop and on the server - I didn't know you could do that.  Evidently you can also do that with code coverage.  I will definitely need to check that out for all of the C# code running in our BizTalk projects.

BAM as a tool for monitoring service operations kinda helps put BAM potentially back in the picture.  I got a t-shirt for explaining why we aren't using BAM, and a lot of feedback on how it could give better insight into our in-flight service instances, as well as services outside of BizTalk.  Our 3PL tier could really benefit from something like this, but it will take a little thought to identify what we want to capture.  Jon openly stated that he would not use DTA for tracking, and would instead build a tracking infrastructure around BAM.

Wednesday, October 31, 2007

Microsoft SOA and Business Process Conference - Day 1

I didn't think that there would be any really big news this time around, but they announced "Oslo" during the keynote this morning.  I'm not going to go into all the details of "Oslo"; there aren't really all that many.

However, there are a couple of features that really get me excited: Modeling Tools and The Repository.

Modeling Tools

A significant issue during the development lifecycle is the sharing of models.  Today, we share them as static copies that we pass around or, if we are lucky, host in a central location.

Part of the modeling conundrum is that models are static.  Once you've got a model, you can't really do much with it.  Sure, there are vendor solutions that fill this void, but for the most part there is little enforcement of fidelity between the model and the instance (e.g. code constructs) with the available tools from Microsoft.  Visual Studio 2005 Team Architect was supposed to help close the gap in this area, with a caveat - anyone you share the model with has to have a copy of Team Architect.

One of the promises of "Oslo" is more robust modeling tools that enable sharing of models from Analysts (Business Process Model) to Architects (Service and Contract Models) to Developers (Workflows and Orchestrations Models), to Operations (Deployment Models and Monitoring Models).

The tools that actually generate the models will still be different, yet these models will be published to The Repository where relationships can be established between the different models and reported against.

The Repository

The Repository is where it all comes together.  This is the host for the models, but it's even more than that.  It will be the centralized location for discovery, service resolution, configuration and workflows.  Well, if we are storing workflows in The Repository, why not other code?  It sounds like we could deploy code to The Repository, where it gets picked up by a process host (defined by a deployment model) and executed.

That's cool...no more deploying code across the enterprise; deploy it to a central location and have the deployment model define where it gets executed.  Sounds too good to be true.  "Oslo" is an ambitious release - time will tell how it actually turns out.  We can expect betas early in 2008.

More Information

Check out BizTalk Community Blogs via Syndication for up-to-date information.

BizTalk Services

This is an area that really looks interesting.  I especially like the idea that I can prototype services 'in the cloud' and then move them to my own infrastructure, or another hosting provider.

Current services are Identity and Connectivity, with Workflow on the horizon.

BizTalk Roadmap

BizTalk looks like it will definitely evolve around WCF and WF.  That's right: knowing just .NET will no longer be enough to pull off BizTalk development.  In the future, you're going to want to be comfortable with Windows Communication Foundation and Windows Workflow.  Yes, the future workflow engine of BizTalk will be WF.  That being said, we've got a commitment from Oliver Sharp (GM, Connected Systems Division) that there will be support for existing messaging and orchestration assets in the next version.

When is the next version shipping?  I swear I had a verbal commitment from members of the BizTalk Product group on a product delivery every two years: 2004, 2006, 2008....uh, 2008?  Doesn't look like it.  It looks more like 2009.  We'll see on this one as well.

Connected Systems Division Focus Group on Documentation

This was a last minute invitation to represent 'corporate' America.  It was somewhat humorous to be sitting at a table with close to a dozen MVPs - I kinda felt like a party crasher, but they made us feel welcome!

Minutia

My portable HD died on me this morning.  This thing has all my music and, more importantly, all my virtual machines.  *sigh*  Luckily, this is all at the house, but it leaves me limited in my experimenting with technology this week.

My phone was also on the fritz and not receiving email, which left me with an odd 'disconnected' feeling.

One last comment on today...there is definitely more vendor-speak and 'marketecture' than I remember from last year.  Last year, the sessions sponsored by vendors were a little more obvious than they were today.

Once I realized that I was hearing the same repetitive reasons why we need SOA governance, with no hint of how we should go about enabling it (not counting the slides showing screenshots of a vendor's product), I needed to pack up and leave - but didn't.  I won't make that mistake again!

Monday, October 29, 2007

Hello Bellevue, WA

I've only been here a handful of times (like 3 - guess that qualifies for several), yet the landscape continues to change with each visit.  It's certainly becoming more metropolitan.  It's a great little city, and there is definitely fall weather!


I am here...



Saturday, October 27, 2007

BizTalk Licensing

This is old news, but I still get the raised eyebrow look often enough that I thought it was worth re-iterating.

The introduction of multi-core processors has sent the software industry into a tizzy.  Should vendors license by the CPU core?  Or by the CPU socket?  This is one of those things that seems to confuse people because of the ingrained 'per CPU' licensing model that has been around forever.

Motherboards have long supported multiple sockets.  It used to be that one socket == one CPU.  Now we have multi-core CPUs to contend with when budgeting.  Dual core CPUs are commonplace, with quad core here today and even more cores per CPU on the horizon.

If I have a single socket motherboard and drop in a dual core CPU, the operating system will see two CPUs, but we are really only using the space, and energy, of one.  Typically this == more capacity, or more "bang for your buck".

Some vendors erred on the side of the greedy and chose to license by the core, as this is typically what their application 'sees' when it's running.

Microsoft Licenses per Socket

Microsoft has taken the stance that they will license by the socket.  This is a breath of fresh air, considering how we usually watch budget dollars waste away because of the stacking cost of client access licenses (CALs) - of course, everyone has to eat, right?

As someone who has delivered solutions for the enterprise for six years now, it's easy to grumble at the cost of software, though I have been on the ISV side and can empathize with the need to get paid.

The Importance of per Socket Licensing

Anyway, why is this important?  Let's say that we have delivered a solution that uses 3 BizTalk servers, running Enterprise Edition, each server supporting a single dual core CPU.  We pay for 3 BizTalk licenses (1 for each socket), but get the processing power of 6 CPUs.

Then, during testing, we discover that we are too close to capacity for the business today and have yet to account for growth.  We need to take those BizTalk servers from '2 way' to '4 way' - meaning from 2 CPUs to 4 CPUs worth of processing power.  Adding an additional dual core CPU to each existing server would be the cheap way to go, EXCEPT we would then have to pay for 3 additional BizTalk licenses.  Ouch!  No money left in the budget for that.

With a little luck, it turns out the motherboards will support the newer quad core CPUs.  Talk about serendipity!  We pull the dual core CPUs from the BizTalk servers, give them to another server application that needs them, and drop a single quad core CPU into each server.  Voila!  Processing power has gone from 6 CPUs to 12 CPUs (for a nominal hardware cost) with no expensive BizTalk licenses to choke on.  The three servers still each contain only a single socket quad core CPU, and so require only 3 BizTalk licenses.
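The scenario above reduces to simple arithmetic; a sketch (numbers taken from the example, using the per-socket licensing policy described earlier):

```python
def licenses_needed(servers, sockets_per_server):
    # Microsoft licenses per socket, not per core
    return servers * sockets_per_server

def processing_power(servers, sockets_per_server, cores_per_cpu):
    return servers * sockets_per_server * cores_per_cpu

# 3 single-socket servers with dual core CPUs
print(licenses_needed(3, 1), processing_power(3, 1, 2))  # 3 licenses, 6 cores

# add a second dual core CPU to each server: double the licenses
print(licenses_needed(3, 2), processing_power(3, 2, 2))  # 6 licenses, 12 cores

# swap in a single quad core CPU instead: same licenses, more power
print(licenses_needed(3, 1), processing_power(3, 1, 4))  # 3 licenses, 12 cores
```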

Tuesday, October 23, 2007

Logman

Have you ever wanted to automate the starting/stopping of performance (PERFMON) logging?  Are you a command line kinda guy/gal?  Then look no further!

Microsoft® Logman.exe (6.0.6000.16386)

Usage:
  C:\Windows\system32\logman.exe [create|query|start|stop|delete|update|import|export] [options]

Verbs:
  create                        Create a new data collector.
  query                         Query data collector properties. If no name
                                is given all data collectors are listed.
  start                         Start an existing data collector and set the
                                begin time to manual.
  stop                          Stop an existing data collector and set the
                                end time to manual.
  delete                        Delete an existing data collector.
  update                        Update an existing data collector's properties.
  import                        Import a data collector set from an XML file.
  export                        Export a data collector set to an XML file.

Options:
  -?                            Displays context sensitive help.
  -s <computer>                 Perform the command on specified remote system.
  -config <value>               Settings file containing command options.
  -ets                          Send commands to Event Trace Sessions
                                directly without saving or scheduling.

I picked up the use of this particular utility during my very first lab engagement with Microsoft.  A team member was using it to automate our test process - one that I've further refined and automated over the years.  Check out a previous post: Test Process Template.  Ah, the memories...that was SUCH a learning experience!  I highly recommend it to anyone.

Anyway, once you've created your perfmon counter log, it's as simple as:

logman start %logname% [-s %computer%]

For example, I've got a log called 'BizTalk' in our lab environment.  In my start/stop scripts it's:

logman start BizTalk

The machine that is recording performance data is also the 'automation' server, so the collection is local.  As one of the NANT tasks during the shutdown of the test process, it issues the stop command:

logman stop BizTalk
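If you're wrapping these calls in a script harness, the command lines are trivial to assemble.  A hypothetical helper (the log name mirrors the usage above; the remote machine name is made up):

```python
def logman_cmd(action, logname, computer=None):
    """Build a logman command line; -s targets a remote system."""
    parts = ["logman", action, logname]
    if computer is not None:
        parts += ["-s", computer]
    return " ".join(parts)

print(logman_cmd("start", "BizTalk"))             # logman start BizTalk
print(logman_cmd("stop", "BizTalk", "LABMON01"))  # logman stop BizTalk -s LABMON01
```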

SynAttackProtect=0 Solves Connectivity Issues

We had been experiencing on-again, off-again connectivity errors in our test lab.  Sometimes there wasn't even any load placed on the system when the error occurred.  The two systems under test that experienced the connectivity issues were BizTalk Server 2006 Enterprise Edition and J.D. Edwards 8.11 with 8.96 tools - I don't remember the exact rev at the moment.

  • Connectivity loss for BizTalk would generally show up as a 'General Network Error' (GNE) in the Windows event log while trying to talk to any of the BizTalk databases: EntSSO, BizTalkMsgBoxDb and BizTalkDtaDb. 
  • Connectivity loss for J.D. Edwards would occur with a message in the log file stating 'Communication Link Error'.  Then there would be a cycle of messages as the application re-established a connection to the database.

Of course, the SQL Server did not report any connectivity loss, because we believe the connection was never actually established.

In a complete moment of 'doh!' I remembered the TCP/IP registry key SynAttackProtect.  We applied it to the lab environment and things seem to have settled down.  However, since I do not have a netmon trace of when the error actually occurred, I can't say whether the conditions we experienced were indicative of the server refusing connections through a reset, or what.  We're getting beyond the realm of my experience here, and I would need to call in our network team to analyze further.  While I'm not beyond wanting to confirm, schedules may dictate otherwise...so your mileage may vary.  Anyway, our current production BizTalk and SQL tiers deploy with SynAttackProtect=0, so this change is in line with our build server documentation.

The default value of SynAttackProtect under Windows 2003 servers is '1'.  Setting it to '0' disables the protection, something you can probably do if you are isolated from a public network.
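For reference, the value lives under the Tcpip parameters key.  The change looks roughly like this (a sketch - only do this where you're isolated from public traffic, and expect Tcpip parameter changes to require a reboot):

```
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v SynAttackProtect /t REG_DWORD /d 0 /f
```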

 

Script to Get Running BizTalk Host Instance Name

Code snippet dump!  Our enterprise monitoring software needed to be able to iterate the running instances of BizTalk so that it could sample the appropriate performance counters.  This is the 'alpha' version we are running with.  Because the performance counters are registered on a per instance basis, we needed the 'name' of the host and not necessarily the GUID associated with it.  Since administrators could potentially move host instances from node to node, or create new ones, etc. (however unlikely), we wanted something that we wouldn't have to remember to maintain if an administrator changed the location of a host instance at 3am.

Other proposed options included:

  1. Iterate the BTSNTSvc.exe processes and parse each command line looking for the host name - this tells us the running instances.
  2. Use the BizTalk WMI model to query for the information we needed - this is a fairly rich object model.
  3. Iterate the HKLM\SYSTEM\CurrentControlSet\Services registry node, looking for nodes that start with 'BTSNTSvc$' and parsing that value - though this doesn't tell us whether the host instance is actually running.

The code below expects to execute on the machine it's querying, a requirement of the execution model of our monitoring software.

Code: List Running BizTalk Host Instance Names

Option Explicit
 
'
' MSBTS_HostInstance.ServiceState valid values.
' More information regarding MSBTS_HostInstance can be found on MSDN:
' http://msdn2.microsoft.com/en-us/library/aa560660.aspx
'
' Stopped:1
' Start pending:2
' Stop pending:3
' Running:4
' Continue pending:5
' Pause pending:6
' Paused:7
' Unknown:8
 
Main
 
 
Sub Main
 
    dim hosts
    dim host
    dim computerName
 
    computerName = GetLocalComputerName()
 
    ' returns a WMI object set of MSBTS_HostInstance objects
    Set hosts = GetRunningBizTalkHosts( computerName )
 
    for each host in hosts
        WScript.Echo host.HostName
        ' do something for each host
    next
 
End Sub
 
Function GetLocalComputerName()
    dim wmi
    dim systemInfo
    dim system
    dim computerName
 
    Set wmi = GetObject("winmgmts:\\.\root\cimv2")
    Set systemInfo = wmi.ExecQuery("Select * From Win32_ComputerSystem")
 
    ' ExecQuery returns a collection; grab the name from the first (only) item
    for each system in systemInfo
        computerName = system.Name
        exit for
    next
 
    GetLocalComputerName = computerName
End Function
 
 
Function GetRunningBizTalkHosts(machineName)
 
    dim hosts
 
    ' ServiceState = 4 filters to host instances that are currently running
    Set hosts = GetObject("Winmgmts:!root\MicrosoftBizTalkServer").ExecQuery("SELECT * FROM MSBTS_HostInstance WHERE RunningServer = '" & machineName & "' AND ServiceState = 4")
 
    Set GetRunningBizTalkHosts = hosts
 
End Function
 
 



Friday, September 14, 2007

Microsoft SOA and Business Process Conference 2007

Looks like a co-worker and I have been 'green-lighted' for the conference, and travel arrangements are being made.  I'm glad I've got some backup for this trip!  Hopefully there will be a little less bouncing among the different tracks, but probably not too much.

Tim Rayburn will be speaking at the conference on the topic of BizTalk Server 2006 R2 and HIPAA.  I imagine that healthcare is an interesting space to be in - it's been a while, but I've worked for two practice management software companies in the past that used the services of different insurance claims clearinghouses.  I've often thought that would be a great application for the BizTalk of today.  The routing of claims to various carriers was always interesting enough, but then stack on top of that the different business rules that were applied as a claim was routed for processing (which often changed per carrier), and the remittance requirements back to the originator.  Cool stuff!

We'll be staying at the Bellevue Westin, which is walking distance to at least one of the social events, if I remember correctly.  If you're staying at the Westin, or just want to chat, drop me a line.  One of the great things about the last conference, besides the great content, was the chance to hear how people used BizTalk and [insert your favorite TLA here] to solve their business problems!

...and what would a trip to the Redmond area be without getting my fix of Talking Rain carbonated drinks!  I've got an unhealthy preoccupation with this product line, although I notice that 'grapefruit' is not part of the current line up!?  Say it ain't so!  Although, 'Spearmint Mojito' might be interesting...hmmmm.

 

Register for the Microsoft SOA and Business Process Conference for 2007 here!

Friday, September 07, 2007

Guys Weekend - The Potato Gun

Every once in a while my Dad, my Brother and I get together just to goof off outside the scope of normal family visits.  A chance to just relax, swap stories, brainstorm how to change the world, and act, well, like kids.

My Brother owns a house overlooking Lake Travis in Jonestown, TX [northwest of Austin], he's got the nice boat, a great entertainment center, you know...all the toys.  What he's lacking is a good guest bed and some food on the shelves [bachelor], but that can all be remedied.  Dad usually arrives on his motorcycle [from Conroe] at the crack of dawn and I'll roll in around noon [from Dallas].

Prior to arriving, my Dad forwarded the following video of a 'potato cannon' to us.  It was to become our inspiration for the trip.  I've linked to it below in case you're interested.

 

Once we saw the video [you can stop after a couple minutes, you'll get the point], the inner child in us was raring to go!  We bought all the necessary components from the local True Value hardware store, along with the requisite 10lbs of potatoes and a can of 'Aquanet'.  Less than an hour [and a few beers] later, we had built the toy du jour, and it was awesome.  You can see some of the pictures below, and hopefully Dad will rip the video he took off his camera and get it to us so that we can see it.

This contraption literally shot a potato a hundred yards down the side of Jason's property! 

Now, we realized that we were doing something dangerous [and quite possibly illegal], so we rationalized who would go first.  We couldn't let Jason go first; he was the only one of us with any medical training [over 10 years as a paramedic], and if anything went wrong we'd need him.  They wouldn't let me go first because I've got a wife and two kids.  That left Dad.  Dad was barely able to contain himself, just waiting for us to quit fooling around and give him the damn thing...

Of course, after all that deliberating [inebriating], Jason goes and sticks his face right up to the combustion chamber just as I am testing the 'spark' for what we thought would be a misfire.  Well, the picture can be found in the group below!  The irony is that we've grown up with guns our entire lives and gun safety has always been a priority.  

 

Guys Weekend 2007 - The Potato Cannon

We enjoyed ourselves so much that we are going to each make one, with a little different design and definitely with a personal touch.  Jason has already got his Halo design worked out. 

We also spent a lot of time rocking the hilltop on Guitar Hero 2!  I've got to get this game - it is still the best I've played in a long time.  I still can't believe they've yet to release the wireless controller for the Xbox 360!  What kind of sick joke is that?  Anyway, that's what I'm holding out for.  Oh, and Dad won the cards with an all-in late in the evening.

I had to cap off the weekend with a hangover and a burning desire to get back to my family [two sick kids], so I called it early on Sunday.  Dad, Jason [and crew] still took Luscious out on the lake on Monday.  I've yet to hear how that went.

Till next time..

Wednesday, September 05, 2007

Sea of Technology

I'm overdue for a rambling post, so here you go...

Just saw this article: InfoQ - Interview: Dino Chiesa on Microsoft's SOA Strategy (from BizTalk Server Team blog).

Of particular interest is the Microsoft ESB Guidance and the eBook ‘SOA in the Real World’.   By the very nature of BizTalk, much of the work we do falls into the ‘service oriented’ design category. 

However, BizTalk does not an SOA (or ESB) make.  BizTalk can certainly be a core platform for delivering service oriented solutions and it certainly can be a key component for building a service bus. 

This isn't a dive into how to develop service oriented solutions with BizTalk, or how to build an ESB with BizTalk.  It's mainly about what skills you need to develop in order to swim in a sea of technology.

One thing that continues to be evident is that it is not really possible to be just a 'BizTalk' developer without having a good grasp of at least the Microsoft technology stack.

The breadth of technologies I've had to keep up with just seems overwhelming compared to back when I was either just the 'UI Guy' (C++, Win32, MFC, a good understanding of the message pump and subclassing Windows controls) or a 'Server Guy' (C++, Win32, STL, and UI technologies be damned).

This is the short list:

  • .NET Framework - BizTalk is built on .NET, and without a good understanding of .NET you won't be able to fully leverage the extensibility of BizTalk.
  • SQL Server - who doesn't need to store data?  Who hasn't done this badly?  A badly implemented data tier (including data access logic) can quickly become one of the most insidious of bottlenecks.
  • XML - this is a technology stack unto itself.  A working knowledge of XML, XML Schema, XPath and XSLT is pretty much required.
  • SOAP, WS-* - pay attention to WS-BasicProfile and WS-Security.  The rest are worth a look if your situation calls for it.  Be wary about your interoperability story if you have heterogeneous clients consuming your services.
  • .NET Web Services (a.k.a. ASMX) - if you have any services in production, they are likely of the ASMX flavor.  The initial service workhorse.  Getting at least a passing familiarity with the TCP/IP and HTTP protocols really helps in understanding what's going on at the 'wire'.
  • Windows Communication Foundation (WCF) - the WCF adapter is baked into the BizTalk 2006 R2 release.  If you're writing services today, WCF is the 'foundation' you're likely starting with.  Understanding the WCF programming model is key.
  • Windows Workflow - the likely successor to the BizTalk orchestration engine.  Actually, Tomas Restrepo has a good post that will make you go hmmmm...check out 'The Future of BizTalk/WCF/WF'.  I've not realized the full potential of WF yet...but I know I want some of its IDE features in BizTalk!  This is an area to keep an eye on for growth.
  • Debugging - learning what to do when it hits the fan, and it will hit the fan.  It's just a matter of time, and it won't always be nice enough to occur in a development environment.  Oh no, siree - it will wait to occur at your business's critical time of the year/month/week/day, and it will hurt, and you will have to fix it.

And let's not forget the plethora of bits that sit above these on the stack: Enterprise Library, the aforementioned Microsoft ESB Guidance, testing, source control (e.g. TFS or Subversion), security - the list goes on.

Oh yeah...that's why they pay us.  Ok, then.  Well, where do you start?  Check out the following resources:

It starts with code.  Code.  Code.  Build it.  Analyze it.  Learn from it.  Rinse and repeat.

Well, I've rambled enough for one night.

Friday, August 31, 2007

Quick Overview of Coordinated Universal Time by Scott Mitchell

This article, 'Using Coordinated Universal Time (UTC) to Store Date/Time Values', by Scott Mitchell, provides the reader with a good working knowledge of why using UTC time is important.  As we begin spanning the globe in a conquest of world domination, it's become necessary for developers, business analysts, and yes, managers, to understand the significance of 'when' something happened relative to a specific time zone and, just as importantly, to be prepared to deal with it. 

Scott's article provides a nice reference to pass around to get people jump started before delving into more technical gooeyness.
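The article's core advice can be sketched in a few lines; Python is used here just for brevity (the same idea applies to DateTime.UtcNow in .NET). The specific offset below is illustrative only:

```python
from datetime import datetime, timezone, timedelta

# Store timestamps in UTC so 'when' is unambiguous across regions.
occurred_at_utc = datetime(2007, 8, 31, 14, 30, tzinfo=timezone.utc)

# Convert to a viewer's local zone only at display time.
# US Central Standard Time (UTC-6) used here as an illustrative fixed offset.
central = timezone(timedelta(hours=-6))
local_view = occurred_at_utc.astimezone(central)

print(local_view.hour)  # 8 -- 14:30 UTC is 08:30 in UTC-6
```

The instant in time never changes; only its presentation does, which is exactly why the stored value should be UTC.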

Thursday, August 30, 2007

Testing Guidance

With the release of Performance Testing Guide for Web Applications I realized that there were a couple of testing related resources that I tend to refer to pretty consistently.   By listing them here I'm creating a 'favorites', of a sort, so that I can quickly get back to them. 

 

Reference Material

 

Tools

Saturday, August 25, 2007

Test Process Template

No matter what type of testing you might be doing, it helps to have as part of your process a list of scripted steps to follow.  Once identified, the list of steps will help make sure that you have consistency between test runs, and if automated, helps to get you back into a position to test again as soon as possible.

The following template is one that I've used for several years with success.  What really helps you get the extra mileage out of it is automating all, or most, of it using whatever scripting technologies work for you.  My current personal favorite is NAnt as the execution engine, driving a suite of VBScript, or even .BAT, files that actually execute each specific step.

NAnt's foreach construct makes it very easy to loop over a server list, getting consistent results across the board. 
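As a sketch of that idea (the server names and script path are hypothetical), a NAnt target using the <foreach> task might look like:

```xml
<target name="capture-config">
  <!-- iterate over a comma-delimited server list, running the same step on each -->
  <foreach item="String" in="server01,server02,server03" delim="," property="server">
    <exec program="cscript.exe">
      <arg value="capture-config.vbs" />
      <arg value="${server}" />
    </exec>
  </foreach>
</target>
```

Each numbered step below can be its own target wired together this way, so a whole test run resets with a single NAnt invocation.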

PRE-Test

  1. Initialize Repository
    1. Create the folder(s) that will be the repository for information from this test run.  The name will probably be something like a test label plus a run number.  If you're doing a lot of different types of testing, then it will make sense to have more meaningful labels.  Otherwise, 'TESTRUN1' will likely work just fine.  The important takeaway is that you need a friendly way to refer back to all the data that you will be collecting.
  2. Capture Configuration
    1. Copy all configuration files to the repository.  The purpose of capturing the configuration is that you may need to correlate test results back to a specific configuration that was in effect during a particular test run. Examples of configuration files are machine.config, web.config, app.config, and .INI files.  Consider exporting any registry settings to .REG files.  If you're using non-.NET applications and services, then look for those files as well; Apache and WebSphere are two very file-centric server applications.
    2. Clear, or delete, any log files, so that any event notifications received are known to be from this particular test run.
  3. Start Services
    1. Identify the order in which the services need to be started, then start them in preparation for the test run.  These services are anything that your application needs to function: IIS, BizTalk, ERP or WMS services.
  4. Start Monitoring
    1. Make sure that your monitoring scripts fire up as well and that you are collecting data from the run.  Unless you're specifically measuring startup activity, you may wish to discard, or ignore, the 'warm-up' period of your application.   

Test

  1. Begin any test scripts that will be driving the load for your applications.  This could be an automated testing tool such as SilkPerformer, LoadRunner, or LoadGen, or even jobs fired from Windows Task Scheduler.   

POST-Test

  1. Stop Services
    1. If there is an order in which the services needed to be started, you will most likely want to reverse that order for shutdown.  Many times you will need to stop services in order to execute the data collection.
  2. Data Collection
    1. Copy all log files to the test repository (e.g. application logs and server logs such as IIS)
    2. Copy all Windows event logs to the repository
    3. Copy all performance (Perfmon) logs to the repository
  3. Clean up
    1. You probably have lots of 'data bits' floating around the test environment.  Examples might be files in folder locations, MSMQ messages, rows in database queues, etc.
    2. Reset your data tier to a known good state.  If your data lends itself to just flipping a status bit to get back to a known good state, then execute the SQL to do so.  If not, then you will want to restore your application's 'golden' backup.
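The data collection steps above are mostly file copies, so even plain .BAT files go a long way. A sketch of the collection step (all paths and share names are hypothetical):

```
rem collect-logs.bat -- copy run artifacts into the test repository
rem usage: collect-logs.bat \\server01\testruns\TESTRUN1
set REPO=%1
xcopy /s /y "C:\WINDOWS\system32\LogFiles\W3SVC1\*.log" "%REPO%\iis\"
xcopy /s /y "D:\MyApp\logs\*.*" "%REPO%\app\"
xcopy /s /y "C:\PerfLogs\*.blg" "%REPO%\perfmon\"
```

Run the same script against each server (via the NAnt server-list loop, for instance) and the repository ends up with a consistent layout for every test run.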

Friday, August 24, 2007

Identifying BizTalk Hosts

Nick Heppleston has a post on how to identify the BizTalk host by correlating the process identifier (PID) to the results of executing tasklist.  Check out his post for the details.

I'd like to offer up an alternate method that is pretty much the method our team uses: using Process Explorer found in the newly christened Microsoft SysInternals Suite.

From an operational perspective, it's pretty easy to fire up Process Explorer in place of Task Manager to get a wealth of information about the processes running on your system.  Check out the Windows SysInternals Suite site for more information and downloads.

Each BizTalk host is started with a set of command line parameters passed to it.  One of these parameters will be the name of the BizTalk Host that the process is an instance of.  You can view these command line parameters via Process Explorer and identify which hosts are consuming the most CPU, Memory, etc.
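If Process Explorer isn't handy, the same correlation can be done from a command prompt via WMI. BizTalk host instances run as BTSNTSvc.exe, and the host name appears in the -name parameter on the command line:

```
wmic process where "name='BTSNTSvc.exe'" get ProcessId,CommandLine
```

Match the ProcessId against whatever Task Manager or Perfmon is telling you, and the command line tells you which host it is.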

Once you're running Process Explorer, add the command line column (right-click the column header -> check the Command Line option), and that view will be saved as part of your user preferences. 

Here is a snapshot:

BizTalk host instance in Process Explorer

Notice that you can see the two BizTalk host instances 'Tracking' and 'MessagingReceive' that are running on this server.

Thursday, August 23, 2007

BizTalk Testing - Part 2: Mocking an Endpoint

Last time, we talked about a couple of different methods to create the load for your BizTalk solutions.  Check out BizTalk Testing - Part 1: Generating Load for some ideas.

BizTalk Server is an integration platform at its core, so it makes sense that you will have requirements to test the integration between two, or more, systems.  When I say test, I'm also referring to a volume test, not a functional, or validation test.

Unless you've lucked out and are only dealing with the simplest of application integration scenarios, that of one-way delivery to some sort of durable storage (File, MSMQ, SQL), you're likely to have to correlate your request message to some sort of application-level response.  I'm not talking about a protocol acknowledgement (TCP/IP, HTTP, or MSMQ).  I'm talking about an application-level acknowledgement, where the target system acknowledges the successful receipt, and sometimes processing, of a message.  This is pretty standard behavior with many key enterprise systems (CRM, ERP, WMS). 

For instance, suppose you have an orchestration which subscribes to the Item message and executes the following itinerary:

  1. Receive Item
  2. Transforms Item to a format the WMS system can understand, e.g. WMS Item
  3. Send WMS Item
  4. Receive Correlated Application Acknowledgement
  5. On Error call a common error handling strategy

As long as the target system responds, your process gracefully finishes executing.  However, if it never responds with the acknowledgement, then messages will continue to stack up in the Message Box and you'll have a bunch of half-completed orchestrations.  What are we supposed to do then?

What we need is a way for BizTalk to get the messages it needs to keep the heart of your messaging bus in a healthy pumping state.

This is where mocking the endpoint becomes crucial to being able to test the integration tier.  Wikipedia defines a reason to use a mock as "mock objects can simulate the behavior of complex, real (non-mock) objects and are therefore useful when a real object is difficult or impossible to incorporate into a unit volume, or load, test. If an object has any of the following characteristics, it may be useful to use a mock object in its place: ...when the real object does not yet exist or may change behavior."

While I think that developing a WMS system would be a grand challenge, that's usually not what I'm thinking about when I'm trying to test integration between an ERP and a WMS system.  I'm less concerned with writing the code to actually receive and put away inventory than I am with being able to generate the appropriate response to the originating message request.  Sometimes the messages from the endpoint system actually originate there, e.g. inventory transactions notifying ERP of inventory changes (breakage, spillage, quality holds, etc).  These are not a response to a request, but are still something to consider when 'mocking' an endpoint.

Using Xml, XSLT, a little custom code, and Windows Task Scheduler, you can quickly build a 'responder' that allows you to focus on testing what's important to you: the integration.  It's actually easier than you might think.  Of course, if it is your job to actually test all those WMS processes, then you've got your work cut out for you. 
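To make the responder idea concrete, here is a minimal sketch (Python for brevity; the element and namespace names are made up): it inspects an inbound message and fabricates the correlated application acknowledgement that the waiting orchestration expects.

```python
import xml.etree.ElementTree as ET

NS = "http://example.com/wms"  # hypothetical target-system namespace


def build_ack(request_xml: str) -> str:
    """Inspect an inbound WMS Item message and fabricate the
    correlated application acknowledgement the caller is waiting on."""
    root = ET.fromstring(request_xml)
    item_id = root.findtext(f"{{{NS}}}ItemId")  # correlate on the item id
    ack = ET.Element(f"{{{NS}}}ItemAck")
    ET.SubElement(ack, f"{{{NS}}}ItemId").text = item_id
    ET.SubElement(ack, f"{{{NS}}}Status").text = "RECEIVED"
    return ET.tostring(ack, encoding="unicode")


request = f'<Item xmlns="{NS}"><ItemId>12345</ItemId><Desc>Lipstick</Desc></Item>'
print(build_ack(request))
```

In practice you'd drop something like this behind a file or MSMQ watcher fired by Task Scheduler, with an XSLT transform in place of the hand-built element tree; that's essentially what TCopy, described below, does.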

Been There, Done That

Yes, you can skin this cat a couple of different ways.  The latest incarnation that we've gotten some mileage out of is what we call 'TCopy', short for transport copy.  It allows us to copy files from one transport medium to another, with message inspection to apply transforms if required (most notably for application acknowledgements).  It supports FileToFile, FileToMSMQ, MSMQToMSMQ, MSMQToFile, and I believe that we recently added FileToHTTP (one way), though I haven't used it.  Its problem?  It has only coarse control over the arrival rate for messages generated by the endpoint.  It's simply a command line program (C#) that takes input from some sort of scheduler (Windows Task Scheduler), which is how we control the arrival rate (coupled with a batch size). 

Here is a sample of just some of the command line options:

// copy from a specific file to MSMQ
//
// in:file://c:\temp\input.xml out:msmq:\\DIRECT=OS:server01\private$\shipment

// copy from a specific file to a file folder
//
// in:file://c:\temp\input.xml out:file://c:\temp

// copy the first 10 files from the input folder to the output folder
//
// in:file://c:\temp\inputFolder out:file://c:\temp count:10

// copy the top 10 messages from the input queue to the output folder
//
// in:msmq:\\DIRECT=OS:server01\private$\orders out:file://c:\temp count:10

// copy the top message from the input queue to the output queue
//
// in:msmq:\\DIRECT=OS:server01\private$\orders out:msmq:\\DIRECT=OS:server01\private$\orders_success

Note, the application configuration file supports 'registering' a message type so that an XSLT transform is applied prior to delivering the message to the destination. For example:

<appSettings>
<add key="xsl/http://schemas.marykay.com#order" value="c:\test\xslt\order to order success.xslt"/>
</appSettings>

In the above, any time the transport engine identifies a message with a root node of 'order' and an xmlns of 'http://schemas.marykay.com', it will apply the XSLT transform identified by the configuration prior to delivering it to the target endpoint.  Anyhow, it can be an incredibly useful piece of code when trying to 'mock' an application response that might not be available.  I've seen too many testing efforts stall because an application wasn't available (for a myriad of reasons) when the integration team was available for testing.  Or, worse, seen them come to a complete stop because of the mentality of 'we can't test because we don't have system x available'.  If you break the problem down into data and protocol, you might find that you can use a method like the one outlined above to get some level of testing done.
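For illustration, a transform registered that way might look something like the following (the element names are hypothetical; the real 'order to order success.xslt' will differ):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:mk="http://schemas.marykay.com">
  <!-- turn an inbound order into the acknowledgement the caller expects -->
  <xsl:template match="/mk:order">
    <orderSuccess xmlns="http://schemas.marykay.com">
      <orderId><xsl:value-of select="mk:orderId"/></orderId>
      <status>SUCCESS</status>
    </orderSuccess>
  </xsl:template>
</xsl:stylesheet>
```

The point is that the 'mock' logic lives entirely in data (the XSLT), so changing the shape of the acknowledgement never requires recompiling the transport engine.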

Sunday, August 19, 2007

BizTalk Testing - Part 1: Generating Load

Why does everyone (or is it just me) refer to this tool as "BizTalk" LoadGen?  There is nothing BizTalk-ish about it.  You're right, it's not part of BizTalk, but all applications of it seem to be associated with BizTalk.  Why is that?   You've spent months devising plans on how to take over the world using BizTalk, but hey, guess what?  If no one pokes your BizTalk implementation with a stick (external stimuli, a.k.a. a message), what does your implementation do?  Nothing.  BizTalk can be an insatiable, slathering beast when it comes to message consumption, but if you're not feeding it, it will slumber away in the darkness waiting for that sharp poke.

Let's pretend that your organization integrates Item lifecycle events (add, update, retire) to several different applications throughout the enterprise.  You've built all of your integration architecture on top of BizTalk and now you're ready to test those BizTalk artifacts.   Unfortunately, the application that sources these events to your messaging bus is unable to be used to generate the load for your testing.  This used to mean you had to go write some sort of test harness that could generate that load for you. 

I've seen quite a few simple (but slick!) methods of generating this load.  Some of these methods are:

  • The Grass Roots Method: Having a file directory full of the data files that will be copied to a File receive location, using a batch file (or vbscript, jscript, etc) that is scheduled to execute using Windows Task Scheduler.
  • The Roll Your Own Method: Writing custom code (test harness) to do the file copies, similar to above, but using code.  Maybe here threading has been introduced to have multiple threads doing copies.  We probably also had the need to push data using multiple protocols (MSMQ, HTTP, etc).  Of course, now we've introduced complexity that we shouldn't be spending time on (but can be fun nonetheless). 
  • The Complete Package:  a full blown automated testing solution such as Silk Performer, LoadRunner, or Microsoft Team Edition for Software Testers.  Now, unless your development staff cross trains as Test Engineers, or you have them available, this can be even more complex.

These testing solutions will all work to varying degrees.  However, you may need to consider a fourth method that sits between the Roll Your Own Method and the Complete Package:

  •  The Pareto Method: Microsoft's free LoadGen.  It is more robust than the Grass Roots Method, removes most of the need for maintaining an often buggy Roll Your Own Method, and is much less expensive than a Complete Package.  A fresh install will cover 80% of what you probably need when testing some basic application protocols: File, HTTP, MSMQ, SOAP, WSE.  The key is that all the dirty details of controlling arrival rates are taken care of for you in the core product.  You can also extend the tool if necessary in some very key areas, namely Transport, Message Creation and Monitoring.  There is only minimal documentation, so if you need to figure out how to write a new Transport for LoadGen, fire up Reflector and you'll be writing the transport for TCP/IP, JMS, or whatever else you might need. 

 

Real World Example

Let's take another look at our Item integration briefly described above.  Given that one Item looks much like another at the messaging layer, and that we are only interested in the load placed upon the system, we could conceivably replace the source system with LoadGen.  We would need a single instance of an Item message for LoadGen to work with, but then we could have it generate as many messages as we want to test with.  It will be repeatable and we won't have a dependency on the source application. 

A specific test case that we needed to execute recently was generating 30K of these Item messages that would be sent across our WAN to a remote data center.  We had the primary goal of monitoring the impact of the messages on the network, but a secondary goal of monitoring our BizTalk implementation.  In this case, the load agent was a machine running LoadGen publishing Item messages to a local BizTalk server group.  BizTalk then sent the messages across the WAN to a web service gateway in a remote data center.  There is another BizTalk server group waiting to broker those messages in the remote data center, but for the purposes of our current test, we are done. 

In the grand scheme of things, 30K messages flowing through BizTalk is nothing (ok, it depends on the infrastructure backing your solution); however, as with anything but the simplest BizTalk implementation, there are many moving parts (Pipeline Components, Functoids, Orchestrations, Custom Libraries) that you will want to see under load.  LoadGen gives you the opportunity to monitor your solutions (application and infrastructure) under a variety of loads, without having source systems available.  This means you can be proactive (vs. reactive) in identifying those bottlenecks, and other problems, before you've got other team members waiting on you.  Is the view 100% accurate with respect to how it will behave under 'true' production loads?  No.  But you may be surprised at how far that 80% will get you.

Web Service Testing with LoadGen

Yes, you may have heard of this tool referred to as 'BizTalk LoadGen', but in reality, it works very well for generating load against most types of endpoints.  Don't let the BizTalk name fool you.

Maybe you have a web service that you want to drive some load against to verify performance or stability?  Borland Silk Performer and Mercury LoadRunner are excellent testing tools (I've used both); however, in many cases they can be a little heavy-handed.  If you're unfamiliar with those toolsets, then trying to learn their ins and outs just to drive a little load against a single web service is overkill. 

However, with a little patience, and the documentation, you can be up and running driving load against your web service in no time!  No more hand coding a test harness to drive the load, or monkeying with those pesky thread pools.  

Here is a sample web service implementation to illustrate the point:

using System.ComponentModel;
using System.Diagnostics;
using System.Web.Services;

namespace shobu.samples
{
    /// <summary>
    /// Summary description for MessageService
    /// </summary>
    [WebService(Namespace = "http://shobu.com/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    [ToolboxItem(false)]
    public class MessageService : System.Web.Services.WebService
    {
        [WebMethod]
        public string Echo(string message)
        {
            Debug.WriteLine("Echo: " + message);

            return message;
        }
    }
}

 


Now, a simple LoadGen configuration file and we can drive different workloads against the service.  Here is a sample LoadGen configuration file.  Refer to the LoadGen documentation for some of the details.

<LoadGenFramework>
  <CommonSection>
    <LoadGenVersion>2</LoadGenVersion>
    <OptimizeLimitFileSize>204800</OptimizeLimitFileSize>
    <NumThreadsPerSection>12</NumThreadsPerSection>
    <SleepInterval>2000</SleepInterval>
    <LotSizePerInterval>1</LotSizePerInterval>
    <RetryInterval>10000</RetryInterval>

    <StopMode Mode="Files">
      <NumFiles>3600</NumFiles>
      <TotalTime>3600</TotalTime>
    </StopMode>

    <Transport Name="SOAP">
      <Assembly>SOAPTransport.dll/SOAPTransport.SOAPTransport</Assembly>
    </Transport>
  </CommonSection>

  <Section Name="SoapSection">
    <SrcFilePath>c:\temp\loadgen\soap\soap.txt</SrcFilePath>
    <DstLocation>
      <Parameters>
        <URL>http://localhost/Samples/MessageService.asmx</URL>
        <SOAPHeader>SOAPAction: "http://shobu.com/Echo"</SOAPHeader>
        <SOAPPrefixEnv>&lt;soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"&gt;&lt;soap:Body&gt;</SOAPPrefixEnv>
        <SOAPPostfixEnv>&lt;/soap:Body&gt;&lt;/soap:Envelope&gt;</SOAPPostfixEnv>
        <IsUseIntegratedAuth>False</IsUseIntegratedAuth>
        <ResponseMsgPath>c:\temp\loadgen\soap\responses</ResponseMsgPath>
        <!-- US-ASCII Or UTF-8 Or UTF-16BE Or UTF-16LE or UNICODE -->
        <DstEncoding>UTF-8</DstEncoding>
      </Parameters>
    </DstLocation>
  </Section>
</LoadGenFramework>

Some of the things to note:



  • NumThreadsPerSection - the number of threads handling each workload.  I believe this applies to all sections, so there is no way, if I had two workloads (SOAP and File) in one configuration file, to assign a different thread count to each section's thread pool.  Don't quote me on that; I've not tried it.
  • SOAPHeader - I don't believe you can apply multiple SOAP headers.  Here, we are really just applying the necessary SOAPAction.

Control the arrival rate by modifying the NumThreadsPerSection, SleepInterval and LotSizePerInterval elements.  Modify the workload by adding additional sections, e.g. if you have multiple services you want to test on your server.
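The resulting arrival rate is roughly threads × lot size ÷ sleep interval (assuming each thread sends one lot per sleep interval and send time is negligible). A quick sanity check of the configuration above, sketched in Python:

```python
def arrival_rate(num_threads: int, lot_size: int, sleep_interval_ms: int) -> float:
    """Approximate messages/second one LoadGen section will generate,
    assuming each thread sends lot_size messages per sleep interval."""
    return num_threads * lot_size / (sleep_interval_ms / 1000.0)

# 12 threads, lot size 1, 2000 ms sleep -- the sample configuration above
print(arrival_rate(12, 1, 2000))  # 6.0 messages/second
```

So the sample configuration drives about 6 messages/second against the service; double the lot size or halve the sleep interval to double the load.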


Change from a load test to more of a stability test by changing the StopMode to "Time" and allowing it to run overnight, or longer, while recording performance data of your service.


Here are the contents of the file (soap.txt) referenced by the SrcFilePath element:

<Echo xmlns="http://shobu.com/">
<message>hello, world</message>
</Echo>

LoadGen also has the ability to use MessageCreators to make your messages more dynamic.  For example, I could vary the string passed to the service on each call.  I'll look into that a little further soon.


Download LoadGen from Microsoft and give it a try. 



Monday, July 30, 2007

Microsoft SOA and Business Process Conference (10/29-11/2 Redmond, WA)

This is an opportunity to get up close and personal with Microsoft on those topics that matter to you.

I was able to attend this conference last year and it packed more punch around communicating Microsoft’s strategy with Business Process Integration than both PDC and TechEd combined. The only regret I had is that, as an organization, we didn’t have a larger presence to cover each track (Developer, IT Admin, Architect, Business Owner).

As I mentioned, this was the most significant BizTalk (and all things related) presence I’ve seen to date, although it was under their new business unit: Connected Systems Division. This new division includes the Distributed Systems Group, Business Process and Integration, Windows Communication Foundation, Windows Workflow, InfoCard, Active Directory, MSMQ, BizTalk Server, Host Integration Server, Commerce Server, RFID and Industry Standard Accelerators (Rosetta.NET, SWIFT, AS2, EDI, etc).

 

Check it out...

http://mssoaandbpconference.com/

Who is invited to attend?
Space will be limited, but anyone who is interested in SOA and Business Process should attend.
Who should attend the conference?
The conference is intended to educate and inform project managers, business managers, process owners/executives, IT/technical implementers, developers and business/technical consultants. Individuals from IT departments, development organizations, Project Management Offices (PMOs), and those business focused individuals who need to grasp a deeper understanding of how Project, Portfolio and EPM can empower their organization to do more will get great value from this conference.
What is the charge to attend the conference?
Early bird rate is $199. After that point, registration will be $299.

Wednesday, July 25, 2007

Last Hope

Found a link to a cheeseball submission to gamedev.net from way, way, way back in the day.  A screen shot is below.  You can download the game here.

 

 

I can't tell you how absolutely amazed I was that it ran on my PC today! 

Consider when it was written (2000), what the OS was (Windows 98), and what your technology stack looked like (Visual Studio 6, C++ and DirectX 7).  Just think about that for a minute.  No...not the fact that it's some amazingly crafted piece of code (it's not, but thanks anyway) that has withstood the test of time.

Think about the commitment Microsoft has put into the various levels of the application stack to make sure that future versions are backwards compatible.

That's really quite something. 

It got me thinking...will the solutions I'm delivering today be able to run, much less deliver value, 3, 5, or 10 years from now?   It's 2007.  Do you know where your code is?

 

By the way, my current rig at home is running Windows Vista Home Premium with whatever version of DirectX ships with it.  I don't make any claims it will work on your system, nor will I be held liable for making it work (I think I have the source, though).

Another thing...my home development rig back then was a P400 with 128MB RAM.  Woot!

Saturday, July 21, 2007

Managing Humans

Managing Humans
Biting and Humorous Tales of a Software Engineering Manager
Michael Lopp
Apress

 

This was a Joel reading recommendation that panned out very well.  There were many reflections of my career that I found between the pages.  More than once I found myself either laughing out loud or cringing at memories the author brought back.

Managing Humans is a great read, not only for those who might be in a management position, and want to know how to handle the "Monday Freakout", but also for those in the individual contributor position and want to understand why "Managers are not Evil" and why you don't want to wind up like "Fez".

I do believe the author is too critical on himself throughout the book when he is unable to find that one thing that makes a "Fez" tick and get them motivated in growing with the business instead of being inevitably left behind.  Or, maybe that's just the sign of a great manager.

Either way, you'll probably pick it up and burn through it pretty fast.  However, stop to re-read chapter 16: "Malcolm Events".  They happen everywhere, whether you actually realize it or not, but now you have a name for them, and the author offers some sound advice on avoiding them.  A co-worker of mine has the uncanny ability to identify "Malcolm Events" at our office.  He doesn't call them that, but he has accurately predicted several over the years we've worked together (sometimes months in advance).  Some people are just more sensitive to disturbances in the Force than others, I guess.  I'll keep up my training...

Monday, July 16, 2007

BizTalk 2002 Nov. 2001-Jul. 2007 RIP

Kudos to the Mary Kay Integration Team for hitting a significant milestone today!  They have finished successfully migrating a large BizTalk 2002 implementation to BizTalk 2004. 

As most of you know, the upgrade path from versions of BizTalk prior to BizTalk 2004 posed some significant challenges.   For various reasons, there were many BizTalk 'applications' that were left behind as the rest of the business moved forward to take advantage of new BizTalk versions. 

Now that those 'legacy' bits have been upgraded, we are left with a clean upgrade path to BizTalk 2006, and a single BizTalk environment globally; for the time being, anyway!

Way to go guys!

Thursday, July 05, 2007

Getting Started with Microsoft Windows Desktop Development

My brother-in-law came to me with a request to help him get jump-started in developing an application that has QuickBooks integration. We are still discussing what development stack he should adopt for this project. His background is Electrical Engineering, with PERL and C/C++ experience. The application will run on Windows XP SP2 on a local area network (LAN). It will have a GUI and its own data store, and will need to support real-time events into QuickBooks.

A project like this, while small in scope, is going to span a lot of technologies and concepts. For example:

  • C# development language
  • Windows GUI development (Windows Forms)
  • Database access (table creation, queries, backup/restore of data)
  • COM (QuickBooks has a COM API for integration. They also appear to have a web service API [1]) and COM interoperability (going from .NET to ‘legacy’ COM)
  • Xml [2]
  • Version Control
  • Deployment

We are still evaluating what the technology stack will look like using PERL as the development language, but also wanted to assemble the necessary components for a Microsoft development stack.

Microsoft Visual C# 2005 Express Edition

Visual C# 2005 Express is the integrated development environment (IDE) for .NET C# development. It also includes the .NET Framework 2.0 runtime (Microsoft’s answer to the Java runtime). This IDE is mainly for developing applications for the Windows desktop. There are other versions available, but ignore them for now.

  1. Download Visual C# 2005 Express Edition: http://msdn.microsoft.com/vstudio/express/downloads/
  2. Download Visual C# 2005 SP1: http://msdn.microsoft.com/vstudio/express/downloads/

SQL Server Express SP2 Advanced Edition (includes SQL Management Studio)

SQL Server Express is the database engine and IDE for database management/development. It’s a fully functioning version of the enterprise class of SQL Server, just limited to a single CPU, 2GB RAM, etc…it’s supposed to run on the desktop. It will meet your database needs; the Advanced edition simply includes SQL Reporting, SQL Integration Services, and the IDE all in one download.

  1. Download SQL Server Express SP2 Advanced Edition: http://go.microsoft.com/fwlink/?LinkId=65109

Optionally, you can download SQL examples from here. I’d probably recommend it if you have the disk space. However, Books Online (hit [F1] from within SQL Server Management Studio) is a very good place to start.

SQL Server Express Samples

  1. Download SQL Server Samples: http://go.microsoft.com/fwlink/?linkid=65209

Online information

Intuit QuickBooks SDK

You’ll need to register with Intuit to download the QuickBooks SDK; this will be required to get any sort of automatic import/export of data with QB. If you’re still looking to just write to a CSV file and manually import (not a bad way to start to get an understanding of QB buckets), then you don’t need this. However, if you want to be able to enter data in a separate application and have it trigger data updates to QB, then you will need this SDK.

http://developers.intuit.com

[1] I am not familiar with the web service API. I’ll need to review it and see what sort of constraints, if any, it will place on your application. It’s possible that it might even allow you to use PERL to integrate with QB.

[2] QB also uses XML messages as its primary method of data exchange when using the SDK. There may be some sort of CSV option; we’ll need to look for it.

Thursday, June 28, 2007

BizTalk Performance Explorer

Just got directed (via Richard Seroter) to Rob Steel's blog on BizTalk performance.  He is putting together what he refers to as the BizTalk Performance Lab Delivery Guide, which is the process his team has established for successful BizTalk testing.  Go check it out at BizTalk Performance Explorer!  

If you've ever been tasked to test BizTalk you can get a jumpstart from reviewing this blog.  If your managing a BizTalk testing engagement, or any testing engagement for that matter, you'll still find good information around lab process(es).  He's also posted links to several other relevant BizTalk testing information.

Tuesday, June 26, 2007

Debugging Tools for Windows - Crash Analysis

Once you’ve created a snap, you’re ready to analyze the data! See Debugging Tools for Windows - Snapping a Process, Part 1 and Part 2 for steps that led up to this point.

Open Windbg, select File | Open Crash Dump… and select the *.dmp you want to analyze.

Since we are analyzing managed code, we are going to need a little help in the form of a Windbg extension. The extension we are interested in is SOS.DLL (Son of Strike), which is included in the Debugging Tools for Windows installation and allows Windbg to understand CLR constructs such as the managed heap.

To load the SOS extension, type the following in Windbg:

0:068> .load clr10\sos.dll

Alternatively, you can provide the explicit path to SOS for the version of the .NET Framework you’re targeting.

0:068> .load C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\sos.dll

Typically, when a snap is created as the result of an exception (-crash), the debugger will identify which thread raised the exception and position the Windbg context on that thread. One of the first things you’ll want to do is take a look at the call stack. The managed call stack can be viewed using !clrstack, which will produce something similar to the output below.

0:068> !clrstack
Thread 68
ESP EIP
0x0edfe7fc 0x77e55dea [FRAME: GCFrame]
0x0edfeb2c 0x77e55dea [FRAME: ECallMethodFrame] [DEFAULT] R8 System.Number.ParseDouble(String,ValueClass System.Globalization.NumberStyles,Class System.Globalization.NumberFormatInfo)
0x0edfeb3c 0x79a14bef [DEFAULT] R8 System.Double.Parse(String,ValueClass System.Globalization.NumberStyles,Class System.IFormatProvider)
0x0edfeb78 0x79a0e1e1 [DEFAULT] R8 System.Convert.ToDouble(String,Class System.IFormatProvider)
0x0edfeb80 0x00c54d35 [DEFAULT] [hasThis] Boolean Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1.IsNumeric(String)
0x0edfeba8 0x00c55e05 [DEFAULT] [hasThis] Boolean Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1.LogicalEq(String,String)
0x0edfec68 0x791b33cc [FRAME: GCFrame]
0x0edfeddc 0x791b33cc [FRAME: ECallMethodFrame] [DEFAULT] [hasThis] Object System.Reflection.RuntimeMethodInfo.InternalInvoke(Object,ValueClass System.Reflection.BindingFlags,Class System.Reflection.Binder,SZArray Object,Class System.Globalization.CultureInfo,Boolean,Class System.Reflection.Assembly,Boolean)
0x0edfee04 0x799dd299 [DEFAULT] [hasThis] Object System.Reflection.RuntimeMethodInfo.InternalInvoke(Object,ValueClass System.Reflection.BindingFlags,Class System.Reflection.Binder,SZArray Object,Class System.Globalization.CultureInfo,Boolean)
0x0edff4c0 0x00000000 [FRAME: ContextTransitionFrame]


Note that in order to get this level of detail in the stack, you’ll need to have debug symbols available for Windbg to resolve some of the names and information. If you have source code available as well, you can get to the actual source and line data!

A quick review of the managed threads will show whether any threads have exceptions. For example, the output of !threads shows that we have two threads with exceptions: TIDs 63 & 68. Notice how the debugger has already positioned us on TID 68?

0:068> !threads
ThreadCount: 32
UnstartedThread: 0
BackgroundThread: 11
PendingThread: 0
DeadThread: 4

PreEmptive GC Alloc Lock
ID ThreadOBJ State GC Context Domain Count APT Exception
1 0xc0c 0x000f2c10 0xa220 Enabled 0x00000000:0x00000000 0x000e7400 0 MTA
6 0xc54 0x000f2dd8 0xb220 Enabled 0x00000000:0x00000000 0x000e7400 0 MTA (Finalizer)
13 0xe18 0x000f3330 0x220 Enabled 0x00000000:0x00000000 0x000e7400 0 MTA
[...snip...]
62 0xde0 0x0bf1ae48 0x2001020 Enabled 0x00000000:0x00000000 0x000e7400 0 Ukn
63 0x1a8 0x0bf1bc88 0x1800220 Enabled 0x051bb3c8:0x051bd120 0x000e92a8 0 MTA (Threadpool Worker) System.FormatException
17 0xf54 0x0bf1be50 0x220 Enabled 0x00000000:0x00000000 0x000e7400 0 Ukn
19 0xf5c 0x0bf1c1e0 0x220 Enabled 0x00000000:0x00000000 0x000e7400 0 Ukn
67 0x168c 0x0bf1bac0 0x220 Enabled 0x00000000:0x00000000 0x000e7400 0 Ukn
68 0x10a0 0x0bf1b3a0 0x1800220 Enabled 0x0110fa48:0x0110ffc4 0x000e92a8 0 MTA (Threadpool Worker) System.FormatException
69 0x101c 0x0bf1c018 0x1800220 Enabled 0x00000000:0x00000000 0x000e7400 0 MTA (Threadpool Worker)


To locate the FormatException on the thread’s stack, execute the !dumpstackobjects command.

0:068> !dumpstackobjects
Thread 68
ESP/REG Object Name
0xedfe720 0x110f8f0 System.FormatException
0xedfe738 0x110f8f0 System.FormatException
0xedfe74c 0x110f8f0 System.FormatException
0xedfe7b4 0x110f8f0 System.FormatException
0xedfe7dc 0x110f8f0 System.FormatException
0xedfe7ec 0x110f8f0 System.FormatException

0xedfe84c 0x110e24c System.String Format_InvalidString
0xedfe850 0x110e15c System.Resources.ResourceManager
0xedfe890 0x110f8f0 System.FormatException
0xedfe894 0x110f9e4 System.String Input string was not in a correct format
0xedfe89c 0x79b96950 System.Exception
0xedfe91c 0x110e24c System.String Format_InvalidString
0xedfe9d4 0x110e24c System.String Format_InvalidString
0xedfea48 0x110f88c System.String Input string was not in a correct format
0xedfea70 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeabc 0x110e09c System.String val2
0xedfeb08 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeb10 0x51a1a28 System.String MC
0xedfeb18 0x51a1a28 System.String MC
[…snip…]


Here we notice multiple references to a System.FormatException on the stack, all at address 0x110f8f0. We can dump the object using the !dumpobj command, passing it the address of the object in question.

0:068> !dumpobj 0x110f8f0
Name: System.FormatException
MethodTable 0x79bacb74
EEClass 0x79bacbec
Size 64(0x40) bytes
GC Generation: 0
mdToken: 0x02000090 (c:\windows\microsoft.net\framework\v1.1.4322\mscorlib.dll)
FieldDesc*: 0x00000000
MT Field Offset Type Attr Value Name
0x79b96824 0x400001d 0x4 CLASS instance 0x00000000 _className
0x79b96824 0x400001e 0x8 CLASS instance 0x00000000 _exceptionMethod
0x79b96824 0x400001f 0xc CLASS instance 0x00000000 _exceptionMethodString
0x79b96824 0x4000020 0x10 CLASS instance 0x0110f9e4 _message
0x79b96824 0x4000021 0x14 CLASS instance 0x00000000 _innerException
0x79b96824 0x4000022 0x18 CLASS instance 0x00000000 _helpURL
0x79b96824 0x4000023 0x1c CLASS instance 0x00000000 _stackTrace
0x79b96824 0x4000024 0x20 CLASS instance 0x00000000 _stackTraceString
0x79b96824 0x4000025 0x24 CLASS instance 0x00000000 _remoteStackTraceString
0x79b96824 0x4000026 0x2c System.Int32 instance 0 _remoteStackIndex
0x79b96824 0x4000027 0x30 System.Int32 instance -2146233033 _HResult
0x79b96824 0x4000028 0x28 CLASS instance 0x00000000 _source
0x79b96824 0x4000029 0x34 System.Int32 instance 0 _xptrs
0x79b96824 0x400002a 0x38 System.Int32 instance -532459699 _xcode
-----------------
Exception 0110f8f0 in MT 79bacb74: System.FormatException
_message: Input string was not in a correct format.


Notice that we’ve already got useful information: the _message field has been dumped for us. However, we could use !dumpobj 0x0110f9e4 and inspect it ourselves. If we do so, we find that it’s actually a reference to a System.String.

0:068> !dumpobj 0x0110f9e4
Name: System.String
MethodTable 0x79b94638
EEClass 0x79b94984
Size 100(0x64) bytes
GC Generation: 0
mdToken: 0x0200000f (c:\windows\microsoft.net\framework\v1.1.4322\mscorlib.dll)
String: Input string was not in a correct format.
FieldDesc*: 0x79b949e8
MT Field Offset Type Attr Value Name
0x79b94638 0x4000013 0x4 System.Int32 instance 42 m_arrayLength
0x79b94638 0x4000014 0x8 System.Int32 instance 41 m_stringLength
0x79b94638 0x4000015 0xc System.Char instance 0x49 m_firstChar
0x79b94638 0x4000016 0 CLASS shared static Empty
>> Domain:Value 0x000e7400:0x05010224 0x000e92a8:0x05010224 <<
>> Domain:Value 0x000e7400:0x05010238 0x000e92a8:0x0509c078 <<


Running !clrstack -a combines the call stack with the stack objects and register contents for each frame.

0:068> !clrstack -a
Thread 68
ESP EIP
ESP/REG Object Name
0x0edfe7fc 0x77e55dea [FRAME: GCFrame]
ESP/REG Object Name
0xedfe84c 0x110e24c System.String Format_InvalidString
0xedfe850 0x110e15c System.Resources.ResourceManager
0xedfe890 0x110f8f0 System.FormatException
0xedfe894 0x110f9e4 System.String Input string was not in a correct format
0xedfe91c 0x110e24c System.String Format_InvalidString
0xedfe9d4 0x110e24c System.String Format_InvalidString
0xedfea48 0x110f88c System.String Input string was not in a correct format
0xedfea70 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeabc 0x110e09c System.String val2
0xedfeb08 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeb10 0x51a1a28 System.String MC
0xedfeb18 0x51a1a28 System.String MC
0xedfeb1c 0x51a1a28 System.String MC
0x0edfeb2c 0x77e55dea [FRAME: ECallMethodFrame] [DEFAULT] R8 System.Number.ParseDouble(String,ValueClass System.Globalization.NumberStyles,Class System.Globalization.NumberFormatInfo)
ESP/REG Object Name
0x0edfeb3c 0x79a14bef [DEFAULT] R8 System.Double.Parse(String,ValueClass System.Globalization.NumberStyles,Class System.IFormatProvider)
EDI 0x051a1a28 ESI 0x000000e7 EBX 0x0bf1b3a0 EDX 0x00000000 ECX 0x0bf1b3a0
EAX 0x0edfe704 EBP 0x0edfeb6c ESP 0x0edfeb3c EIP 0x79a14bef
ESP/REG Object Name
0xedfeb3c 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeb44 0x51a1a28 System.String MC
0xedfeb48 0x51a1a28 System.String MC
0xedfeb4c 0x5163a38 System.Globalization.NumberFormatInfo
0xedfeb50 0x51a1a28 System.String MC
0xedfeb58 0x501590c System.RuntimeType
0xedfeb5c 0x509b7f0 System.Globalization.CultureInfo
0xedfeb68 0x51a1a28 System.String MC
0xedfeb74 0x5163a38 System.Globalization.NumberFormatInfo
0x0edfeb78 0x79a0e1e1 [DEFAULT] R8 System.Convert.ToDouble(String,Class System.IFormatProvider)
EDI 0x051a1a28 ESI 0x051a1a28 EBX 0x0bf1b3a0 EDX 0x00000000 ECX 0x0bf1b3a0
EAX 0x0edfe704 EBP 0x0edfeba0 ESP 0x0edfeb78 EIP 0x79a0e1e1
ESP/REG Object Name
0xedfeb78 0x51a1a28 System.String MC
0x0edfeb80 0x00c54d35 [DEFAULT] [hasThis] Boolean Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1.IsNumeric(String)
EDI 0x051a1a28 ESI 0x051a1a28 EBX 0x0bf1b3a0 EDX 0x00000000 ECX 0x0bf1b3a0
EAX 0x0edfe704 EBP 0x0edfeba0 ESP 0x0edfeb80 EIP 0x00c54d35
ESP/REG Object Name
0xedfeb84 0x10f2524 Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1
0xedfeb88 0x51a1a28 System.String MC
0xedfeb9c 0x10f2524 Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1
0x0edfeba8 0x00c55e05 [DEFAULT] [hasThis] Boolean Microsoft.Xslt.CompiledScripts.CSharp.ScriptClass_1.LogicalEq(String,String)
EDI 0x051a1a28 ESI 0x010f2524 EBX 0x0bf1b3a0 EDX 0x00000000 ECX 0x0bf1b3a0
EAX 0x0edfe704 EBP 0x0edfebb8 ESP 0x0edfeba8 EIP 0x00c55e05


Looking at the contents of the EDI register, we have the address 0x051a1a28. If we dump the object at that address, we see that it is indeed a string with the value of ‘MC’. This value obviously cannot be converted to a double, so a System.FormatException is thrown. The astute observer will notice that the address can be traced back up the call stack to the very top. Recognizing that sort of pattern, with a little insight into what’s going on, will go a long way!



Conclusion

I actually contrived this issue just a little. Not by much, granted; an unhandled System.FormatException will terminate a process just as fast as a System.OutOfMemoryException. However, I didn’t have any sample code illustrating an unhandled exception; each of us can create that in our sleep. What I did have was a live test case in our BizTalk staging environment where, while doing a little issue resolution on another exception type, I noticed this little guy being thrown. Not often, but I was seeing them.

This System.FormatException obviously wasn’t terminating the process; we’d have noticed it much sooner if it had! That means it was being handled somewhere, either in our code base, or in BizTalk. By attaching Windbg to the BTSNTSvc.exe process, and giving the debugger first chance at any exceptions (Adplus –crash –p xxxx -FullOnFirst) I was able to catch the exception in action. Reviewing the call stack produced shows that this exception was happening during the execution of a BizTalk map, from an orchestration, specifically when a particular functoid was being invoked: IsNumeric. It seems that IsNumeric is simply attempting to parse the value into a System.Double. Presumably, if it can, it returns true. If it can’t, the System.FormatException is thrown, but then caught in IsNumeric and then IsNumeric returns false. This catch uncovered a bug in the map that hadn't yet been flushed out in testing.