Saturday, October 16, 2010

Getting Started with WCF Routing Service, Part 1

"Change is inevitable. Change is constant."
  --  Benjamin Disraeli

In software development, everyone expects change.  Change is inevitable. It's constant. It's the law of the land. The Disraeli Principle and all that.  This holds true throughout the development of software.

Once software reaches maturity and is released, our expectations of change shift radically. A new law takes effect, one almost diametrically opposed to the first:

"Users hate change."
  --  anonymous

How we manage that change can make all the difference in the world.

In developing WCF services, we have a new option for mitigating change to a service endpoint, contract or behavior: the WCF Routing Service.

The WCF Routing Service is available with Microsoft .NET 4.0 and is best described as an out-of-the-box SOAP router for WCF.

What can I do with a SOAP router, you might ask?

With an intermediate software router you can perform any of the following (a minimal hosting sketch follows the list):

  • Protocol Bridging
  • Content Based Routing
  • Access Control
  • Unified Exception Handling
  • Logging
  • Load Distribution / Balancing
  • just about anything you can think of, both good and bad
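
To get a feel for it, here is a minimal sketch of self-hosting the routing service in code and forwarding everything to a single backend. The addresses, the BasicHttpBinding choice, and the class name are placeholders for illustration; in practice the router is more commonly wired up in app.config/web.config rather than in code.

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.ServiceModel.Routing;

class RouterHost
{
    static void Main()
    {
        // Host the out-of-the-box RoutingService (hypothetical local address).
        var host = new ServiceHost(typeof(RoutingService),
            new Uri("http://localhost:8000/Router"));

        // Expose the generic request/reply router contract to clients.
        host.AddServiceEndpoint(typeof(IRequestReplyRouter),
            new BasicHttpBinding(), string.Empty);

        // The destination the router forwards to (hypothetical backend service).
        var destination = new ServiceEndpoint(
            ContractDescription.GetContract(typeof(IRequestReplyRouter)),
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost:8001/ActualService"));

        // Route every incoming message to that one destination.
        var config = new RoutingConfiguration();
        config.FilterTable.Add(new MatchAllMessageFilter(),
            new List<ServiceEndpoint> { destination });
        host.Description.Behaviors.Add(new RoutingBehavior(config));

        host.Open();
        Console.WriteLine("Router is listening. Press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}

Swapping MatchAllMessageFilter for other filter types is where the interesting scenarios (protocol bridging, content-based routing) come in, which is what the next posts cover.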

Over the next few posts, I’ll be introducing myself to the scenarios supported by the WCF Routing Service and sharing what I find along the way, along with how I might use them in the real-world scenarios I’m currently working on.  The features I’m most interested in are:

  • Part 2: Protocol Bridging
  • Part 3: Content Based Routing
  • Part 4: Access Control

If you're looking for something now, go check out the documentation on the WCF Routing Service.

If you're looking to kick it old school, check out Building a WCF Router, Part 1 and Part 2 by Michele Bustamante.

Friday, August 06, 2010

[TestMethod] Code Snippet

I was writing lots of tests today and found myself typing a lot of repetitive code just to insert test methods, so I created the ‘tm’ code snippet.

Of course, as I installed it, I realized that there was already a ‘testm’ code snippet to do almost exactly the same thing.  I could/should have modified the original ‘testm’ to follow the naming convention for our tests.  The moral of that story is look before you leap.

This gist will generate the following code snippet:

[TestMethod]
public void TestSuite_TestCondition_Assertion()
{

}

Thursday, April 08, 2010

GetSmarx C# vs. Ruby (and something about Shaving a Yak)

Steve Marx, of Windows Azure fame, recently posted “annoy.smarx.com: Letting the Internet Choose My Wallpaper”, on how he allows the Internet to choose his wallpaper using Windows Azure and the Service Bus.  A somewhat cool use of the technology, showcased in a very easy-to-follow manner.

Of course, the little devil in me thought how easy it would be to write a snippet of code to create an Azure Worker role to randomly change his wallpaper – anything worth doing is worth automating, right? 

The requirements were pretty simple (a rough sketch follows the list).

  1. http GET http://annoysmarx.cloudapp.net/
  2. Parse the xhtml response (thanks, Steve!)
  3. Identify the wall paper links
  4. Randomly select a single link resource
  5. https GET of the resource (e.g. https://annoysmarx.servicebus.windows.net/AnnoySmarx/SetWallpaper?id=ac21c047-f437-4e01-a0d1-25ee47e128b3)
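
For reference, here is roughly what those five steps look like in C#. This is a hedged sketch, not the actual GetSmarx.cs from the post: the regular expression assumes the page links directly to the Service Bus SetWallpaper URLs, which may not match the real markup.

using System;
using System.Linq;
using System.Net;
using System.Text.RegularExpressions;

class GetSmarx
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // 1-2. GET the page and grab the response body.
            string html = client.DownloadString("http://annoysmarx.cloudapp.net/");

            // 3. Identify the wallpaper links. The pattern is an assumption about
            //    the markup; adjust it to whatever the page actually renders.
            var links = Regex.Matches(html,
                    "href=\"(https://annoysmarx\\.servicebus\\.windows\\.net/[^\"]+)\"")
                .Cast<Match>()
                .Select(m => m.Groups[1].Value)
                .ToList();
            if (links.Count == 0) return;

            // 4. Randomly select a single link.
            string chosen = links[new Random().Next(links.Count)];

            // 5. GET the chosen resource to trigger the wallpaper change.
            client.DownloadString(chosen);
            Console.WriteLine("Requested: " + chosen);
        }
    }
}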

Then I realized I don’t have the money to shell out just to poke Steve Marx, whom I don’t know.  Then I thought, well, I *do* have VS2010 Ultimate RC1 running, including the Tester's edition (which rocks, btw), so I thought I’d just plop the code into a virtual user (VU) and let it (or multiple VUs!) run every hour.

But, you see, I’ve been hearing so much about Ruby, and what an awesome language it is.  I’ve even started noodling with it on the side (shhh….don’t tell C#).  I do kind of like Ruby, but it can be raw.  Kind of like how I view Linux – on the surface it looks great, especially now, but it can get rough quickly if you're anything more than just a casual user.

Anywayz….I then got distracted with creating a snippet of Ruby code to do the same thing I did in C# just to see if life was better. 

So, just like your optometrist…which do you like better?

1)  GetSmarx.cs

or

2) GetSmarx.rb

Of course, the C# version took about as long as it took to type it.  The Ruby version took much longer, as I had to consult the almighty Google for much of it.  It's times like these (Foo Fighters FTW!) that make the heart all warm, when you realize what a rich ecosystem Ruby has underneath it.

And that is how Steve Marx was saved from being sent into an epileptic coma by his constantly changing desktop background due to my being so worn out by Shaving a Yak tonight that I just plain lost interest.

Good night, Steve.  Thanks for the Azure content.  Keep it coming!

Organize XAML Namespace Declarations with XmlnsDefinitionAttribute

Microsoft does it and you can do it, too!

Whenever you need to use a control from another assembly, you need to provide an XML namespace for it to avoid collisions with existing types. By default, you have to explicitly reference the CLR namespace for each control you need access to in your XAML.

This might lead to declarations like this in your XAML:

<UserControl
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:treeView="clr-namespace:MaryKay.SamPortal.Common.UI.TreeView.Views;assembly=MaryKay.SamPortal.Common.UI"
    xmlns:infoBar="clr-namespace:MaryKay.SamPortal.Common.UI.InfoBar.Views;assembly=MaryKay.SamPortal.Common.UI">
</UserControl>


This can get ugly pretty fast if you have lots of controls in your XAML view!

However, notice how clean the Microsoft XML namespaces are? You get a lot of controls brought into scope by leveraging those simple namespaces. This is accomplished by using the assembly-level XmlnsDefinitionAttribute.

From MSDN:

"Specifies a mapping in an assembly between an XML namespace and a CLR namespace."

This means it allows us to provide a facade, or alias, for a namespace or group of namespaces within an assembly. That makes it much easier to bring all the controls in a control library into scope.

For example, in the MaryKay.SamPortal.Common.UI library, all of the controls are surfaced through the namespace alias “urn:marykay-samportal-common-ui” using XmlnsDefinition in the assembly's AssemblyInfo.cs:

[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.InfoBar.Views")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.RoleGroupPicker.Views")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.BetterPopup")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.TextEditor")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.Converters")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.Documents")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.SplashScreen")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.TemplateSelector")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.ModalDialog")]
[assembly: XmlnsDefinition("urn:marykay-samportal-common-ui", "MaryKay.SamPortal.Common.UI.ConsultantSearch.Views")]

and so forth..

This cleans up our required XAML declarations to simply:

<UserControl
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:commonUI="urn:marykay-samportal-common-ui">
    <commonUI:InformationBar DataContext="{Binding InfoBar}"/>
</UserControl>

This makes our XAML much cleaner and easier to work with. It even supports IntelliSense!

Wednesday, March 31, 2010

PANDA – Packaging ANd Deployment Automation

Just read “CoApp: An open-source package management system for Windows”.

I really respect what CoApp is trying to do.  It can be amazingly difficult, and tedious, to deploy various application types on the Windows platform.  It's not like the problem hasn't been solved.  It's unfortunately been solved too many times, by too many.

Of course, anytime you have something amazingly tedious and difficult it needs to be automated. The Windows platform has needed this forever. 

Unfortunately, what Windows users got, at least for the desktop, was Windows Installer technology.  I never really liked this technology, for a variety of reasons, but the biggest problem I have with it is that I just don’t care.  I don’t care to see your company’s logo, I don’t care to see a progress bar going across the screen, I don’t care to plug in options – I just don’t care.  I got frustrated that it took someone with a PhD in Windows Installer technology to produce a package you could install.

There have been advancements recently, most notably MSDeploy, that seem like they should fit the bill, but they still seem focused on web projects only.  I still have hopes for an extensibility model for MSDeploy that is something we can build on.  The Web Platform Installer is also a huge step forward – again, it would be nice if it were opened up, which is where I think CoApp might come in.

You know your data center operations teams have also solved this problem of packaging, deployment, and dependency management, but it's still likely focused on a couple of notable application types: web applications and Windows services.

I work for corporate America - which has a plethora of bad practices, some I probably perpetuate, and many I'm not proud of - but every once in a while we go and do something worthwhile.  For us, it was the development of PANDA.  PANDA, Packaging and Deployment Automation, is as much process as it is actual tooling.

PANDA actually started with the requirement of a repeatable deployment for web sites, Windows services, ClickOnce applications, and basic file copies.  To solve this problem, we chose to rely heavily on convention, with a sprinkle of configuration metadata.

The idea of a PANDA ‘package’ was born, and that package had a specific structure.   In addition, it was decided that all applications would have specific target locations, based on conventions set by our operations team.   A PANDA package, being heavily file centric, might look like this:

  • /WindowsServices/{service}
  • /Web/{website}/{virtual root}
  • /ClickOnce/{app}
  • /Config
  • /Workflows
  • /Files/{bucket}

This package structure, combined with some magical application metadata, would allow us to write generic MSBuild scripts that would simply know how to execute what was in the package.  Samples of application metadata for a Windows service might be:

  • service name
  • display name
  • service account
  • executable
  • startup type
  • restart properties

The Windows service developer would include in their project not an Installer project, but simply an XML file with a well-known schema – a .deploy file.  In this case, it would be service.deploy.  Changing any of the properties of the service simply meant modifying the service.deploy file and executing the deployment; the actual application code didn’t need to be recompiled each time.
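
The actual .deploy schema is internal to PANDA, so the element and property names below are hypothetical, but as a rough sketch of the idea, the deployment tooling only has to deserialize that metadata into an object the MSBuild tasks can consume:

using System.IO;
using System.Xml.Serialization;

// Hypothetical shape of the service.deploy metadata described above;
// treat the names as placeholders, not the real PANDA schema.
[XmlRoot("service")]
public class ServiceDeploy
{
    public string ServiceName { get; set; }
    public string DisplayName { get; set; }
    public string ServiceAccount { get; set; }
    public string Executable { get; set; }
    public string StartupType { get; set; }   // e.g. Automatic, Manual, Disabled
    public int RestartDelaySeconds { get; set; }
}

public static class DeployFileReader
{
    // Reads a *.deploy file into the metadata object the deployment targets use.
    public static ServiceDeploy Load(string path)
    {
        var serializer = new XmlSerializer(typeof(ServiceDeploy));
        using (var stream = File.OpenRead(path))
        {
            return (ServiceDeploy)serializer.Deserialize(stream);
        }
    }
}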

Other application types would have their own metadata schema.

This PANDA package is then what all of our software builds produce.  We use Team Foundation Server, but instead of regular output from a build, build output gets rolled into a PANDA package.  If a developer needs to install those bits, either on their local machine, or a server, the experience is the same – they execute the deploy.bat found in the package.

Assuming that the deployment package had a consistent structure, we then built MSBuild targets that knew how to install whatever happened to be in the package.  Our deployment tools, and scripts, really just reused some community task libraries and a single custom library that was essentially a data access layer for our PANDA package.

The overall execution of the deployment had just a few high level MSBuild targets:

  • PreDeploy – users could provide their own MSBuild scripts, following the convention of *.predeploy and PANDA would execute them
  • StopServices – can’t copy over files in use
  • CopyFiles – a top-level target that…copies files; it actually chains together the other file copy operations
    • CopyConfig, CopyServices, CopyWebs, CopyClickOnces, CopyWorkflow
  • UninstallServices – it was easier to uninstall and reinstall services than to update them in place
  • InstallServices – since we uninstall, we pick up any changes to metadata (e.g. service name)
  • StartServices – respected the startup type in the service metadata
  • PostDeploy – users could provide their own MSBuild scripts, following the convention of *.postdeploy, and PANDA would execute them

Each target was smart enough that if there was nothing in the package that matched its criteria, then it didn’t do anything.  And that's it.  The MSBuild scripts just know how to execute the package, which has a consistent structure. 

Here is the breakdown of *.deploy files:

  • service.deploy – metadata for Windows services
  • site.deploy – web site metadata
  • apppool.deploy – app pool metadata
  • webapp.deploy – virtual directory metadata
  • clickonce.deploy – ClickOnce metadata
  • files.deploy – file manifests; only required for copying files that didn’t already fall into an application archetype – usually one-offs

A PANDA package could contain a single file, a single set of application components, an aggregate of many application components, or indeed entire systems – all in a single PANDA package with a one-click installation (deploy.bat).

Other features of PANDA (not exhaustive):

  • application metadata can specify an arbitrary target server role (e.g. ‘web server’, ‘app server’, ‘mylabenvironment’), so application bits only get copied and installed if the target server’s role matches
  • environmental overrides – e.g. use different service account name for DEV, QA, Staging and Production environments.
  • use of hashed passwords – we request provisioning of service accounts from our security group.  We don’t actually receive a password, but a hash.  PANDA will decrypt the hash and install using the actual accounts.  Passwords aren’t out in the open, and developers couldn’t care less what the actual value is.

Some notable PANDA shortfalls:

  • PANDA packages are not zipped.  We have a step we could insert into the packaging process to zip up the bits; we just aren’t using it because we are blessed with near-infinite bandwidth in our data centers.
  • PANDA tools need to be deployed to any server, or desktop, prior to attempting to execute a PANDA package.  We would like to eventually include in the package the tools required for that package version.

I didn’t realize I had typed this much until just now.  My intention was not really to toot the horn (toot! toot!) on PANDA – rather, to point out the need to roll something like PANDA at all.

Granted, PANDA is in its infancy.  My preference is that it stay there and that no more time be spent enhancing it.  It meets the current needs of several different teams.  I’d rather it were killed off by something supported by either Microsoft or the community – we don’t want to maintain it indefinitely, but probably would if we had no other choice.   I’ve often thought about approaching our management team about releasing PANDA into the community in hopes it would gain support there.

Would anyone else use something like PANDA?

Wednesday, February 10, 2010

@ntdebugging Q&A Question Opportunity

The team responsible for the Advanced Windows Debugging and Troubleshooting blog is looking for your “burning debug questions”, which they hope to answer in an upcoming Q&A article.  You can tweet your questions to @ntdebugging for consideration.  If you're remotely interested in Windows debugging, or anything debugging-related, I highly recommend checking them out!

Unfortunately, I am finding it particularly hard to frame my question in 140 characters this morning, and short of a vertigo-like experience where I realize “I know Kung-Fu”, I thought I’d post my question here in hopes of getting it answered by the team that does know kung fu.

Realize that anything posted here, especially if false, is due to my imperfect understanding of the experiences and knowledge imparted by Advanced Windows Debugging and Troubleshooting, Tess Ferrandez (and her many compatriots!), Jeffrey Richter, and the Godfather of the Debugger, John Robbins.

The Setup

.NET allocates memory for each generation’s heap (0, 1, 2, LOH) in segments, in order to get a contiguous block of memory when it attempts to satisfy an allocation request after a collection and can’t.  The memory for each heap will likely level off as the application “warms up”, except potentially for generation 2 and the large object heap.  During a garbage collection, each heap (0, 1, 2) is swept and compacted, except for the large object heap (LOH), which is just swept.

I understand the ‘sweep’ part of a collection to mean that the GC identifies which objects are no longer rooted and are available for collection (or finalization), and ‘compact’ to mean that the objects still alive in a heap are relocated so that the heap has more contiguous memory available to it.   As the budget for each segment within the heap is exceeded, .NET will allocate another segment in order to fulfill allocations, if it can.
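
To make the setup a little more concrete, here is a small sketch that contrasts what the GC reports for the managed heap with the process’s private bytes after a full collection.  The gap between the two numbers includes memory the CLR has committed in its segments but that isn’t live managed objects (along with everything else in the process: code, stacks, native allocations), which is roughly the memory the question below is about.  The 50 MB figure is arbitrary.

using System;
using System.Diagnostics;

class HeapWatch
{
    static void Main()
    {
        // Allocate a large array (it lands on the LOH), then drop it and collect.
        var buffer = new byte[50 * 1024 * 1024];
        Console.WriteLine("Allocated {0:N0} bytes", buffer.Length);
        buffer = null;
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        // What the GC says is live in the managed heap...
        long managed = GC.GetTotalMemory(true);
        // ...versus what the process is still holding from the OS.
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;

        Console.WriteLine("GC heap:       {0:N0} bytes", managed);
        Console.WriteLine("Private bytes: {0:N0} bytes", privateBytes);
    }
}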

The Question

My question comes down to this: what happens to the memory in each heap that may no longer be used (committed) by the application, but is still reserved by .NET?  When is it released back to the OS?

I’ve heard of rituals such as forcing Windows itself into a low-memory condition so that it asks each process to give up any reserved memory, or minimizing a process and then restoring it, which essentially does the same thing.  These are just rumors, and I’d prefer to know.

There are also rumors of something involving chickens, but I’d rather not go there…no one likes rituals, they are unreliable and often messy (think of the chickens!).