Saturday, December 27, 2008

MSBuild Blahs

I did get to spend more time with MSBuild and I've got to say that, as a build tool from Microsoft, it's spiffy.  As a scripting tool/language for automating packaging or deployment, it comes in as just 'ok'.  There are two constraints with MSBuild that require a slightly different way of thinking when approaching how to automate different parts of the package and deployment process: the lack of a foreach construct, and the fact that item collection (ItemGroup) conditions are only evaluated once when determining whether a Task executes for that ItemGroup (for performance).

Neither is insurmountable.  The first is slightly problematic for me - I just seem to be wired to think in 'foreach'.  The second, item collections only being evaluated once, means that if I've got an ItemGroup with different metadata values, I can't just throw the ItemGroup at a Task and let the Condition on the Task determine what executes - only the first item(s) to match the condition will execute (or not) - the rest won't be re-evaluated, even if their values are different.  This means I've got to cruft up a dynamic ItemGroup to throw at the Task, as sketched below.  Nant does foreach very well - it's a first-class language construct.
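
For what it's worth, task batching over item metadata is the closest MSBuild comes to a foreach.  Here is a minimal sketch - the item and metadata names are invented for illustration:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <ItemGroup>
    <Deploy Include="web.config">
      <Environment>QA</Environment>
    </Deploy>
    <Deploy Include="app.config">
      <Environment>Prod</Environment>
    </Deploy>
  </ItemGroup>

  <!-- %(Deploy.Environment) batches the task: it runs once per distinct
       Environment value, with @(Deploy) scoped to the items in that batch. -->
  <Target Name="ShowEnvironments">
    <Message Text="Deploying %(Deploy.Environment): @(Deploy)" />
  </Target>
</Project>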

Oh, I'd like to throw a third in there as well - easily iterating the child folders of a parent.  There seem to be workarounds: parsing dir command output, writing a custom task, or using third-party community tasks.  None of which is intuitive enough for me.  Nant has a 'Folder' collection that works well in this scenario.

The other thing that just stands out is that I have never, as in EVER, lost any time figuring out how to accomplish something with Nant.  Even back when getting started, I don't remember being stymied in getting something done.  Just set 'em up and knock them out.  With MSBuild, it's par for the course.  Make some progress, get stymied for an hour or three, make some progress, roll back that progress, rinse and repeat.

Clunky - but doable.  I'm getting it, I just don't like it.

What still amazes me is that people - myself included, for which I am still slightly embarrassed - keep trying to justify using one tool or the other.  Because we are primarily a Microsoft shop, Microsoft tools tend to get a priority they may not rightly deserve.  In any case, with time == money, if I can accomplish a task in half the time (or less in some cases), why keep pushing for the 'breakthrough' with a different tool?  It just doesn't make sense.  I think it comes back to pride/ego: I can do this.

Make sure you've got the right reasons for tool selection.  Vendor preference is not necessarily the right one.  Having to distribute the runtime is such a minor inconvenience as to not be worth mentioning - especially for something as file-copy centric as Nant.  Integration with TFS?  TFS can kick off just about any process - Nant is just fine.  Nant can call MSBuild just fine (and vice versa).  There are really two considerations: development and maintenance.  How fast can you get something done, and how easy is it to maintain when you need to fix/enhance it?

One co-worker's mantra is "build with MSBuild, but automate with Nant" - and he should know.  I asked him to walk this path (two years ago?) and had him automate our package and deployment process with MSBuild - for the wrong reason.  He did it, but it took him much longer than expected, for many of the same reasons I've mentioned here - and when he did it, there wasn't a whole lot of blog content to fall back on either.

Don't get me wrong - MSBuild will work.  I really do like the Item metadata feature of MSBuild, and I am *that* much closer to the breakthrough that I'm probably not going to stop now - it's a matter of pride, stupid pride.

Thursday, December 18, 2008

[DataMember] Code Snippet

So I am sitting here, feeding the cold I’ve had, and I found myself creating a DataContract – something that got even easier with the introduction of automatic properties in C# 3.0.

I tend to forget about the Visual Studio feature of Code Snippets, but I am usually reminded again anytime I see a presentation by an experienced technical presenter – then I forget again.

Today, I happened to remember our last instructor making heavy use of the ‘prop’ expanding code snippet during demos and thought - “Hey, I bet they have one for automagically creating properties marked with the DataMember attribute”.  Well, I didn’t immediately see it and asking the great information oracle didn’t reveal anything interesting, so I set out to copy the existing ‘prop’ code snippet.

Turns out it was even easier than I expected – it took longer to write this post than it did to create the snippet.

Save the following into a file named dmprop.snippet (or something suitable) and drop it into your C:\Users\xxxxx\Documents\Visual Studio 2008\Code Snippets\Visual C#\My Code Snippets folder, and voila!  Visual Studio should automatically detect the new snippet.  If not, you can add it manually via Tools | Code Snippets Manager.

Now you can type ‘dmprop’ [tab] [tab] and get the same property expansion as ‘prop’, but marked with a DataMember attribute.  Cheesy, but I figured it was appropriate as I was snacking on some sharp cheddar.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>dmprop</Title>
      <Shortcut>dmprop</Shortcut>
      <Description>Code snippet for an automatically implemented [DataMember] property</Description>
      <Author>Your name here</Author>
      <SnippetTypes>
        <SnippetType>Expansion</SnippetType>
      </SnippetTypes>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>type</ID>
          <ToolTip>Property type</ToolTip>
          <Default>int</Default>
        </Literal>
        <Literal>
          <ID>property</ID>
          <ToolTip>Property name</ToolTip>
          <Default>MyProperty</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp"><![CDATA[[DataMember]
public $type$ $property$ { get; set; }$end$
]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>
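
For reference, here is what the expansion looks like once the snippet is installed - the Consultant class below is just a hypothetical host for the generated property:

using System.Runtime.Serialization;

[DataContract]
public class Consultant
{
    // typing 'dmprop' [tab] [tab] expands to the two lines below, with
    // 'int' and 'MyProperty' as the tab-through replacement fields
    [DataMember]
    public int MyProperty { get; set; }
}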

Saturday, November 22, 2008

Too Many Curses: A. Lee Martinez

A decent, easy read.  So far, I have liked every book by A. Lee Martinez.  No, scratch that.  So far, I have liked, to loved, every book by A. Lee Martinez. 

You want to know why I love books by A. Lee Martinez?  I’m guaranteed a decent story, plots just a step outside the norm, characters who are definitely unique, and a complete story in a single book.  Eh?  What’s that?  Yes, you heard me right.  A complete story, cover to cover, in a single book.  Uno.  Un. Ein. 

Don’t believe me?   Didn’t think authors did that anymore?  Well, neither did I.  Especially not in the sci-fi/fantasy genre.  Check out any of his other books to date and you won’t be disappointed.  Of course, that doesn’t come with a guarantee or anything.

Gil’s All Fright Diner remains my favorite, but they all have been great!  Check ‘em out!

Sunday, November 09, 2008

The Stormcaller: Book One of the Twilight Reign, Tom Lloyd


Tom Lloyd is off to a good start with The Stormcaller.  He adds an interesting take to the age-old plot of the nobody who suddenly finds himself a somebody.  It's almost as if the author took a pinch of The Wheel of Time, added a dash of A Song of Ice and Fire, and stirred it all together with The Runelords to give us The Stormcaller.  I'm not comparing it to those notable titles, but it does seem to draw breath from them.  It doesn't take long to get caught up in the story and get carried through to the end.

The dialog has a couple of rough spots - mainly, I lost track of who was speaking, which meant I had to re-read several important dialogs multiple times.  I also wasn't sure I liked how quickly the hero transforms from outcast bumpkin to Superman, but that can be overlooked, especially if the stage is being set for even greater challenges in future books.

Caching and Exception Handling with WCF

Todd Stone and I will be presenting on some WCF topics at the DFW Connected Systems User Group this week.  Check it out: Caching and Exception Handling with Windows Communication Foundation.

Hope to see you there!

Wednesday, October 29, 2008

Windows Azure

Aaron gives a great summary here with Windows Azure: A New Era for Microsoft Developers.  This is extremely helpful for those of us who didn't get to this year's PDC!  Oh, woe of woes... :)

I'm extremely interested in this space (who isn't?), but I'm interested in finding out if "Azure" is MORE than sweeping a bunch of relevant existing technologies under a new name.  Though, even if that's the case, it's still better than what MS was offering before - they were seriously fragmented in this space, an identity crisis if you will - they didn't have the coherent vision that competitors (Amazon) had.  Azure, if it does nothing else, will at least provide that vision.  This is still a big win!

Microsoft also often becomes untouchable in a particular space because of its commitment to a quality developer experience.  It sounds as if this might be the case again in the 'cloud computing' domain.  I can't wait to get hold of some of the bits to see what the experience is like!

No matter how cool the platform is, the developer experience can make or break it!

Saturday, October 25, 2008

Toll the Hounds, Steven Erikson

Steven Erikson's Malazan Book of the Fallen series has been a phenomenal ride for seven strong books.  When I say strong, I mean Lance Armstrong endurance strong.  However, with book eight, titled Toll the Hounds, I feel like he really missed his stride, stumbled hard, didn't quite fall - but it was a very, very close call.

Is it on par with the previous books?  No.  Is it a decent read?  Yes - though I felt mentally exhausted many times while reading the book.

Normally I can blow through an Erikson book in about seven days (weekend to weekend).  Toll the Hounds took me closer to a month to finish.

Where does the book fall short?  Erikson, like many authors, will sometimes establish a large number of seemingly orthogonal plot lines and then weave them into the main plot line when, and where, appropriate - sometimes even spanning novels.  Erikson had been highly successful in doing this until Toll the Hounds.  There is a limit to the number of plot lines I can handle; I don't know what it is, but it's probably a magic number contingent on many factors.  Erikson may have come close to this threshold many times, but with Toll the Hounds, he crossed over.

I have a rule that I generally apply to books: grab my attention in the first 100 pages or so, or it's done.  I may actually finish the book (a sickness, really), but I've already checked out on it.  With the size of the tomes coming out these days, I'll give as much as 250 pages.  Toll the Hounds did not grab my attention, or fully engage me, until around page 535.  I was brought out of my induced autopilot-like reading and we had Erikson in all of his glory for about 100 pages, before falling back into the realm of mediocre.

I still have a huge amount of respect for this work.  You just have to, for anyone who can maintain the level of writing that he has for ~5,600 pages before running into any real trouble.  I could go on, and on, but basically it would come out blah, blah, blah - this series has had me grossly entertained to date and I look forward to the next two (?) novels!

Wednesday, October 01, 2008

Amazon EC2 for Windows

Amazon EC2 Running Windows Server: Coming Soon

Rock n Roll!  This kind of news is exactly what I've been hungering for!  I am extremely enamored with the idea of being able to push units of work (code+data) into the cloud for processing.  These infrastructures have typically been Linux and Python/Java(script)/Perl based.

Being able to write code with familiar OS, tools and (hopefully!) .NET language of choice, and push that into the cloud makes me positively giddy!

I had presumed that the licensing costs of a Microsoft-based stack would keep anyone but Microsoft from doing this.  They still haven't released the cost of a Windows-based EC2, so it's still a question on the table.

The Amazon team will be at PDC.  Be sure to check them out!

Sunday, September 21, 2008

Amazon as a CDN?

"The big wheel keeps on turnin... "

It makes sense.  If you're looking for a 'cloud' platform to develop on, Amazon arguably has one of the strongest and most feature-rich platforms to deliver on, so why not continue to round out those service offerings?  Check out "Expanding the Cloud" by Werner Vogels.

Latency, security, and tooling are significant constraints of a cloud based application infrastructure.  If latency is reduced by using a baked in CDN available from your 'cloud' partner, then that leaves security and tooling.

The security constraint is also more likely a mindset of "I don't physically own my data", or a lack of communication in clearly stating where responsibilities start and stop, than any technical hurdle.  However, the last time I looked, tools for each of the cloud platforms were coming along at a fast clip.  Though, in terms of cloud computing, if you've traditionally developed for a non-Microsoft platform, you've got an advantage at the current date.

More importantly, with Amazon setting the bar so high for cloud computing, I'm extremely excited to see what other vendors will be releasing to stay competitive!

Saturday, September 20, 2008

Generic Type Parameters

I ran into something interesting with the .NET framework this week regarding how to express a fully qualified type name for a generic type. 

In my specific scenario, I was looking to use the Enterprise Library Exception Handling Application Block to be able to apply some runtime policy on whether  a particular operation needed to be retried if a specific exception had been thrown across the service boundary. 

The specific exception I was looking to apply this “Retry Policy” against?  FaultException<ConsultantNotFoundFault>.   ConsultantNotFoundFault is a business processing exception that my client code might need to be able to react to gracefully.

Note that FaultException<ConsultantNotFoundFault> “is a” FaultException and “has a” ConsultantNotFoundFault.  When using the EL Exception Handling Block, if it were just a FaultException I was looking to act on, then I would declare this via configuration as:

<exceptionPolicies>
  <add name="Retry Message">
    <exceptionTypes>
      <add type="System.ServiceModel.FaultException, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
           postHandlingAction="None" name="FaultException" />
    </exceptionTypes>
  </add>
</exceptionPolicies>

This should be fairly comfortable syntax and allows me to apply policy if my client code catches a FaultException.  That doesn't help too much when I need to be able to react to a specific exception, though I'll explain in a minute why you might be able to get by with this in code.

A Tale of Two Brackets

However, that still left me wondering how to indicate that the type I'm looking to declare is the generic version of FaultException, FaultException<T>:

<exceptionPolicies>
  <add name="Retry Message">
    <exceptionTypes>
      <add type="System.ServiceModel.FaultException`1, System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
           postHandlingAction="None" name="FaultException" />
    </exceptionTypes>
  </add>
</exceptionPolicies>

Note the ‘mangled’ type name of FaultException`1, somewhat reminiscent of C++ name mangling.  This is almost there.  This allows my client to catch any type thrown across the service boundary that is a FaultException<>.  However, I'm still in the situation of not knowing what type of exception the FaultException<> contains.  Again, I can figure it out in code if I have a limited number of these exception cases to handle - but that's a code solution, not the configuration-declarative solution I'm looking for.

I had the answer in front of me, but I missed it.  It took a nudge from Fernando, an EL community member, to push me in the right direction.  The AssemblyQualifiedName was the value I was looking to declare in my configuration - but I missed that there were TWO square brackets (‘[[ ]]’) around the generic type parameter(s).

The double square bracket makes sense - now.  What we have is the generic type, FaultException`1, then a set of square brackets to contain all the generic type parameters, with each generic type parameter enclosed in its own square brackets.  This is described in the help documentation for the Type.AssemblyQualifiedName property.
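
Rather than hand-crafting that string, you can also just ask the runtime for it.  A minimal fragment (assuming the usual usings for System, System.Diagnostics and System.ServiceModel, and the same ConsultantNotFoundFault type from above):

// Let the runtime produce the exact assembly-qualified string -
// double square brackets and all - ready to paste into configuration.
Type faultType = typeof(FaultException<ConsultantNotFoundFault>);
Debug.WriteLine(faultType.AssemblyQualifiedName);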

Now that we have all the required syntax in order, we can get back to using the EL Exception Handling Block to declaratively describe which exceptions we want to apply our “Retry Message” policy to:

<exceptionPolicies>
  <add name="Retry Message">
    <exceptionTypes>
      <add type="System.ServiceModel.FaultException`1[[ELEaxPolicy.ConsultantNotFoundFault, ELEaxPolicy, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]], System.ServiceModel, Version=3.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
           postHandlingAction="None" name="ConsultantNotFoundFault" />
    </exceptionTypes>
  </add>
</exceptionPolicies>

This will allow me to centralize my exception processing and declaratively determine whether I want to retry that message/operation:

catch (Exception eax)
{
    bool continueProcessing = ExceptionPolicy.HandleException(eax, "Retry Message");

    if (!continueProcessing)
    {
        Debug.WriteLine("Retry Message");
    }
    else
    {
        // if we aren't going to handle the exception
        // let it propagate.
        //
        throw;
    }
}

What About Slinging Code?

I mentioned earlier that you can solve this problem directly in code.  If you only have a limited number of exceptions you need to react to, then you can consider doing a little manual runtime type determination to figure out if the Exception you just caught is of interest.

With a little helper magic (but no chickens) you can figure it out:

/// <summary>
/// Create a FaultException<> that contains the genericTypeParameter.
///
/// Straight from Anderson's brain!
///
/// </summary>
/// <param name="genericTypeParameter">The generic type parameter to use when constructing FaultException<>.</param>
/// <returns>The type representing the generic FaultException containing the generic type parameter.</returns>
static Type GetTypeAsFaultException(Type genericTypeParameter)
{
    // Create this for use later in creating the final type
    Type genericFault = typeof(FaultException<>);

    return genericFault.MakeGenericType(genericTypeParameter);
}

This allows us to still catch Exception and figure out if we need to do any special processing in our handler.  It's not as declarative as using the Enterprise Library Exception Handling Block, but it might do in a pinch.

catch (Exception eax)
{
    Type consultantNotFoundFault = GetTypeAsFaultException(typeof(ConsultantNotFoundFault));

    if (consultantNotFoundFault == eax.GetType())
    {
        // retry operation/message
        //
        Debug.WriteLine("Retry Message");
    }
}

You can also use similar logic if you find yourself needing to roll your own exception policy in the event you have some business reason that wouldn’t allow you to use the Exception Handling Block directly. 

Wednesday, September 10, 2008

DFWCSUG 09/10/2008 Meeting - WCF in .NET 3.5 SP1 and MSMQ with WCF

Join us at the DFW Connected Systems User Group meeting this Wednesday as Darshan Singh, a Senior Developer with Intuit, presents on the topics of WCF in .NET 3.5 SP1 and MSMQ with WCF.

Don't forget, FREE food, drinks and a prize drawing!

If you haven't already, please take a moment to Join DFWCSUG on CodeZone!

Sunday, August 24, 2008

Cloud Computing

Cloud computing in the enterprise is a favorite lunch topic of mine and a few friends.  It comes up occasionally and it is always entertaining (at least for me) to position ourselves on why it will, or will not, become in demand.  In this particular context, we are looking at provider-hosted cloud computing, meaning the 'cloud' is hosted outside the security boundary of the enterprise.[1]

I found[2] this article by Russell Jones, titled "Executives Avoiding Cloud Computing In Droves", that does an excellent job encapsulating several reasons why cloud computing will be a hard sell for the enterprise.  They are all valid reasons: security, data ownership and control.  Though there are probably a handful of mission critical applications already being consumed as a service (e.g. payroll).

I'm not so sure it's actually a 'hard sell' as much as it is that enterprises tend to move slower on adoption.  The big adopters of the cloud are going to be smaller customers: small businesses that don't have the resources to provide the necessary application infrastructure themselves.  An example might be a manufacturer who uses Quickbooks needing to integrate PO's with distributors, or distributors needing to manage PO's and inventory with retailers.  Amazon Simple Queue Service is a cloud-based implementation that allows integration between all parties without requiring 100% uptime from any actual participant.  That's the kind of platform that will enable smaller businesses to grow more readily to meet business demand, or requirements, without significant capital expense or operating costs.[3]  Need to exchange messages greater than 8K in size?  Then maybe Amazon's Simple Storage Service is a better fit, or more likely a combination of the two.

Of all the points in the article, the one about availability jumped out at me.  This one just seems more of a scare tactic than any of the other reasons.  Outages occur, even within the enterprise.  Even within big enterprises.  However, within an enterprise, you're more likely to have access to the people who know how to plan for, and manage, these events.  But they still happen.  A smaller company is much less likely to have these kinds of resources.  In this case, a hosted solution is likely a better option for a small, to medium sized, business.

The only other thing that jumped out at me was the citation that cloud computing was the last thing on a CIO's list.  I'd wholeheartedly agree with that statement.  However, I'm pretty sure that managing/reducing the cost of IT is not low on that list, and is likely very close to the top.  Opportunities for reducing those costs don't come around every day, and the cloud might just be one such opportunity.

No, the article for me anyway, still shows the cloud in an interesting, and exciting light.  Is that to say that everything belongs in the cloud?  Not today.  Maybe not tomorrow either.  Are there still kinks to be worked out?  You bet!  That's not to say that it won't be fun to figure out what exactly we can, and can't, successfully run in the cloud! 


[1] 'Cloud' computing may already be showing up in your enterprise today - your very own private cloud.  As an application owner, you may find that you are already outsourcing services such as storage, data access, security, etc., to teams that don't directly report to your cost center.  You're already relying on 'outside' resources to manage mission critical pieces of your application.  This application operation experience is not likely to be that much different, in theory, between hosting services locally vs. the cloud, except you've got an extension to call if things are not running right.  This is an oversimplification, but something to think about.

[2] Article showed up in Arnon's Shared Items.  Arnon's Shared Items is like a firehose of interesting, and industry related, topics.  There is some overlap with other existing feeds I subscribe to, but this is usually a good launch point over coffee, during a break, or whenever I've got free time.

[3] There are still going to be development costs, and these are not necessarily cheap.

Thursday, August 21, 2008

C# Extension Methods

ScottGu has a quick introduction to C# Extension Methods that can be found here.  Extension methods are a neat C# 3.0 language feature that allows you to add methods to an existing CLR type without having access to the code, or requiring a subclass.  Check out ScottGu's article for the good stuff.

I don't remember why I thought of the fact that the C# string class doesn't have a Reverse method (a.k.a. strrev), though you can probably think of [n] methods that other framework classes ought to have.  So writing an extension method for the C# string class was my original foray into using extension methods.

public static string Reverse(this string inputString)
{
    char[] response = new char[inputString.Length];
    char[] inputChars = inputString.ToCharArray();

    for (int i = (inputString.Length - 1), j = 0; i >= 0; i--, j++)
    {
        response[j] = inputChars[i];
    }

    return new string(response);
}

Which lets me do something very cool like:

string input = "Welcome to the Pink Palace!";

Debug.WriteLine("Input string : " + input);
Debug.WriteLine("Output string: " + input.Reverse());

The output of which is:

    Input string : Welcome to the Pink Palace!
    Output string: !ecalaP kniP eht ot emocleW

While that's pretty cool, it got me thinking a little more.  I recently needed to filter whether a piece of code should handle an exception or not.  While thinking about the best way to handle that, I thought I could apply an extension method to the Exception type and add the extension 'ShouldHandle'.  The result looks something like this:

public static bool ShouldHandle(this Exception eax, IEnumerable<string> exceptionList)
{
    bool shouldHandle = false;
    string exceptionType = eax.GetType().ToString();

    foreach (string type in exceptionList)
    {
        if (type == exceptionType)
        {
            shouldHandle = true;
            break;
        }
    }

    return shouldHandle;
}

Then in the client code I'm thinking of, I can do something along the lines of:

try
{
    throw new InvalidOperationException("test");
}
catch (Exception eax)
{
    if (eax.ShouldHandle(exceptionList))
    {
        // handle the exception
        //
    }
    else
    {
        throw;
    }
}

I think the jury is still out on whether this code is more understandable than that helper library everyone is packing.  You know the one I'm talking about: MyCompany.Utilities.ReverseString, and the like.  I kind of like extension methods from a readability standpoint.  Extension method usage is one of those things that may explode once it gets a little wider audience.

The complete code listing:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Diagnostics;

namespace Shobu.Extensions
{
    public static class MyExtensions
    {
        /// <summary>
        /// Returns the reversed string.
        /// </summary>
        /// <param name="inputString">The instance of the string.</param>
        /// <returns>The reversed string.</returns>
        public static string Reverse(this string inputString)
        {
            char[] response = new char[inputString.Length];
            char[] inputChars = inputString.ToCharArray();

            for (int i = (inputString.Length - 1), j = 0; i >= 0; i--, j++)
            {
                response[j] = inputChars[i];
            }

            return new string(response);
        }

        /// <summary>
        /// Determines whether or not the caller should handle the exception.
        /// </summary>
        /// <param name="eax">The exception to evaluate.</param>
        /// <param name="exceptionList">The list of exception type names to compare against.</param>
        /// <returns>True, the caller should handle the exception. False, the caller should allow the exception to bubble to another handler.</returns>
        public static bool ShouldHandle(this Exception eax, IEnumerable<string> exceptionList)
        {
            bool shouldHandle = false;
            string exceptionType = eax.GetType().ToString();

            foreach (string type in exceptionList)
            {
                if (type == exceptionType)
                {
                    shouldHandle = true;
                    break;
                }
            }

            return shouldHandle;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            string input = "Welcome to the Pink Palace!";

            Debug.WriteLine("Input string : " + input);
            Debug.WriteLine("Output string: " + input.Reverse());

            List<string> exceptionList = new List<string>
            {
                "System.Exception",
                "System.ArgumentException",
                "System.InvalidOperationException",
                "System.ServiceModel.CommunicationException"
            };

            Exception e1 = new Exception("test 1");
            Exception e2 = new ArgumentException("test 2");

            // since we have extended the base class Exception, this is still valid
            //
            InvalidOperationException e3 = new InvalidOperationException("test 3");

            Debug.WriteLine("should handle: " + e1.ShouldHandle(exceptionList));
            Debug.WriteLine("should handle: " + e2.ShouldHandle(exceptionList));
            Debug.WriteLine("should handle: " + e3.ShouldHandle(exceptionList));

            try
            {
                throw new InvalidOperationException("test");
            }
            catch (Exception eax)
            {
                if (eax.ShouldHandle(exceptionList))
                {
                    // handle the exception
                    //
                }
                else
                {
                    throw;
                }
            }
        }
    }
}

Saturday, August 16, 2008

Service Broker As A Queuing Technology

Thanks to Eric V., Jennifer X., Hari, Sanjay and Mark M. for helping me get jump started with this technology!

It started out simple enough.  My first bug (‘issue’, ‘OFI’, ‘work item’) on the new team.  Ah, did they snooker me but good.   “Super Cool Service (server broker version) possible data loss” is how the title read.  Fixing the possible data loss scenario was the easy part.  Having to learn anything about Service Broker?  Ah, well that’s where our story begins…

I’ve only spent a week with this technology, so I’m sure I’m missing the finer points.  In a nutshell, Service Broker is one option for transactional message queuing for clients that have access to Sql Server (no ‘disconnected clients’ – though this might work using Sql Server Express on the client).  Service Broker provides ordered, first in-first out (FIFO) delivery of messages, with no practical limit on message size.  Service Broker also supports remote transactional receives (so does MSMQ with Windows Server 2008 – but this is about Service Broker).

However, tooling support for Service Broker is lagging.  Without proper tools (designers, wizards and debuggers), Service Broker applications can become complex quickly.  There is no real concept of tracking, journaling, or auditing built into Service Broker.  That being said, if you're comfortable with Sql Profiler you can probably get pretty far.  Features like auditing, or journaling, of messages can then be added in typical Sql fashion.

One thing that I think needs to be clarified is Service Broker being a pub/sub architecture.  Unless you consider pub/sub to be the ability to publish to a single service and receive by a single service - then yeah, you've got a pub/sub architecture.  If you consider pub/sub the ability to publish one event to [n] subscribers, then you're going to need to look more at traditional MSMQ, or an actual broker like BizTalk.  The product may have ‘broker’ in the title, but I’m not quite sure what’s meant by that.

The closest I’ve come in the last week to getting pub/sub out of Service Broker is to use a stored procedure on the receiving queue to fan out a message to additional subscribing services.  I’ve not been able to get this to work in any way that feels natural.  That subscriber information has to be stored somewhere and again, no tooling.

One thing is for sure: Service Broker will shut down a queue after 5 failed attempts to receive a message.  That being the case, you’re going to need to check the status of your queues.

-- check status of queues
--
SELECT
    [name],
    is_activation_enabled,
    is_receive_enabled,
    is_enqueue_enabled,
    is_retention_enabled
FROM sys.service_queues WITH(NOLOCK)

Just like the referenced supporting links state, you’ll find yourself frequently referring to sys.transmission_queue, which is like the ‘outgoing’ queue concept in MSMQ.  Watch the transmission_status column.
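
A quick way to see what is stuck in that outgoing queue, and why - a minimal query against the same catalog view:

-- messages waiting to be transmitted, with the failure reason (if any)
--
SELECT to_service_name, enqueue_time, transmission_status
FROM sys.transmission_queue WITH(NOLOCK)
ORDER BY enqueue_time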

When things go wrong, and they will, you’ll need a way to quickly clean up all the messages that are in a half-baked state.  This snippet can be used for any queue.  This is simply the best snippet to have in your pocket while learning Service Broker.


-- clean transmission queue; can be used for any queue
--
DECLARE @conversationHandle UNIQUEIDENTIFIER
WHILE EXISTS (SELECT 1 FROM sys.transmission_queue)
BEGIN
    SET @conversationHandle = (SELECT TOP 1 conversation_handle FROM sys.transmission_queue)
    END CONVERSATION @conversationHandle WITH CLEANUP
END


Want to know who your broker instance is conversing with at any given time?  This snippet will show both conversation endpoints.


-- view all conversations and their states
--
SELECT * FROM sys.conversation_endpoints WITH(NOLOCK)


Want to know what the existing routes are for the current broker instance?

SELECT * FROM sys.routes WITH(NOLOCK)
WHERE broker_instance IN (SELECT [service_broker_guid] FROM sys.databases WITH(NOLOCK))


Want to know what your service broker instance identifier is?

SELECT [name], service_broker_guid FROM sys.databases WITH(NOLOCK)

Service Broker and Xml Data


This caused no end of consternation initially, so I thought I’d actually call it out here.  When creating a Service Broker message type, you have the option of specifying what validation, if any, to apply.  You need to keep in mind which option you chose, because it has effects on both senders and receivers.


If VALIDATION=WELL_FORMED_XML and a sender sends an invalid xml message using the NVARCHAR data type (which some DALs will do when best-guessing a data type), then the message will hang in sys.transmission_queue and never be delivered to the receiving queue.  There will be no error message in the transmission_status column either.


-- send an invalid xml message
--
DECLARE @dialog_handle UNIQUEIDENTIFIER,
        @Message NVARCHAR(MAX) ;

-- here is our invalid message
--
SET @Message = '<message text="this is a test/>';

BEGIN DIALOG @dialog_handle
    FROM SERVICE [SenderService]
    TO SERVICE 'ReceiverService'
    ON CONTRACT [CustomerContract] ;

SEND ON CONVERSATION @dialog_handle
    MESSAGE TYPE [Customer]
    (@Message) ;

END CONVERSATION @dialog_handle;

Changing the above snippet to use the XML data type prevents the scenario.  So, when dealing with xml data, use the xml data type and you will catch those invalid xml errors much earlier, since xml assigned to an xml data type must be well formed by default.  The other thing you can do is make sure your sending service (code, or sql) at least verifies that it's sending valid xml.
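
The fix is a one-line change.  Declaring the variable as XML makes the bad assignment fail immediately, instead of the message silently hanging in sys.transmission_queue:

-- same invalid message, but the assignment now raises a conversion
-- error right here, at send time
--
DECLARE @Message XML ;
SET @Message = '<message text="this is a test/>';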


Summary


Just to sum things up, Service Broker is an option when choosing a queuing technology on the Microsoft platform that will deliver on the following requirements:



  • Handles message sizes greater than 4MB
  • FIFO delivery
  • Remote, transactional receives
  • Easy to make highly available, due to the very nature of any production Sql environment

Service Broker has also been suspected, but not confirmed, of the following adverse conditions:



  • Hair loss
  • Headaches
  • Emotional despair

I started to title this post “The Little Queue That Almost Could”, then I thought that's just being mean and showing off my ignorance of what I am sure is a wonderful technology.  As I get the opportunity to understand more, I'll clear up any misconceptions here.


I am very interested in building BizTalk, err, pub/sub and tracking capabilities around a Service Broker infrastructure, so I’ll likely be revisiting this topic again in the future.  If you have any suggestions, questions, corrections, or comments, please reply to this thread!



References


I have several online references, as well as Sql Books Online, that I continually use during this learning adventure.  You can find these via http://delicious.com/zachbonham/ServiceBroker.

Monday, August 11, 2008

DFWCSUG 08/13/2008 Meeting - Workflow as a Service

Join us at the DFW Connected Systems User Group meeting this Wednesday as Anderson Imes presents on the topic of Workflow as a Service.

Don't forget, FREE food, drinks and a prize drawing!

If you haven't already, please take a moment to Join DFWCSUG on CodeZone!

Am I a Zune or an iPod?

Just give me my music and nobody gets hurt.

I own a Zune; it was given to me.  My wife owns a late-generation iPod Nano Video.  The Zune software is hands down better than the iPod's iTunes, which I consider just a hair's breadth away from being crapware, and the device UI is every bit as good as the iPod's (IMHO).  You can develop for it, which looks like it is just starting to take off.  My 30GB Zune is, however, one ugly brick - but it's first generation.

Would I buy a new Zune?  Probably, but if I did, I would purchase a smaller edition, maybe the 8GB - I am not a video watcher on that postage stamp, so it's just extra weight I'd be carrying around.  I can ditch that by slimming down and still hold all the essential music I might need.  That brings me on par with the 8GB iPod Nano Video.  Though having two media devices like this is not without compatibility problems around the 'media library'.

The SuperSite blog post AntiZune spurred this post; you can check out the Zune vs. iPod comparison there.

Monday, August 04, 2008

Wally Wallington - The Forgotten Technology

Has Wally really figured out how ancient wonders such as Stonehenge and the Great Pyramids were built?  It seems too good to be true, but after watching the video and reading his site, you can easily imagine ancient construction teams using the same methods demonstrated here.

If you had to choose between aliens or some forgotten technology, maybe Occam's Razor does kick in.  I know I was left wondering why I haven't seen Wally on the Discovery Channel, or the History Channel.

Of course, it all looks like black magic, or alien technology, if you only have a rudimentary idea (at best) of some of the forces at work.

Friday, August 01, 2008

The First Law Series, Joe Abercrombie

The Blade Itself (The First Law: Book One)

Before They are Hanged (The First Law: Book Two)

Last Argument Of Kings (The First Law: Book Three)

Joe Abercrombie

Pyr Publishing

This series came as a recommendation from my cousin.  He was right; these are a great read!  I plowed through books one and two in less than two weeks.  I am anticipating the release of book three in September.

A mark of a good author seems to be when each book in a series is better than the last.  While there aren't as many books in this series as, say, Steven Erikson's The Malazan Book of the Fallen, it has exceeded expectations already.

Highly recommended!

Sunday, July 27, 2008

Winds of Change

This is mostly a personal post, so if that bothers you dear reader, turn away, turn away now! 

Before we go too far, it must be said that I am grateful for the experiences I have had with Supply Chain.  My knowledge, experiences and skills, I owe in a large part to opportunities (there are no challenges, only opportunities) presented to me during my tenure in Supply Chain. 

I’ve recently transitioned from an architectural role to one focusing primarily on development.  This was a completely voluntary transition, as the goals of a supply chain architect and my personal happiness have been conflicting for some time.  The desire to write code, every day, did not leave me much of an option with Supply Chain.  Fortunately, the opportunity to do something about it came up this year, with a minimal amount of impact to my family’s lifestyle.

The opportunity I mention was a developer opening on another team here at Mary Kay: the Sales and Marketing CRM team.  SAM-CRM is on the custom development side of the house (vs. packaged applications).  My role is developer with the group responsible for delivering the application platform and tools.

What was the impact to my family or career?

Honestly, the impact to my family should be non-existent except that I should be happier around the dinner table!  I still get up and go to work at the same company, the benefits are still the same, heck, even my office number will remain the same.  The possibility of travel will likely remain about the same, but probably fewer industry conferences and less vendor interaction. 

The impact to my career will remain to be seen.  There are two immediate changes though: the loss of a title  and the loss of an office.

The loss of the office is the easiest to handle.  My office was typical in that it was a dark hole, or would have been if not for the harsh fluorescent lighting, where I stored stuff.  It wasn’t even a good place to handle conference calls because either my voice was too loud, or the walls too thin, so I managed to irritate everyone around me during them.  Of course, no one said anything until I was moving, so I can’t do anything about that now!  Sorry guys! 

My new workspace is completely different from the old.  On the outside edge of a sea of cubes, I face blue sky, mildly obstructed by building elements, and I at least have natural lighting – I can feel brain cells regenerating just passing through the area.

The loss of the title was the hardest for me to accept – though only for the space of the conversation about it.  I’m not big on titles, but this one was mine.  Roles are a completely different matter – I absolutely believe in roles on a team, but not titles.  So, the loss of the title seemed somewhat arbitrary, but one I can live with.  Especially since I never got business cards with ‘architect’ on them! :)

I will certainly miss my old team but am extremely excited about my new team and the opportunities they will bring!  

Saturday, June 28, 2008

Microsoft PDC 2008 - Tips

The Microsoft Professional Developers Conference is a great event for lead developers, architects and technology managers, or really anyone influencing strategy. Read more about why YOU want to be there and get registered today!

It's unconfirmed whether I'll be making it to PDC this year, but that minor detail notwithstanding, here is my list of tips for those that do!

  • PDC sells out like no other Microsoft event. I'd be very careful about waiting until August to register for the event, as it's in October. The early bird discount ($200) runs out August 15th.
  • When you register, be sure to register for a pre-conference event. Just like PDC, these too can sell out. If you wait too long, your first choice might not be available.
  • Taxis are cheaper than renting a car - unless you know the area and REALLY get around after hours. Roughly $60 to get from LAX to Universal Studios.
  • The Sheraton and Hilton are walking distance (5 minutes) to the attendee party at Universal Studios. They probably aren't as swanky as some of the other hotels though.
  • Don't miss the keynote. You can get a lot out of the keynote about what the hot sessions/technologies will be. We already know that 'Oslo' is going to be hot - the keynote should bear that out and help you focus on what pieces of 'Oslo' you need. Anything touching the cloud will likely be hot, too.
  • Go early for breakfast. Take the first bus every morning. Breakfast is a great time to interact with other attendees. Oddly enough, attendees that will be up until 2AM every morning will be at their chattiest over breakfast. If you wait until the evening social events, the same attendees will be soaking in a beer and more likely to talk about their favorite 'futbol'/football team than anything else.
  • Take advantage of MS-provided session planning tools. Always have a backup session picked out in case your first pick turns out to be a dud. If you wait much more than 15 minutes to make your go/no-go decision, then you might as well stay put, unless your alternate is right next door. The convention center is a BIG place.
  • Hold off on buying anything from the store until near the end of the week. The prices start dropping fast by Thursday.
  • Don't pack heavy. You'll get plenty of crap to bring home. I swear some guys actually don't pack and just wear whatever is passed out as swag.
  • Take a laptop - but don't break it out in session. Now, this might just be me, but I think laptops should actually be outlawed during sessions and cell phone suppression technology put in place. There are a lot of attendees, very little room and very limited power outlets. That means you'll be rubbing elbows with the guy next to you; having a laptop out isn't going to win you any popularity points. That power cord you've got running through the aisle? I'll trip over it every time. I have a hard enough time just circumnavigating all the laptop bags of the attendees (it fits nicely under the seat, BTW). Kudos for thinking you'll be able to keep up with the presenter during demos! They are prepared, and you're not... and you'll irritate the crap out of everyone around you with your 'clakity clack clack' typing while trying. That LCD display is distracting, too. Need to check email? There are banks of machines around the facility. Ok... just a pet peeve and I'm done. You'll want your laptop to noodle with all the information you've received, either during breaks, or in the evening.
  • Take a pad and pencil. It's quieter than your OneNote note-taking.
  • Have fun. Its a great experience!

Thursday, June 26, 2008

DFWCSUG 07/09/2008 Meeting - Windows Workflow Foundation Policy and Rules

The July meeting summary has been posted on the DFW Connected Systems User Group site!  Mark Dunn, an INETA speaker, will be presenting on WF policy and rules.  Be sure to RSVP so that we have an accurate head count for food and drinks.  See you there!


On another note, one of my new responsibilities, other than prize drawings, is to update the site - so please bear with me if you see anything out of place! 

If you have a topic you're interested in either presenting on, or learning about, please drop me a line.  It's possible that I'll be hitting YOU up, so you might as well be preemptive by letting me know what you're interested in!

Friday, June 20, 2008

Back to Basics - Object Oriented Design Principles

I had a brief discussion yesterday which really drove home how far removed I have been from what I would consider core knowledge for a developer.  I'm not saying that I had my 'Developer Card' pulled completely, but if it was based solely on the merits of my ability to verbalize those core concepts, it would have been a close call.

I'm not talking about defining Encapsulation/Abstraction, Inheritance or Polymorphism.  I believe these concepts are abstract enough that I can provide a definition, and example, to demonstrate.  No, I'm talking about implementation details, specifically with C#, and some of the nuances between keywords such as 'abstract', 'virtual', 'override' and 'new'.  These are the concepts that I realized were no longer in my immediate 'cache' and had long ago been swapped to disk, and that disk was sorely fragmented!

As part of a refresher course, I'll walk through some of these core skills and techniques and create some examples.  I'll post them up here as I work my way through them mainly because once I've written something down, my retention period is much longer.

Encapsulation/Abstraction

When I think of encapsulation, I tend to think of hiding implementation details behind some sort of contract.  C# examples of this type of contract could be in the form of a method signature (delegates), an interface, or an abstract class.  An implementation of this contract should hide the details of the 'dirty work'.  This type of encapsulation is usually implemented using some sort of inheritance, which allows us to treat collections of different types the same (polymorphism).

Inheritance

When thinking of inheritance, I usually think of two different types: interface and implementation.  Interface inheritance is when I have defined a contract (via an interface) - I'm not making any guarantees about implementation; that's left up to the interface implementer.  There is also implementation inheritance, using a base class or an abstract class.  When I derive a class from an existing base class, my new class will inherit any public properties/methods of the base class.  My derived class may also have the opportunity to extend the base class in some way, if any base methods are marked as virtual, by using override.  When leveraging interface inheritance, I'm defining a way to plug in different implementations at runtime.

Polymorphism

Inheritance usually has overlap with polymorphism.  If I have declared an interface, or base class, which defines some method/property, I often want to treat a collection of these instances as one.  Polymorphism is the language feature that allows this.


Anyway, I sat down last night and wrote some code to help brush up on the implementation of some of these concepts.  The code is what I remember of what I call the 'classic shapes' example.  An early mentor used the 'shapes' example to demonstrate many of these concepts back in the C++ days, so it's still one of my favorites.  Others have been the automobile (transport), animal, etc... but shapes is still my favorite because way back then it carried over into a hobby of mine: video programming.
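
A condensed sketch of the idea - this is my own reconstruction for the post, not the downloadable sample itself - showing 'abstract', 'virtual' and 'override' at work:

using System;
using System.Collections.Generic;

abstract class Shape
{
    // 'abstract': every concrete shape must supply its own Area.
    public abstract double Area();

    // 'virtual': shapes may override Describe, but don't have to.
    public virtual string Describe()
    {
        return GetType().Name + ", area = " + Area();
    }
}

class Circle : Shape
{
    public double Radius { get; set; }

    public override double Area()
    {
        return Math.PI * Radius * Radius;
    }
}

class Square : Shape
{
    public double Side { get; set; }

    public override double Area()
    {
        return Side * Side;
    }
}

class ShapesDemo
{
    static void Main()
    {
        // Polymorphism: a collection of different concrete types,
        // treated uniformly through the base class.
        List<Shape> shapes = new List<Shape>
        {
            new Circle { Radius = 1.0 },
            new Square { Side = 2.0 }
        };

        foreach (Shape shape in shapes)
        {
            Console.WriteLine(shape.Describe());
        }
    }
}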


Download the source code from here: BacktoBasics-ObjectOrientedDesignPrinciples.zip.

Any questions, comments, or concerns, please let me know.

Sunday, June 08, 2008

Mary Kay at the Dallas Connected Systems Group

A couple of us from Mary Kay will be 'presenting' at the DFW Connected Systems Group held at the Microsoft, Las Colinas facility, on Wednesday (6/11). You can read more about it here.

I use the term 'presenting' a little loosely - I anticipate more of an overview of the last few years of using BizTalk at Mary Kay (the good, the bad, and the ugly), with plenty of time for Q&A.

See you there!

Thursday, June 05, 2008

New Technologies From TechEd this Week

[disclaimer: I've not looked at any of these technologies, only read about them, but they are on my todo list!]

TechEd always seems to be the place to get up to speed on existing technologies, while PDC is the place to see new technologies.  However, this week during TechEd Developer Conference there were at least two new technologies worth checking out:

.NET StockTrader 2.0 has finally been released.  One of the (many) interesting things about this is its Configuration Service.  The Configuration Service is a capability that can be reused with your .NET applications today.  It provides some really great features like centralized configuration management and distributed cache.  During earlier announcements surrounding Oslo, it was mentioned that the StockTrader Configuration Service might make its way into the Oslo stack.

Project "Velocity" looks to be a project focused on providing a distributed, in-memory, cache with possible support for both managed, and unmanaged, code. 

How does "Velocity" contrast with StockTrader's Configuration Service?  Are they competing technologies?  Is the caching goodness of "Velocity" being married with Configuration Services "centralized configuration management" what is being baked into Oslo

While Configuration and Caching are two separate capabilities, developers often cache configuration and there are some awesome synergy possibilities there!

No, I didn't make it to TechEd this year - I am holding out for a return to PDC 2008!

Wednesday, May 21, 2008

The Netflix Player - by Roku

The Netflix Player - by Roku is just the sort of device I've been waiting for, and it's capable of streaming HD content - wirelessly.  Luckily, I've got an Ethernet drop behind the TV!  It would have been nice if they had spent a little more time on the form factor to make it look less like a black brick.

Of course, the dream was to have Netflix stream directly to my XBox 360.  (WARNING: Complete hypothesis alert!)  I mean, Netflix already has what appears to be a customized version of Windows Media Player that they require you to use inside of Internet Explorer.  What's so hard about delivering that on the XBox 360?

All that being said, I think I know what I want for this year's Father's Day!

Tuesday, May 20, 2008

Travelin man ... travelin man, yes I am.

Finally turned in my expense reports for what seems like the last month on the road! 

First week on the road, I found myself in Monterrey, Mexico.  Lots of watching the business team in action, working through process flows, etc.  A few brief moments in the sun when it was my turn on deck for vendor-related discussions.  The business team really had it all sewn up, so I was mainly there for window dressing.  We say "potato", they say "patata" - it's all good.

Second week on the road, I found myself in Redmond, Washington.  Beautiful place, really, just so far away from everything - like Dallas!  A very intense week with the Connected Systems guys/gals, who were great hosts!  The unusual aspect (besides being invitation-only) was that I was traveling with a co-worker from another silo.  GASP!  I know what you're thinking... how could "they" possibly risk the cross-pollination contamination of a free exchange of ideas across those business silos?  That goes against the very grain of corporate America!?  Still not sure how we managed to get that past the powers that be.

Met some great people and had some great discussions! 

Then it was back to Dallas briefly.  Like the weekend.

Third week, found me in an overnighter to Atlanta, Georgia, for another vendor. 

No lost luggage, no really bad flights, some crazy cabbies, but no bad food and some really good notion motion!  Back to work!

Sunday, April 20, 2008

Monterrey, Mexico

As part of the overall strategy for (cosmetic) world domination, I'll be visiting Mary Kay Mexico to meet with vendors regarding interfaces with our Supply Chain ERP application and to get some overdue business process exposure!  This is the beginning of what should be less than a 12-month deployment to Mexico for Mary Kay Supply Chain.

Viva la Mexico!

Mary Kay Mexico

Dallas Morning News Story on Mary Kay, Mexico

Friday, April 11, 2008

Mary Kay Arbor Day

One of the great things about a company like Mary Kay is how much it gives back to the local communities, as well as other organizations.  Check out one of the many ways that Mary Kay gives back.

Thanks, Richard!

Mary Kay Arbor Day Video

Friday, April 04, 2008

Comparison: Polling vs. Eventing

Polling vs. eventing comes up every so often, and I believe that, as a general rule, if you can choose events over polling, you're generally better off.  I think most people agree with this, yet we still have a number of polling scenarios across our environments.  Many were conscious decisions; many were the result of 'legacy' applications.

As a mental exercise, I set out to do the math.

Specifically, consider a web service resource that has been designed to be polled - something akin to a queue.  Our hypothetical web service will have a single operation, GetMessage.  Each time we call it, if there is a message waiting to be processed, a message gets returned.  If no message is available for processing, effectively a NullMessage is returned.

Let's forget about how this scales for the moment and just get an idea of the number of operations required to process 100 messages in both a polling and an eventing scenario.

For the example, let's assume that I'm calling the GetMessage operation once per minute as my 'poll':

  • 1 GetMessage requests/minute
  • 60 GetMessage requests/hour
  • 1440 GetMessage requests/day

Each invocation of GetMessage, regardless of whether or not data is available, is processed down through the application stack: web service, business object and data tier.  This puts unnecessary load on the resources involved - CPU, memory, disk and network - across multiple systems.  That can be a pretty expensive operation, given that it might not even produce any actual data.

Now suppose GetMessage supports multiple clients, say with an operation GetMessage(clientId) to retrieve any messages for a specific client.  That same IO overhead is now multiplied by every client that's calling GetMessage.

Assuming three (3) clients, all polling GetMessage(clientId) with the same frequency, our requests now look something like this:

  • 3 GetMessage requests/minute
  • 180 GetMessage requests/hour
  • 4320 GetMessage requests/day

Start imagining if there are more clients, or if they are polling more frequently.  We may find ourselves having to put hardware in place mainly to support operations that don't produce any business value.

With an eventing scenario, to process the same 100 messages, I have 100 operations. This should give better overall resource utilization.
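
To put numbers to both models in one place - a hedged sketch; the two helper methods are just the arithmetic from above, not a real API:

using System;

class PollingVsEventing
{
    // Polling cost is driven by the clock and the client count,
    // regardless of how many messages actually exist.
    static int PollingRequestsPerDay(int clients, int pollsPerMinute)
    {
        return clients * pollsPerMinute * 60 * 24;
    }

    // Eventing cost is driven only by the messages delivered.
    static int EventingOperations(int messagesProcessed)
    {
        return messagesProcessed;
    }

    static void Main()
    {
        Console.WriteLine(PollingRequestsPerDay(3, 1));  // 4320 requests/day
        Console.WriteLine(EventingOperations(100));      // 100 operations
    }
}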

With anything, there are tradeoffs that have to be understood and made.

Wednesday, April 02, 2008

On Designing and Deploying Internet-Scale Services

I came across On Designing and Deploying Internet-Scale Services via Arnon's Shared Items.

I am always interested in hearing what some of the challenges are for some of the big 'guys' out there and how they were met. Microsoft, Google, Amazon, eBay...how do they design not only applications, but data architectures and infrastructure, to support the user loads that they have placed on them.

This is a decent summary of things to think about, but given that it weighs in at only 12 pages, there's far more to the topic than it can cover. Of course, no one person is responsible for the depth of this knowledge, but team(s) of such people. Still, I'd pay good money to spend 2 weeks with the guys and gals who build these systems just to see how it's done!

Hmmm....corporate internships? Now that's an idea! Hey, Amazon! I want to come spend my summer working alongside your team! Hear that, Microsoft? Google?? [echoes through virtual space]. In return, I'll...well, I guess I'd do just about anything. I can imagine being about as excited as my kids would be to go to Disney World!

Do they have 'training' camps for small to medium enterprises, where they can send resources to apprentice, if you will, at larger enterprises (or with notable industry 'experts') for a time? You get some of this from industry events, but that's merely scratching the surface compared to living and breathing it.

Thursday, March 27, 2008

On the Road to SOA: Service Repository and Service Locator

The Service Locator is one of the core pieces of an integration fabric that I feel needs to be available when building a common services-based architecture. It is a shared service, preferably enterprise-wide, that acts as the runtime component for the Service Repository. The root of the problem is that managing physical endpoints is something of a burden.

This is missing on Windows and is something that Oslo may eventually support. A baked-in solution is needed as a standard for the Windows platform, whether it's a platform service in Windows or a server-based service like in IIS.

I'm still researching options for Service Repository and Service Locator capabilities in our environment. However, instead of just complaining that I don't have something readily available, I decided it was time to roll up the sleeves. I also don't relish shelling out dollars to vendors when I'm not completely convinced these capabilities would be used, at least across business silos, even if they were already available in the enterprise.

I used David Pallmann's blog entry Design Patterns in WCF: Service Locator as the launch point for the PoC. I think it may have originally been published on NetfxExperts.com, but it's currently unavailable.

Some basic requirements I have are:

Service Repository

  • Ability to store metadata about a service
  • Centrally located – a well-known location
  • Ability to query
  • Needs a UI

Service Locator

  • Needs to query the Service Repository
  • Minimal client dependencies
  • Needs to be abstracted away from the actual repository implementation (see the interface sketch below)
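
Something like this hypothetical interface is what I mean by abstracting the repository away - the names here are mine for illustration, not Pallmann's:

using System.Collections.Generic;

// Hypothetical abstraction: the locator programs against this interface,
// not against SharePoint or the Xml file directly, so repository
// implementations can be swapped out.
public interface IServiceRepository
{
    // Find the registered entry for a service, narrowed by context
    // values like Environment or Organization.
    ServiceEntry Find(string serviceName, IDictionary<string, string> context);
}

// A subset of the metadata captured per service (see the field list below).
public class ServiceEntry
{
    public string ServiceName;
    public string Environment;
    public string Address;    // the physical endpoint handed back to clients
    public string Binding;
    public string Contract;
}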

For the Service Repository, I chose SharePoint Server 2007 and a basic list structure - though I also used the XML file-based repository that comes with Pallmann's sample.

I have a love/hate relationship with SharePoint, but several of our biggest value-adds back to the business have been delivered on top of SharePoint. For my initial repository, it met all of my requirements right out of the gate, with no work on my part, and I even gained several features that I hadn’t thought much about at the time, including:

  • Security – use AD infrastructure to lock down service metadata changes. This is important for change management.
  • Auditing – get auditing capabilities built into SharePoint through versioning.
  • Workflow – get some rudimentary workflow for free that enables change approval (submissions, modifications) through SharePoint’s support for document approval.
  • Notification – get alerts for free through email on item changes, including RSS subscription support.
  • Views – really nice to have. Create custom views, again for free, on services that are important to you – maybe by organization, or environment.

The key pieces of information I’m looking to capture (taken directly from Pallmann) are:

  • Service Name – friendly service name.
  • Organization – company name, like Mary Kay, but could also be an external partner like DHL.
  • Environment – development, staging, production; we actually have environments that support specific production implementations (almost a code line) - e.g. IDEV3, ICONFIG3 and PRODR3.
  • Zone – intranet, internet (LAN, WAN, Internet).
  • Security - what is the security supported by the service.
  • Address – what is the physical end point of the service.
  • Binding – what is the binding supported by the service.
  • Contract – what is the contract implemented by the service.

After building the proof of concept, I thought of additional fields that might be important to include:

  • Region – an overloaded term in our environment; it doesn’t map 1:1 between physical (data center) and business entities (geographic region/subsidiary). For our purposes, it’s likely to mean the physical data center that hosts the service.
  • Department – cost center within an organization.
  • Contact - initially assuming that the entry creator is the contact person, since SharePoint captures this automatically.

The 'hardest' part about using SharePoint was getting a grip on CAML.  I still don't have a grip on it, but this post on Querying Data with the CAML Query Builder V2 meant I didn't have to slow down for too long.
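
For the curious, the lookup against the SharePoint list boils down to something like the sketch below - the site URL, list name and field names are from my PoC, so treat them as placeholders:

using System;
using Microsoft.SharePoint;

class ServiceRepositoryQuery
{
    // Look up the endpoint address for a named service in the 'Services'
    // list, narrowed by environment, using a CAML query.
    static string FindAddress(string serviceName, string environment)
    {
        using (SPSite site = new SPSite("http://moss/sites/servicerepository")) // placeholder URL
        using (SPWeb web = site.OpenWeb())
        {
            SPList list = web.Lists["Services"];

            SPQuery query = new SPQuery();
            query.Query =
                "<Where><And>" +
                "<Eq><FieldRef Name='Title'/><Value Type='Text'>" + serviceName + "</Value></Eq>" +
                "<Eq><FieldRef Name='Environment'/><Value Type='Text'>" + environment + "</Value></Eq>" +
                "</And></Where>";

            SPListItemCollection items = list.GetItems(query);
            if (items.Count > 0)
            {
                return (string)items[0]["Address"]; // the endpoint we hand back
            }
            return null;
        }
    }
}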

The current SharePoint implementation looks something like this:

[screenshot of the SharePoint service repository list]

You can view the UI for adding a new service here.

You can view the UI for viewing an existing service here.

The Service Locator is currently a C# client library, but it could just as easily be encapsulated behind a service to avoid deploying bits to a developer desktop - likely one of my next steps.  I did start down the road of adding a custom ConfigurationSection that supports service configuration like so:

<services>
  <service name="Demo Service">
    <context property="Environment" value="Development"/>
    <!-- can get more specific by adding additional context -->
    <context property="Organization" value="Mary Kay"/>
    <context property="Contract" value="SupplyChain.IDemo"/>
  </service>
</services>
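
For reference, the backing classes for that section look roughly like this - a minimal System.Configuration sketch, assuming the section gets registered in <configSections>; the class names are mine:

using System.Configuration;

// Hypothetical ConfigurationSection backing the <services> block above.
public class ServicesSection : ConfigurationSection
{
    [ConfigurationProperty("", IsDefaultCollection = true)]
    [ConfigurationCollection(typeof(ServiceElement), AddItemName = "service")]
    public ServiceElementCollection Services
    {
        get { return (ServiceElementCollection)base[""]; }
    }
}

public class ServiceElementCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new ServiceElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((ServiceElement)element).Name;
    }
}

public class ServiceElement : ConfigurationElement
{
    [ConfigurationProperty("name", IsRequired = true, IsKey = true)]
    public string Name
    {
        get { return (string)base["name"]; }
    }

    // the nested <context> elements carry the resolution criteria
    [ConfigurationProperty("", IsDefaultCollection = true)]
    [ConfigurationCollection(typeof(ContextElement), AddItemName = "context")]
    public ContextElementCollection Context
    {
        get { return (ContextElementCollection)base[""]; }
    }
}

public class ContextElementCollection : ConfigurationElementCollection
{
    protected override ConfigurationElement CreateNewElement()
    {
        return new ContextElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((ContextElement)element).Property;
    }
}

public class ContextElement : ConfigurationElement
{
    [ConfigurationProperty("property", IsRequired = true, IsKey = true)]
    public string Property
    {
        get { return (string)base["property"]; }
    }

    [ConfigurationProperty("value", IsRequired = true)]
    public string Value
    {
        get { return (string)base["value"]; }
    }
}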

When the client is promoted between environments, it's easier to update relatively static metadata values - like which environment the service is running in - than to maintain a list of URLs or addresses.  Ultimately, I'd like to just have a reference to a centrally located configuration store and have the service auto-configure from that.


The client-side access looks something like this:

ServiceLocator locator = new ServiceLocator();
DemoService proxy = new DemoService();

// resolve the endpoint location based
// on the configured context
//
proxy.Url = locator.Resolve("Demo Service");

I arbitrarily chose the Spring.NET Framework as an Inversion of Control (IoC) container - an area I've been noodling around in, since we're missing the capability in our core development library. I reviewed a couple of containers (without actually installing them) before deciding on Spring.NET, and I may revisit this post-PoC to see which offerings are actually pluggable with Enterprise Library.  The container made it easy to swap out the connection to the repository (Xml or SharePoint) through configuration.
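
The wiring looks roughly like this - the object id and type names are placeholders from my PoC, and IServiceRepository is the repository abstraction sketched earlier:

using Spring.Context;
using Spring.Context.Support;

// The repository implementation is chosen in App.config, e.g.:
//
//   <objects xmlns="http://www.springframework.net">
//     <object id="serviceRepository"
//             type="MaryKay.Services.SharePointServiceRepository, MaryKay.Services"/>
//   </objects>
//
// Swapping the SharePoint repository for the Xml one is a one-line config change.

class LocatorBootstrap
{
    static IServiceRepository GetRepository()
    {
        // pull the configured implementation out of the container
        IApplicationContext context = ContextRegistry.GetContext();
        return (IServiceRepository)context.GetObject("serviceRepository");
    }
}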


Service Locator for Enterprise Library didn’t have any documentation, and by the time I found it I had run out of time.


Does it support UDDI?  No.  Unfortunately, I've had a hard time wrapping my head around UDDI - I installed the Microsoft SDK and enabled the UDDI Services on my development server, but it just seemed overly complex (and confusing) for the basic service support I was looking for.  We have big ERP and WMS implementations that are still immature in their service offerings, but eventually standardizing on something like UDDI may prove beneficial long term.

Anyway, this was a quick way to add a Service Repository and Service Locator to my team's capabilities.  I'll post a revised update after we've had some burn-in - I'm sure there will be many enhancements and changes along the way.