Wednesday, April 18, 2007

ART Testing

Wow...it's hard to believe that I've been sequestered away in a small room with our Application Readiness Team (ART) for almost 8 weeks now!

I am currently responsible for the go-live readiness of our J.D. Edwards global implementation of standardized business processes ("Project Starburst"). In a nutshell, our business and technology teams have defined core business processes that will be executed in 4 regional data centers. We've significantly customized J.D. Edwards, DSI (RF scanning) and Integration. I need to know how these processes will execute so I can identify areas that need tuning (code), determine whether scale up/out is required, and present the results to my management team.

I hear you 'agilists' out there cringing...why wait until the middle of the implementation to discover the answers to these questions? That, my friends, is another story...

For now, this is just a follow-up to a previous post, Mary Kay Labs, and don't forget to check out Mary Kay ART Strategy.

Somehow I continue to get sucked into engagements like this...at Mary Kay, this is my 4th such engagement for Supply Chain. Hmm....again, another story...

To catch you up...

We have built a complete testing environment at Mary Kay to support activities such as this; the result of the hard work of several people (thanks, girls and guys!). This includes enough hardware to deploy our applications at production scale. I won't bore you with the hardware specifications, but I can if you like! :) Just drop me a line...

We have 37 scripts to date, emulating ~50 (virtual) users across 5 functional areas for everything but integration. That's a special beast unto itself...fortunately, we've had a lot of experience in this area!

Each script has had a specific data set staged for its execution. We have created enough data to support these 'day in the life of' activities. Each script was used to stage data for downstream activities, so the data was created using the application, not any 'tricks' (although we have a few of those).

Each script is written to stand alone and also to be scaled up as necessary. For example, I can have a single virtual user processing 100 'iterations' of Inbound Receive Case, or two users processing 50 'iterations', or 100 simultaneous users processing 1 iteration of Inbound Receive Case, all without changing the script or the data set.
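To give a flavor of what I mean, here's a bare-bones sketch in Python (not our actual tool scripts; run_inbound_receive_case() is just a stand-in for whatever the real script drives). The point is that the user/iteration split is nothing more than a parameter:

    import threading

    def run_inbound_receive_case(user_id, iteration):
        # Placeholder for a single 'Inbound Receive Case' transaction;
        # the real script drives the application here.
        pass

    def virtual_user(user_id, iterations):
        for i in range(iterations):
            run_inbound_receive_case(user_id, i)

    def run_scenario(total_iterations, virtual_users):
        # Same script, same data set: only the user/iteration split changes.
        per_user = total_iterations // virtual_users
        threads = [threading.Thread(target=virtual_user, args=(u, per_user))
                   for u in range(virtual_users)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    # 1 user x 100 iterations, 2 x 50, or 100 x 1:
    run_scenario(100, 1)
    run_scenario(100, 2)
    run_scenario(100, 100)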

This attention to detail when creating scripts is not required if you have very fixed requirements. However, if you want to be able to scale up as necessary (e.g. scale up the number of users across each functional area until we find a breaking point in the application or infrastructure), it pays off in script re-usability.
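The 'ramp up until something breaks' flavor looks roughly like this (again just a sketch, reusing run_scenario() from above; the 10-second threshold is made up purely for illustration):

    import time

    def ramp_until_breaking_point(start_users=5, step=5, max_users=200,
                                  max_seconds=10.0):
        # Step the user count up until response time degrades past a threshold.
        users = start_users
        while users <= max_users:
            started = time.time()
            run_scenario(total_iterations=users, virtual_users=users)
            elapsed = time.time() - started
            print("%d users -> %.1f seconds" % (users, elapsed))
            if elapsed > max_seconds:
                print("Breaking point found around %d users" % users)
                break
            users += step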

Also, we have a number of administrative and configuration-based scripts that aren't actually used during the load test. For example, we are running with a copy of our staging environment's data. Our staging environment was configured by humans, and not all of our finished good products have the same consistent configurations (e.g. unit of measure conversions, standard case sizes, expiration dates, etc.). When testing, you're always better off with known data than unknown. So, we have a script that will go out and configure a particular part type (component, finished good) a particular way for consistency (again, using the application, NOT direct manipulation). This is an asset from this engagement that can be handed back to the business team to assist with their daily tasks.
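The shape of that kind of cleanup script is roughly this (a hypothetical sketch; the field names and set_item_configuration() are made up to illustrate the idea, since the real work happens through the application, not the database):

    STANDARD_CONFIG = {
        # Illustrative values only; the real configuration lives in the application.
        "finished_good": {"uom_conversion": "EA/CS", "case_size": 12, "shelf_life_days": 1095},
        "component":     {"uom_conversion": "EA/EA", "case_size": 1,  "shelf_life_days": None},
    }

    def set_item_configuration(item_number, config):
        # Placeholder: apply the configuration through the application itself,
        # never by poking values straight into the tables.
        pass

    def normalize_items(item_numbers, part_type):
        config = STANDARD_CONFIG[part_type]
        for item_number in item_numbers:
            set_item_configuration(item_number, config)

    normalize_items(["FG-1001", "FG-1002"], "finished_good")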

Oh, there's lots more where this came from, but my brain has sufficiently cycled down for the night...see ya!
