Saturday, May 30, 2009

Definition of Done

“If you don't know where you are going, you will wind up somewhere else.” – Yogi Berra

When asked how a project is going, most programmers will offer one of two discrete responses: either “I just started looking at the code” or “I’m done, I just need to clean up a few things.” Upon further investigation, I have found that done means the hard part has been figured out, and there is usually an IDE output window or a funky test web page running on localhost that can demonstrate this status.

The problem is that the hard part is really the fun part, and the actual hard part is the “…I just need to clean up a few things.” So, to remind me and the rest of the team what done means, we have the following definition posted prominently on the wall.

1) Unit Tested

This doesn’t need much of an explanation, but having a formal definition posted on the wall is a good reminder for a team new to unit testing.

Unit testing by itself is important, but the real boost comes from using a CI server. We use TeamCity for our .NET projects and phpUnderControl for our PHP projects.
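
To make the bar concrete, here is a hypothetical NUnit-style example of the granularity we aim for; Order and LineItem are invented for illustration:

using NUnit.Framework;

[TestFixture]
public class OrderTotalTests
{
    // Order and LineItem are hypothetical domain classes, used here only
    // to show the kind of test the CI server runs on every commit.
    [Test]
    public void Total_includes_tax_for_california_orders()
    {
        var order = new Order("CA");
        order.AddItem(new LineItem(100m));

        Assert.AreEqual(108.25m, order.Total); // assumes an 8.25% CA rate
    }
}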

2) Acceptance Tested

For our team, acceptance testing means that we deploy our new code to a demo server and write Selenium tests for it. We export the Selenium tests as PHPUnit fixtures that CruiseControl will run whenever our svn repository is updated. Before the Selenium tests run, we need to update our demo server, so we have CruiseControl call http://ourdemoserver.com/svn-update.php first.
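
Our exported fixtures are PHP, but the same test reads almost identically through the Selenium RC .NET client. A hypothetical sketch, with invented page locators:

using NUnit.Framework;
using Selenium;

[TestFixture]
public class SignupAcceptanceTest
{
    private ISelenium selenium;

    [SetUp]
    public void OpenBrowser()
    {
        // assumes a Selenium RC server running on localhost:4444
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                       "http://ourdemoserver.com");
        selenium.Start();
    }

    [TearDown]
    public void CloseBrowser()
    {
        selenium.Stop();
    }

    [Test]
    public void New_user_can_sign_up()
    {
        selenium.Open("/signup");
        selenium.Type("username", "fred");  // locators are invented
        selenium.Click("submit");
        selenium.WaitForPageToLoad("30000");
        Assert.IsTrue(selenium.IsTextPresent("Welcome, fred"));
    }
}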

3) Packaged For Deployment

For our .NET projects, we use a homegrown tool for packaging and deploying. It can stop/start IIS, register/unregister COM+ objects, and roll back across a farm. For PHP projects, we simply hand-roll a zip file and use some lightweight scripts to unpack the files on the server, with rollback ability.
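
Neither tool is public, but the rollback idea is simple enough to sketch. This hypothetical version keeps every release in a timestamped folder and treats “current” as a pointer you can rewrite, so rolling back never touches the files themselves:

using System;
using System.IO;

// Hypothetical sketch: each deploy lands in its own timestamped folder,
// and current.txt records the active release. Rollback just rewrites
// the pointer to a previous folder.
public static class Deployer
{
    public static void Deploy(string unpackedRelease, string root)
    {
        string target = Path.Combine(root,
            Path.Combine("releases", DateTime.Now.ToString("yyyyMMdd-HHmmss")));
        CopyDirectory(unpackedRelease, target);
        File.WriteAllText(Path.Combine(root, "current.txt"), target);
    }

    public static void Rollback(string root, string previousRelease)
    {
        File.WriteAllText(Path.Combine(root, "current.txt"), previousRelease);
    }

    private static void CopyDirectory(string from, string to)
    {
        Directory.CreateDirectory(to);
        foreach (string file in Directory.GetFiles(from))
            File.Copy(file, Path.Combine(to, Path.GetFileName(file)));
        foreach (string dir in Directory.GetDirectories(from))
            CopyDirectory(dir, Path.Combine(to, Path.GetFileName(dir)));
    }
}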

4) No Increased Technical Debt

I ask myself whether this code is going to be an asset that makes us stronger and able to respond more quickly to future business opportunities, or a fragile liability that I will need to carefully tip-toe around 30 seconds after it’s deployed.

Just like a parent thinks their kid is the cutest kid ever, it can be hard to look at your own work objectively when you’ve got your head wrapped around it, and to come to terms with the fact that you’re about to deploy some legacy code. I usually grab a coworker and walk through things while paying close attention to the only valid measurement of code quality: WTFs per minute.

Wednesday, May 27, 2009

Lunch-n-Learn Videos

These are the lunch-n-learn videos we’ve watched in the last few months (that I can remember). They are listed roughly in the order watched with the most recent ones at the top.

The Joys and Pains of a Long Lived Codebase – Jeremy Miller

Kona 3: Learning Behavior Driven Development (BDD) – Rob Conery

Best Practices in Javascript Library Design – John Resig

Facebook: Science and the Social Graph – Aditya Agarwal

Digg, An Infrastructure in Transition – Joe Stump

Ajax Performance – Douglas Crockford

High Performance Web Sites: 14 Rules for Faster Pages – Steve Souders

Agile Project Management: Lessons Learned at Google – Jeff Sutherland

10 Ways to Screw Up with Scrum and XP – Henrik Kniberg

The Principles of Agile Design by Bob Martin – Robert Martin

Introduction to Domain Specific Languages – Martin Fowler

10 Ways to Improve Your Code – Neal Ford

The Renaissance of Craftsmanship – Robert Martin

Sunday, May 24, 2009

Martinizing is Not Refactoring

A friend of mine, Keith, used the term martinizing (as in Uncle Bob) for the process of cleaning code. The term has taken on a very specific meaning and it’s worth a few words.

Martinizing is similar to refactoring in that it does not change the observable behavior of the code, but the goal is different. When I refactor, I am changing the design, usually in an effort to add a new feature in an open-closed manner.

When I martinize, I am telling a story. The most important story is the desired behavior, but it is also a story of the hard-earned knowledge acquired along the way. If I spend hours distilling some business concept, I want to leave a trail for the next guy to understand that a simple property assignment or conditional statement isn’t so simple. And of course the perfect way to punctuate that message is with a well-written unit test.
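
For example, the punctuation might look something like this hypothetical test (all the domain names are invented); the test name and the assertion carry the knowledge that the one-line implementation would otherwise hide:

using NUnit.Framework;

[TestFixture]
public class ReferenceCodeTests
{
    // Hypothetical: the production code is a one-line assignment, but the
    // reason behind it took hours of conversation to dig out.
    [Test]
    public void Orders_from_legacy_resellers_keep_their_original_reference_code()
    {
        var order = Order.FromReseller("ACME-1998");

        order.AssignReferenceCode("NEW-CODE");

        // legacy resellers were promised stable codes in their contracts
        Assert.AreEqual("ACME-1998", order.ReferenceCode);
    }
}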

Sunday, May 17, 2009

Don’t Eat a Donut on the Way Home From the Gym


I must warn you, this post is about unit testing software, not donuts. With that said, I’ve been pondering the dilemma of how much time to spend writing unit tests and today I had a moment of clarity that I couldn’t help but share.

I was pairing with a coworker on Friday and we were both feeling a bit unproductive because we had spent so much time writing unit tests for a relatively small bit of code. In fact, we spent more time writing tests than we spent writing the code to make the tests pass. To make matters worse, we spent more time refactoring the unit tests than we did refactoring the code that made the tests pass. We were agonizing over the readability of the tests as if the tests were more important than the code, and that left me with an uneasy feeling over the weekend.

Today I found a bit of inner peace as I embraced the notion that the tests are more important than the code that makes them pass. Think about that for a moment. The challenge in writing software is not the implementation. Syntax, structures and algorithms are the easy parts. The real hard-earned knowledge won through experience comes in the form of specifications: understanding the intricate complexities and desired behavior of your software in the wild.

Writing software can be like swimming in the ocean when a thick fog rolls in. The mechanics of swimming are learned easily, but the real challenge is knowing which direction to swim so that you get back to the shore before you run out of energy. After a day of programming, the real asset is not the implementation of your feature, it’s the increased understanding of your problem domain. This understanding is captured in the form of executable requirements.

When there is a bug in your system, chances are it’s a bug in your understanding of how your system should behave under a particular set of conditions. Burying an innocuous if statement in the middle of some method deep in the stack is a horrible way to reap the reward of a day spent spelunking through code. It’s like eating a donut on the way home from the gym.

The legacy you leave is the unit test; it tells the story of the hard-fought knowledge, and the readability of your test is more important than the readability of your implementation. Tell the next developer what’s important through an executable specification that reads like one.

Sunday, May 10, 2009

5 Ways to Fail at Pair Programming


The only hard part is convincing your boss that it’s not a colossal waste of time, right? Well, that’s the first hurdle, and yes, it’s a doozy, but it’s just the price of admission. The real fun starts when you sit next to your partner and try to prove your boss wrong. After a year or so in the trenches, I’ve learned a thing or two about how to mess it up.

5. Researching new technology

“Click on that link. No, wait, scroll up. Over there. Wait, I wasn’t done reading that…” That’s not pair programming, that’s insanity. Recently we were working on a project that involved a custom Firefox browser skin. Neither of us had experience with browser skins, so there was a lot of Googling and tutorial reading. As soon as we caught ourselves reading blogs and whatnot, we would split up for 15 minutes and then compare notes.

4. Getting your environment going

“Hey buddy, are you ready to start pairing on that project? Let’s see, now where is the source code for the project we’re working on… Hmm, this doesn’t compile – I think I need to install the latest version of that library.” Kill me now.

3. Paralysis


One of my favorite authors, Kurt Vonnegut, would write clever things like “The man looked… guilty isn’t the right word, but it’s the first one that comes to mind.”

With pair programming, you can’t just sit there because you don’t know where to start or can’t think of the right variable name. Just type something. You have to get the problem solving out of your head and into a medium that two people can have a conversation about. Many people are afraid to type something in front of a peer if it isn’t perfect. Borrow a technique from Kurt Vonnegut: as you type, just say “I don’t want the code to look like this…” and write some awful switch-case statement.

Once there is something on the screen, you and your partner are instantly aligned and solving the same problem. Now you can discuss factories and polymorphism and all kinds of heady solutions.

2. Not doing TDD

Have you ever watched an artist paint a picture and thought to yourself, “What the heck is that?” Then, with the next brush stroke, you realize you’re looking at the profile of someone’s face. Well, watching someone write code can be horribly worse than that.

Pair programming is a conversation, not a seminar or a window into someone’s brain. The best way to keep things at a conversational level is to work top down, which is exactly what TDD forces you to do. You must consider your code from the client perspective and challenge yourself to keep your code on purpose.

Even more importantly, TDD gives you and your buddy an “out” at regular intervals. You write a test; you make a test pass. Either way, you have many small victories over the course of a few hours, and you can switch pairs at well-defined, feel-good stopping points.

1. Not using a timer

I used to work with a guy who was always competing with something. A mutual friend at the office was doing the 3-day breast cancer walk, and if anyone in the office donated money, he would up his donation $1 higher. On the softball team, his jersey number was #1. When we started playing ping-pong at lunch, he bought a $600 ping-pong robot to practice with at home. So when he started using a timer while programming, I just figured it was good old Paul racing against time.

Then I paired with him on a project. Right after wanting to strangle him for not having his environment set up, we set his timer for 30 minutes and started on our project. When the timer went off, we stopped mid-keystroke and switched seats. This did a few things. First, it kept Paul engaged the whole time because he knew he would be in the hot seat in a few minutes, and I can be a bit of a bully, so it was easier for me to back off knowing I’d get my chance soon enough. One of the more surprising benefits, however, was simply that we would sit for a minute and agree on exactly what we were doing before starting the timer, which really got us in gear and drastically cut down on tangents.

Using a timer creates a magical dynamic that I cannot do justice with words. It’s like TDD: it’s not about the tests, it’s about the code you write to make it testable. You have to try it. Period.

“If you can do a half-assed job of anything, you're a one-eyed man in a kingdom of the blind.” – Kurt Vonnegut

Saturday, May 9, 2009

Unit of Work with Unity and ASP.NET MVC

I was recently asked how I get the context of “this” in the UoW relating to the current page request.


Before I get into the guts, I would like to provide a little context. My application has 20+ databases scattered across 4 machines.

IRepository<Customer> customerRepository; // customer database on server 1
IRepository<Package>  packageRepository; // customer database on server 1
IRepository<Contact>  contactRepository; // contact database on server 2
So, I might ask for a Customer object and a Package object, and I want to get the same ISession for both; if I ask for the same Customer twice, I want to get the one from the first-level cache (I’m using NHibernate). If I ask for a Contact object, I will get a different ISession. All opened sessions are managed by my UoW, so when the page request is complete, I call UoW.Commit and all sessions are committed.
 
The “magic”, if you will, happens in the global.asax. I was nosing around in Rhino.Commons for inspiration and adapted a technique I saw there. This is how it looks:
public class GlobalApplication : HttpApplication
{
    private static IUnityContainer container;

    public GlobalApplication()
    {
        BeginRequest += new EventHandler(GlobalApplication_BeginRequest);
        EndRequest += new EventHandler(GlobalApplication_EndRequest);
    }

    protected void Application_Start(object sender, EventArgs e)
    {
        RegisterRoutes(RouteTable.Routes);

        container = new UnityContainer();
        container.AddNewExtension<PolicyInjectorContainerExtension>();
        container.AddNewExtension<HttpRequestLifetimeCoreContainerExtension>();
        container.AddNewExtension<WebMvcContainerExtension>();

        ControllerBuilder.Current.SetControllerFactory(new UnityControllerFactory(container));
    }

    void GlobalApplication_BeginRequest(object sender, EventArgs e)
    {
        var unitOfWork = container.Resolve<IUnitOfWork>();
        unitOfWork.Start();
    }

    void GlobalApplication_EndRequest(object sender, EventArgs e)
    {
        var unitOfWork = container.Resolve<IUnitOfWork>();
        unitOfWork.Commit();
    }
}

I register the UoW with an HttpRequestLifetimeManager so I get a new instance for each request.

Container.RegisterType<IUnitOfWork<ISession>,
	NHibernateUnitOfWork>(new HttpRequestLifetimeManager());
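
The HttpRequestLifetimeManager is a custom lifetime manager; a plausible sketch, using the common trick of stashing instances in HttpContext.Current.Items, looks like this:

using System;
using System.Web;
using Microsoft.Practices.Unity;

// A plausible sketch of the custom lifetime manager: instances live in
// HttpContext.Current.Items, so each request resolves its own UoW.
public class HttpRequestLifetimeManager : LifetimeManager
{
    private readonly string key = Guid.NewGuid().ToString();

    public override object GetValue()
    {
        return HttpContext.Current.Items[key];
    }

    public override void SetValue(object newValue)
    {
        HttpContext.Current.Items[key] = newValue;
    }

    public override void RemoveValue()
    {
        HttpContext.Current.Items.Remove(key);
    }
}
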
My NHibernateRepository gets injected with the UoW for the current HttpRequest, and when the request is complete, the global.asax commits the whole thing.
public class NHibernateRepository<T> : IRepository<T>
{
    protected ISession session;

    public NHibernateRepository(IUnitOfWork<ISession> unitOfWork)
    {
        session = unitOfWork.GetContextFor<T>();
    }

    ...

    public virtual void Save(T obj)
    {
        session.Save(obj);
    }
}
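
The UoW interfaces themselves never appear above; a minimal reconstruction consistent with the global.asax and repository code might be (the real interfaces may carry more members):

public interface IUnitOfWork
{
    void Start();
    void Commit();
}

public interface IUnitOfWork<TContext> : IUnitOfWork
{
    // hands back the context (an NHibernate ISession here) for whichever
    // database the entity type T lives in
    TContext GetContextFor<T>();
}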

Now, this is all in the context of an ASP.NET MVC controller, but I have a similar issue for other (non-web) services. In that context I am using AOP and decorating a particular method with [UnitOfWork], which looks like:

public class UnitOfWorkCallHandler : ICallHandler
{
    private IUnitOfWork<ISession> unitOfWork;

    public UnitOfWorkCallHandler(IUnitOfWork<ISession> unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }

    public int Order { get; set; }

    public IMethodReturn Invoke(IMethodInvocation input, GetNextHandlerDelegate getNext)
    {
        unitOfWork.Start();

        try
        {
            return getNext()(input, getNext);
        }
        finally
        {
            unitOfWork.Commit();
        }
    }
}

In that context I use a PerThreadLifetimeManager for the NHibernateUnitOfWork, and the code ends up looking like:

[UnitOfWork]
public void Process(Job job)
{
    ...
}
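
The [UnitOfWork] attribute isn’t shown either; with Unity’s interception extension it would plausibly be a HandlerAttribute that resolves the call handler above (the IsolationLevel overload used below is omitted for brevity):

using Microsoft.Practices.Unity;
using Microsoft.Practices.Unity.InterceptionExtension;

// Plausible sketch of the attribute: it resolves the call handler from
// the container so the UoW dependency gets injected automatically.
public class UnitOfWorkAttribute : HandlerAttribute
{
    public override ICallHandler CreateHandler(IUnityContainer container)
    {
        return container.Resolve<UnitOfWorkCallHandler>();
    }
}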

You know, there is really no reason why you couldn’t do the same thing in the MVC context. You could basically ditch the global.asax event hooha and just annotate the Controller task:

[UnitOfWork]
public ActionResult ControllerTaskThatRequiresUoW()
{
    ...
}

It’s more explicit than using the global.asax technique and it would allow you to specify different UoW behavior on each controller task:

[UnitOfWork(IsolationLevel.ReadCommitted)]
public ActionResult SomeTaskThatShouldNotReadUncommittedData()
{
}

[UnitOfWork(IsolationLevel.ReadUncommitted)]
public ActionResult AnotherTaskWithDifferentRequirements()
{
}

I’d like to try this out next time I get back into ASP.NET MVC; the global.asax event technique felt a little too magical, and I don’t think Uncle Bob would approve.

Saturday, May 2, 2009

An Order Processing Pipeline in ASP.NET MVC

Lately, I can’t seem to shake the pipeline pattern. It keeps popping up in all my applications. It’s like when I think about buying a new car and then start seeing them everywhere on the road and think: have there always been this many out there?

So, here it is in action. Below is a controller from an ASP.NET MVC application. The method accepts an order in the form of xml for provisioning products in our system.

public ActionResult EnqueueOrders(string orderXml)
{
    var context = new OrderContext(orderXml);

    var pipeline = new FilterPipeline<OrderContext>();
    pipeline.Add(new RewriteLegacyUsernameFeature(productFinder));
    pipeline.Add(new ValidateOrderXml());
    pipeline.Add(new ValidateReferenceCode(customerFinder));
    pipeline.Add(new IgnoreOnException(new ValidateRemoteAccountUsernames(packageRepository)));
    pipeline.Add(new IgnoreOnException(new ValidateRemoteAccountUniqueEmail(packageRepository)));
    pipeline.Add(new EnqueueOrders(customerFacade)).When(new OrderHasNoErrors());
    pipeline.Execute(context);

    var xml = BuildResponseFromOrderContext(context);

    return new XmlResult(xml);
}

Most of it is just validation logic, but there are a few meatier pieces. Take a look at the first filter, RewriteLegacyUsernameFeature. If you’ve ever published an API that accepts xml, then you’ve probably wanted to change your schema 5 minutes later. The pipeline is a great way to deal with transforming legacy xml on the way in.
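
The real filter isn’t shown here; a hypothetical reconstruction of its shape might be (OrderContext.Xml and the attribute names are invented, and the productFinder dependency is elided):

using System.Xml.Linq;

// Hypothetical sketch: the rewrite filter runs first, so the rest of the
// pipeline only ever sees the current schema.
public class RewriteLegacyUsernameFeature : IFilter<OrderContext>
{
    public void Execute(OrderContext context)
    {
        // the old schema called the login feature "username"
        foreach (XElement feature in context.Xml.Descendants("feature"))
        {
            if ((string)feature.Attribute("name") == "username")
                feature.SetAttributeValue("name", "login");
        }
    }
}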

Next is the implementation of the FilterPipeline which is essentially a chain of responsibility with a few convenience features.

public class FilterPipeline<T> : IFilter<T>
{
    private IFilterLink<T> head;

    public void Execute(T input)
    {
        head.Execute(input);
    }

    public IFilterConfiguration<T> Add(IFilterLink<T> link)
    {
        var specificationLink = new SpecificationFilterLink<T>(link);

        AppendToChain(specificationLink);

        return specificationLink;
    }

    public IFilterConfiguration<T> Add(IFilter<T> filter)
    {
        return Add(new FilterLinkAdapter<T>(filter));
    }

    public void AppendToChain(IFilterLink<T> link)
    {
        if (head == null)
        {
            head = link;
            return;
        }

        var successor = head;

        while (successor.Successor != null)
        {
            successor = successor.Successor;
        }

        successor.Successor = link;
    }
}

You might be scratching your head and wondering what this business is with both IFilter<T> and IFilterLink<T>. IFilter<T> is just a simpler version of IFilterLink<T> that doesn’t require the implementer to deal with calling the next link in the chain. The subject will always pass through, no short-circuiting, hence the pipeline. A reconstruction of those pieces is sketched below.
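
These declarations aren’t in the post, but they follow directly from how FilterPipeline uses them:

public interface IFilter<T>
{
    void Execute(T input);
}

public interface IFilterLink<T>
{
    IFilterLink<T> Successor { get; set; }
    void Execute(T input);
}

// Wraps a plain IFilter so it can sit in the chain; it always calls the
// next link, which is why a filter can't short-circuit the pipeline.
public class FilterLinkAdapter<T> : IFilterLink<T>
{
    private readonly IFilter<T> filter;

    public FilterLinkAdapter(IFilter<T> filter)
    {
        this.filter = filter;
    }

    public IFilterLink<T> Successor { get; set; }

    public void Execute(T input)
    {
        filter.Execute(input);

        if (Successor != null)
            Successor.Execute(input);
    }
}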

My favorite part is the SpecificationFilterLink<T>, a decorator that uses a Specification to decide whether the filter should be invoked, so you can write little readable snippets like:

pipeline.Add(new EnqueueOrders(customerFacade)).When(new OrderHasNoErrors());
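
For completeness, here is a sketch of how that decorator and the When() hook might hang together; ISpecification is my guess at the shape of the Specification:

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T subject);
}

public interface IFilterConfiguration<T>
{
    void When(ISpecification<T> specification);
}

// Sketch of the decorator returned by FilterPipeline.Add(): with no
// specification it always runs the wrapped filter; with one, the filter
// runs only when the specification is satisfied. Either way the subject
// continues down the chain.
public class SpecificationFilterLink<T> : IFilterLink<T>, IFilterConfiguration<T>
{
    private readonly IFilterLink<T> inner;
    private ISpecification<T> specification;

    public SpecificationFilterLink(IFilterLink<T> inner)
    {
        this.inner = inner;
    }

    public IFilterLink<T> Successor { get; set; }

    public void When(ISpecification<T> specification)
    {
        this.specification = specification;
    }

    public void Execute(T input)
    {
        if (specification == null || specification.IsSatisfiedBy(input))
            inner.Execute(input);

        if (Successor != null)
            Successor.Execute(input);
    }
}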

Maybe this post will get it out of my system and I can move on to other solutions. It’s just so handy…