Testing DateTime.Now - Introducing Wrap and Override

Monday, 2 February 2009

Abstract:
The student said to his zen master: "What time is it?"
"It is TimeWarper.Now," the master replied.
"What time is that?" asked the student.
"It is whatever time I need it to be," said the master.
And the student was enlightened.


Quite often you'll want your application to save a timestamp of when something occurred. Due to the nature of time, this presents a special set of challenges when it comes to testing.

Let's look at an example for logging exceptions.
Here's an initial attempt at a test:


[Test]
public void TestLogException_LogsCorrectValues()
{
    try
    {
        throw new AbandonedMutexException("The poor mutex was abandoned!");
    }
    catch (Exception ex)
    {
        // Log the exception
        ExceptionLog log = _exceptionLogger.LogException(ex);

        // Assert that the logged values are correct
        Assert.AreEqual("System.Threading.AbandonedMutexException", log.ExceptionType);
        Assert.AreEqual("The poor mutex was abandoned!", log.Message);
        Assert.That(!string.IsNullOrEmpty(log.StackTrace));
        Assert.AreEqual(1, log.NumOccurences);
        Assert.AreEqual(DateTime.Now, log.FirstOccurrence); // <-- Mind the dates!
        Assert.AreEqual(DateTime.Now, log.LastOccurrence);  // <-- Mind the dates!
    }
}
That looks straightforward enough, doesn't it?
Let's make the test work:


public class ExceptionLogger
{
    public ExceptionLog LogException(Exception ex)
    {
        ExceptionLog log = new ExceptionLog();
        log.ExceptionType = ex.GetType().ToString();
        log.Message = ex.Message;
        log.NumOccurences = 1;
        log.StackTrace = ex.StackTrace;
        log.FirstOccurrence = DateTime.Now;
        log.LastOccurrence = DateTime.Now;
        return log;
    }
}

public class ExceptionLog
{
    public string ExceptionType { get; set; }
    public string Message { get; set; }
    public string StackTrace { get; set; }
    public DateTime FirstOccurrence { get; set; }
    public DateTime LastOccurrence { get; set; }
    public int NumOccurences { get; set; }
}
Running the test, we see that it fails!



The line failing is:
Assert.AreEqual(DateTime.Now, log.FirstOccurrence);
With this message:
NUnit.Framework.AssertionException:   
Expected: 2009-02-02 11:39:42.820
But was:  2009-02-02 11:39:42.787


As you can see, 33 milliseconds have passed between saving DateTime.Now to log.FirstOccurrence and comparing the result to DateTime.Now in the Assert.AreEqual method.
This makes perfect sense. Time tends to pass even for very fast computers.

But what do we do about it? There are a few possible solutions that spring to mind:

Remove the Assert - We could just decide it's too much trouble to test and move on. That's not very nice. If we go down that road we could just stop testing right now and finish the code.

Make the Assert less precise - We could just strip the milliseconds. Who cares about them anyway? We could do that, but there's a 3.3% chance that the test runs during the milliseconds where one second rolls over into the next. This would cause the test to inexplicably fail every now and again. Unacceptable.
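For the record, there is a variant of "less precise" that dodges the rollover problem: compare with a tolerance instead of stripping the milliseconds. With NUnit's constraint syntax that could look like the sketch below, though the assertion is still deliberately fuzzy:

```csharp
// Allow the two timestamps to differ by up to one second instead of
// truncating them. No second-rollover flakiness, but the assertion
// is weaker than an exact comparison.
Assert.That(log.FirstOccurrence,
    Is.EqualTo(DateTime.Now).Within(TimeSpan.FromSeconds(1)));
```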

Overload ExceptionLogger.LogException 
We could get control of what DateTime to set by overloading LogException like this:


[Test]
public void TestLogException_LogsCorrectValues()
{
    try
    {
        throw new AbandonedMutexException("The poor mutex was abandoned!");
    }
    catch (Exception ex)
    {
        DateTime now = DateTime.Now; // <-- New variable!

        // Log the exception
        ExceptionLog log = _exceptionLogger.LogException(ex, now);

        // Assert that the logged values are correct
        Assert.AreEqual("System.Threading.AbandonedMutexException", log.ExceptionType);
        Assert.AreEqual("The poor mutex was abandoned!", log.Message);
        Assert.That(!string.IsNullOrEmpty(log.StackTrace));
        Assert.AreEqual(1, log.NumOccurences);
        Assert.AreEqual(now, log.FirstOccurrence); // <-- Compare now instead
        Assert.AreEqual(now, log.LastOccurrence);  // <-- Compare now instead
    }
}

...

public class ExceptionLogger
{
    public ExceptionLog LogException(Exception ex)
    {
        return LogException(ex, DateTime.Now);
    }

    public ExceptionLog LogException(Exception ex, DateTime timeOfException)
    {
        ExceptionLog log = new ExceptionLog();
        log.ExceptionType = ex.GetType().ToString();
        log.Message = ex.Message;
        log.NumOccurences = 1;
        log.StackTrace = ex.StackTrace;
        log.FirstOccurrence = timeOfException; // Use the passed-in value
        log.LastOccurrence = timeOfException;  // Use the passed-in value
        return log;
    }
}

This solution is actually not too shabby. The test passes:



But it comes with two drawbacks:
  • It makes the method signature more complex. People using ExceptionLogger might not be sure whether they're required to pass in a DateTime, or what it should be.
  • We're not actually testing that DateTime.Now is used as the timestamp in the default case.

Let's look at an alternative!

The Solution: Bending time and space

Let's face it, DateTime.Now is one of those things that gets called a lot and that you don't want to think about. The previous solution works well for a test this small but in more complex scenarios we probably have enough stuff to throw around that we don't want to complicate things further by passing confusing DateTime arguments that are not really needed.

To get complete control of time itself for unit testing here is a class I always create within two hours of joining a software project:


public class TimeWarper
{
    private static DateTime? _now;

    /// <summary>
    /// TimeWarper.Now is a utility for getting the current time.
    /// In production code it simply wraps DateTime.Now, but allows the current
    /// DateTime to be overridden for unit testing purposes.
    /// </summary>
    public static DateTime Now
    {
        get { return _now ?? DateTime.Now; }
        set { _now = value; }
    }
}
This is a common trick known as Wrap and Override. 
You wrap the thing you want to fake in your unit test and override it with something under your control. In this case we just want to make time stand still so we can accurately measure it.

Here's how it looks if we use it in the example above:


[TestFixture]
public class ExceptionLoggerTests
{
    private ExceptionLogger _exceptionLogger;

    [SetUp]
    public void SetUp()
    {
        TimeWarper.Now = DateTime.Now; //Freeze!
        _exceptionLogger = new ExceptionLogger();
    }

    [Test]
    public void TestLogException_LogsCorrectValues()
    {
        try
        {
            throw new AbandonedMutexException("The poor mutex was abandoned!");
        }
        catch (Exception ex)
        {
            // Log the exception
            ExceptionLog log = _exceptionLogger.LogException(ex);

            // Assert that the logged values are correct
            Assert.AreEqual("System.Threading.AbandonedMutexException", log.ExceptionType);
            Assert.AreEqual("The poor mutex was abandoned!", log.Message);
            Assert.That(!string.IsNullOrEmpty(log.StackTrace));
            Assert.AreEqual(1, log.NumOccurences);
            Assert.AreEqual(TimeWarper.Now, log.FirstOccurrence); // <-- TimeWarper!
            Assert.AreEqual(TimeWarper.Now, log.LastOccurrence);  // <-- TimeWarper!
        }
    }
}

...

public class ExceptionLogger
{
    public ExceptionLog LogException(Exception ex)
    {
        ExceptionLog log = new ExceptionLog();
        log.ExceptionType = ex.GetType().ToString();
        log.Message = ex.Message;
        log.NumOccurences = 1;
        log.StackTrace = ex.StackTrace;
        log.FirstOccurrence = TimeWarper.Now; //Tada.wav!
        log.LastOccurrence = TimeWarper.Now;  //Tada.wav!
        return log;
    }
}
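One caveat: since _now is static, a time frozen in one test fixture will leak into everything that runs afterwards until something overwrites it. A hypothetical Reset method (not part of the TimeWarper shown above) called in TearDown keeps fixtures independent:

```csharp
// Hypothetical addition to TimeWarper: clear the override so that
// Now falls back to the real DateTime.Now.
public static void Reset()
{
    _now = null;
}

// ...and in the test fixture:
[TearDown]
public void TearDown()
{
    TimeWarper.Reset(); // Unfreeze time so later tests see the real clock
}
```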

There you have it.
Freezing time for unit tests is a great way to make them simpler. You can treat DateTimes as plain values rather than having to declare multiple variables: simply refer to any unique point in time you want to set or compare against as TimeWarper.Now.AddMinutes(x).
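As a minimal illustration of that last point (the test body is made up for the example): once the clock is frozen, offsets from TimeWarper.Now are stable values you can both set up and assert against:

```csharp
[Test]
public void TestFrozenTime_OffsetsAreStable()
{
    TimeWarper.Now = DateTime.Now; //Freeze!

    DateTime tenMinutesLater = TimeWarper.Now.AddMinutes(10);

    // With real time these two expressions would differ by however long
    // the test took to run; with frozen time they are exactly equal.
    Assert.AreEqual(tenMinutesLater, TimeWarper.Now.AddMinutes(10));
}
```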


Super mega quickstart for Unit Testing with Visual Studio .NET

Sunday, 1 February 2009

Abstract:
  • This is a quickstart for setting up a testing environment in Visual Studio
  • Step 1 - Create a new class library for your tests.
  • Step 2 - Add a reference to nunit.framework.dll
  • Step 3 - Get a test runner like the one in Resharper, TestMatrix or NUnit.
  • Step 4 - Start writing tests and run them with your test runner!
  • Step 7 - Here are some mocking frameworks: Rhino Mocks, NMock, TypeMock, Moq
  • Step 10 - Profit!

For the benefit of any beginners stumbling across this blog, here is what you need to get started writing Unit Tests on the .NET platform. 

Step 1 - Create a new class library in Visual Studio 
Call it something along the lines of YourProjectNamespace.Testing

Step 2 - Download the NUnit framework.
The NUnit framework is pretty much the industry standard for running Unit Tests on the .NET platform.
Microsoft has created something called MSTest that nobody uses. To be honest I haven't looked into it very much and I see no reason to start now. :)




Then copy bin/nunit.framework.dll to your test project bin folder (that is: add it to your ThirdPartyLibraries folder and create a reference to it in your testing project)

Step 3 - Get a test runner
To run unit tests you need a program to run them, preferably one integrated with Visual Studio.

I have found the test runner that comes with Resharper to be the nicest one out there right now.
Resharper is a plugin for Visual Studio that unfortunately is not free, but is probably one of the best investments you can make as a developer. Read all the hype on their homepage and give it a spin. 

If you don't want to use Resharper to run your tests, here are some alternatives:
TestMatrix - Another commercial tool that works.
NUnit - NUnit comes with a free test runner that is functional but crude. Only use it if you have to.

If none of these work for you, use the power of Google.

Step 4 - Create a unit test!

Woohoo, let's get cracking!
As an example, let's take the testing of a class validating the format of an email address.

Start by creating a file in your test project called EmailValidatorTests.cs

Copy and paste this code into the file:



using System;
using NUnit.Framework;

namespace OmgWtfTdd.Testing //You'll want to change this to match your project
{
    [TestFixture] //This tells the test runner that this class contains tests
    public class EmailValidatorTests
    {
        private EmailValidator _emailValidator;

        [SetUp]
        public void SetUp()
        {
            // The code in SetUp will be run before the execution of each unit test.
            // Here we create things that will be used in multiple tests
            _emailValidator = new EmailValidator();
        }

        [TearDown]
        public void TearDown()
        {
            // The TearDown method is run after each test completes.
            // Use it to tear down anything not caught by normal garbage collection
        }

        [Test] // <-- Look! A Test!
        public void TestIsValidEmail_NoInput()
        {
            Assert.IsFalse(_emailValidator.IsEmailValid(String.Empty));
            Assert.IsFalse(_emailValidator.IsEmailValid(null));
            // Assert is a way of evaluating a condition.
            // If the Assert is not fulfilled the test fails.
            // If an unexpected exception is thrown the test fails.
        } // <-- If the test manages to get all the way to the end, it succeeds!

        [Test]
        public void TestIsValidEmail_TrueForValidEmails()
        {
            Assert.IsTrue(_emailValidator.IsEmailValid("erik@thisisnotmyrealemail.com"));
            Assert.IsTrue(_emailValidator.IsEmailValid("erik@providerwithtwodots.co.uk"));
            Assert.IsTrue(_emailValidator.IsEmailValid("erik.rydeman@dotsinthename.com"));
        }

        [Test]
        public void TestIsValidEmail_BadFormats()
        {
            Assert.IsFalse(_emailValidator.IsEmailValid("@noname.com"));
            Assert.IsFalse(_emailValidator.IsEmailValid("noprovider@.com"));
            Assert.IsFalse(_emailValidator.IsEmailValid("noprovideratall@"));
            Assert.IsFalse(_emailValidator.IsEmailValid("nodomain@lol."));
            Assert.IsFalse(_emailValidator.IsEmailValid("erik@domain.reallylongtopdomain"));
            Assert.IsFalse(_emailValidator.IsEmailValid("wheredidtheatgo.com"));
        }
    }

    /// <summary>
    /// This class would normally be placed somewhere in your project.
    /// The classes in the Testing project are for testing only and not used in production code.
    /// </summary>
    public class EmailValidator
    {
        public bool IsEmailValid(string email)
        {
            // This line is what you'd normally start out
            // with when writing your tests before your code:
            //throw new NotImplementedException();

            // I've cheated and started a bit. See if you can improve it!
            return email.Contains("@");
        }
    }
}


That's a Unit Test for you! Actually that's three unit tests. Let's see if they work:

To run the tests with Resharper:
Right click somewhere in EmailValidatorTests and select "Run Unit Tests" from the menu.
Or: In the top menu select Resharper/Unit Testing/Run All Tests From Solution

To run the tests with the NUnit Test Runner:
Compile the Testing project, open the NUnit Test runner and find the dll for the testing project.
Run the tests.

The result should look like this:



So that you can test your environment, I have made one of the tests succeed already. Note that, following the principle of writing tests first, you should always see a test fail before you make it pass.

As you can see the IsEmailValid method is quite incomplete. As an exercise, try making the two other tests succeed!

Step 7 - Get a mocking framework!

Since this post is about setting up your test environment let's skip ahead a few steps and briefly mention another thing you'd typically need to do in a project.

In any non-trivial project you'll usually come across the need to fake some classes in order to be able to test other classes. One way to do this is to create mock objects. We'll talk about this later, but here are some mocking frameworks you can use:
Rhino Mocks - My current favourite. Easy to use, powerful, strongly typed mocking.
NMock - Uses strings to specify which methods to mock. Not too elegant but very easy to understand.
TypeMock - Very powerful and useful but a bit magical. Requires integration into the VS environment which can sometimes cause problems.
Moq - Haven't tried it yet but it sure looks very nice.

They all pretty much do the same thing so use the one you like!
If you wouldn't know what to do with any of them just yet, don't worry. Start writing some regular tests and we'll tackle mocking later!

Step 10 - Profit!

There, if you've completed all the steps above you now have the power to write Unit Tests!
Use it wisely.

Test Driven Development - What is it good for?

Friday, 30 January 2009

Abstract:
  • This post is an introduction to TDD and its impact on software projects
  • Test Driven Development is about writing your unit tests before you code, letting the tests act as a driving force of how your code is shaped.
  • TDD is closely coupled with the practice of Refactoring. Having a test suite in place enables you to make refactorings at the project level while capturing and fixing the functionality that gets broken by this.
  • The need to write isolated Unit Tests forces you to lower the dependency between your components. This makes code easier to reuse and understand.
  • The big benefits of TDD are of a long-term nature but require an initial investment of time. Sacrificing practices such as TDD, which aim to improve project quality, incurs technical debt that you will eventually have to pay.

So what is TDD? Here comes an overview and some thoughts about it.
This is ground that is already pretty well covered elsewhere, but I'm sure someone will find this useful or interesting.

What is Test Driven Development?
TDD is the act of writing unit tests before you write code. When adding a new feature you follow these steps:
  1. Red - Write a Unit Test that exercises functionality you want to exist, before you create the classes or methods that will fulfill that functionality. This will probably result in your project not compiling. With the least effort possible, make it compile, then run the test and see it fail.
  2. Green - Make your test pass by modifying the classes and methods you're testing. Do not add any functionality that is not required to make the test work. If your current test does not test a desired piece of functionality make a note of it and write another test later.
  3. Refactor - Since you've only done what is necessary to make your test work, chances are that your code is not very pretty. Refactor the code to make it prettier, remove any duplication you can find etc, and run your test again to verify that the functionality you wrote the test for is still in place.
This page goes into more detail, has some pretty diagrams and many wise things to say about it.
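As a minimal sketch of one trip around the cycle (the PriceCalculator example is made up for illustration):

```csharp
// Red: written before PriceCalculator exists, so the project
// won't even compile. Create an empty class to make it compile,
// run the test, and watch it fail.
[Test]
public void TestApplyDiscount_TenPercent()
{
    PriceCalculator calc = new PriceCalculator();
    Assert.AreEqual(90m, calc.ApplyDiscount(100m, 0.10m));
}

// Green: the least code that makes the test pass.
public class PriceCalculator
{
    public decimal ApplyDiscount(decimal price, decimal rate)
    {
        return price - price * rate;
    }
}

// Refactor: with the test in place the implementation can be
// reshaped freely; rerun the test to confirm nothing broke.
```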

What are the benefits of Test Driven Development?
Kent Beck in the first sentence of his classic book defines the goal of TDD as: Clean code that works
This really sums it up quite nicely.

Clean code
Since the unit tests drive the production code the code will only contain the functionality dictated by the tests. By adding only the necessary code in small increments we avoid creating a system that is overly generalized, containing speculative functionality that is designed before we're sure there's an actual need for it.
Now, if you're working in a big project you might find this way of doing things worrying. You may think that going into a project using no upfront design at all is surely a way of getting a tangled mess designed to work for the benefit of a short term solution to a series of isolated problems. This argument has merit, and here are a few important aspects to consider about how TDD and evolutionary design impacts project architecture:

Refactor
The third step of RGR must not be forgotten. The design of software architecture is a tricky business. When you start a project you will do well to have a general structure that dictates where new classes will be created, and common problems often have well-known solutions that are readily applicable. More often than not, though, design is something you discover in your code as a project grows. You may find that duplication of code leads you to create common classes and superclasses to enable reuse and eliminate duplication. Having a suite of unit tests covering the requirements of your existing functionality enables you to change the structure into something that is more elegant and works better. Large refactorings at the project level can be done with a greater degree of confidence if you have tests that will fail when a piece of functionality gets broken by a change. 
And how do you ensure that the unit tests will cover the existing functionality and catch errors during large refactorings? Why, by adhering strictly to the principle of not adding code that is not covered by unit tests. That is easier said than done, as unit tests can be cumbersome to write, but keep in mind that the benefit they bring is not only at the micro level for the specific functionality under test, but at the macro level, enabling you to do some pretty funky stuff with your project architecture as development progresses.

Lowering dependencies
A Unit Test should by definition test a small unit of functionality. This is tricky. Classes tend to have dependencies on other classes, which in turn depend on still other classes. You may be able to create an instance of a Business Object without creating an instance of anything else, but what about cases where you want to test a method combining logic that spans several BOs, calls to the database, calls to remote APIs and websites, or simply access to a web context that is not available in a test?
The requirement to be able to write these kinds of unit tests without touching external systems forces you to construct your architecture in a loosely coupled way, programming towards interfaces and abstractions rather than the actual objects themselves. Classes should move towards having only one well-defined responsibility, which increases the cohesion between the classes of the system and makes them part of an easy-to-understand, powerful whole that is easy to weave together and reuse.
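A minimal sketch of what programming towards interfaces buys you in a test (IMailSender and AlarmService are made up for the example): because the class under test depends on an abstraction, the test can hand it a fake instead of a real mail server:

```csharp
// The class under test depends on an abstraction...
public interface IMailSender
{
    void Send(string to, string subject);
}

public class AlarmService
{
    private readonly IMailSender _mailSender;

    public AlarmService(IMailSender mailSender)
    {
        _mailSender = mailSender;
    }

    public void RaiseAlarm()
    {
        _mailSender.Send("admin@example.com", "Alarm!");
    }
}

// ...so the unit test can substitute a fake, keeping the test
// isolated from any real mail server.
public class FakeMailSender : IMailSender
{
    public int SendCount;
    public void Send(string to, string subject) { SendCount++; }
}

[Test]
public void TestRaiseAlarm_SendsMail()
{
    FakeMailSender fake = new FakeMailSender();
    new AlarmService(fake).RaiseAlarm();
    Assert.AreEqual(1, fake.SendCount);
}
```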

...that works
The first benefit that springs to people's minds when they hear the words Unit Testing is the verification that the system actually works correctly. As I have tried to outline above, the big impact of TDD on a project is more extensive than that, but the fact remains that the immediate result you will see from writing a unit test is that the code you have written works in the way outlined by the test. 

What is the cost of TDD and when is it not worth paying?
So, in summary: Test Driven Development is the dog's bollocks and should always be used in all projects. Thank you for reading and... HEY! What about the downside? Surely there is a price to pay?

There is, and the price is time. And time is money. Writing tests before you write code usually takes more time initially than just writing code. TDD requires a commitment at the team level in order to reap the full benefits from it, and initially you may have to spend time getting your team members up to speed.

Benefit talk again: The time spent on writing unit tests is time spent on building an asset: Your test suite. The gain is a long term one in the ability to refactor with confidence, fight software rot, reduce the number of bugs in the system and the time spent looking for them. As is the way with investments, you pay a high price in time initially to free up time later. 

Quick and dirty - There is another kind of investment you can make: paying a small price in time initially with the consequence of having to invest more time later. There are numerous occasions on which this makes perfect sense. The trick is to know when those are. 
The bottom line for software success is judged by the fulfillment of business requirements. Having a gorgeous test suite and fantastic structure but an unfinished system makes no sense at all. If the goal of the project is to produce something that is better than nothing in a short time you need to make compromises with the rigidity of practices you apply.
However, having an evolutionary project stagnate under software rot due to an endless series of quick fixes and shortcuts makes equally small sense. The concept of technical debt is a powerful metaphor for reasoning about these things. Don't take a loan if you can't afford to eventually pay it back.

The road to hell is paved with good intentions
Applying TDD can be difficult, and there are numerous things you can do with it that put you in a straitjacket rather than giving you wings. It is quite possible to devise inefficient tests that take ages to construct and ages to run while providing minimal benefit. That, however, is a question of implementation, and I will leave it for another day.

So, in summary: Test Driven Development is the dog's bollocks, but should be applied as part of an overall strategy in projects where the aim is to build solutions of high quality that are easy to maintain and expand.

I suspect that for a large number of people this post has been preaching to the choir, but it would be interesting to hear more opinions and experiences of when TDD is not worth it. My own views are tainted by my very positive experience of TDD so I am bad at taking the other side of the argument. Drop a comment either way!

The first post - About the blog

Abstract:
  • Hello!
  • Every post in this blog will have an abstract like this to make stuff quickly graspable for people googling.
  • This blog is mainly about TDD, which I love. It is also about .NET and architecture.
  • The target audience is people new to TDD, as well as experienced practitioners interested in software architecture.

Hi there internet wanderer and welcome to my blog!
Since this is the first post here comes the mandatory introductory snippet, setting the tone for the posts that will follow.

What's it about?
As you may have guessed from the title, TDD will play a central part in the musings of this blog. OMG and WTF will play a lesser part, sticking to the background. :)

Test Driven Development is a big passion of mine. In my work as a web developer I have practiced it daily for a long time, tackled obstacles with it, leaned on it, built both great and foolish things in its name, and felt its benefits through its presence and absence in various projects.

In this blog I will talk about the problems and solutions related to TDD I'm currently obsessing about, hopefully sparking discussion from other practitioners who've been there, and sparking ideas and questions in developers who haven't.

TDD is not something that lives in isolation however, and this blog will also be about software architecture, new technologies on the .NET platform and other interesting things. 

Who is the blog for?
TDD has become quite a fashionable doctrine to subscribe to in the past couple of years. For many organisations it is an integral part of the development process, and most books by learned men take the presence of a test suite for granted: something to lean on for refactoring, to measure code coverage with, and to use as a gauge of code quality.

For many developers, however, TDD is a buzzword, and writing unit tests is something they've heard they ought to be doing, but not something that's done in practice. Indeed, if you're not writing any unit tests, how do you begin? Most projects where unit testing is absent do not have an architecture that lends itself well to starting to write tests, so how are the developers in the project supposed to learn unit testing even if they want to? Unit testing in such projects often requires you to break existing dependencies, something that is hard to do at the best of times.

The only real way to learn something is through practice. This blog aims to give practical examples of common ways to write Unit Tests, targeted at people with limited experience. I will try to provide examples of the basics, and show what you need to do to start incorporating tests into both your existing and new projects.

For Test Driven people who are already flying, this blog aims to provide interesting material on architecture and ways to make your life and coding easier and more elegant. We'll see how that goes. :)