
Category Archives: Testing

Hasty Impressions: dotCover 1.1

This post is part of a continuing series chronicling my search for a .NET coverage tool.

Today I’m looking at the first candidate: JetBrains dotCover.

I tried dotCover 1.1, integrated with ReSharper 5.1 running in VS2008.

The Cost

A lifetime license, with 1 year of free upgrades, is $149 (marked down from $199) – a special introductory price.

This isn’t usurious, but considering that ReSharper C# edition, a tool that changes the way I work every single day, is $249, it’s enough to give me pause.

VS integration

[Screenshot: the “Cover with dotCover” item in the ReSharper test menu]

This is where I expected dotCover to shine, and it didn’t disappoint – the integration with Visual Studio (and with ReSharper) was excellent. The first thing I noticed was an extra “Cover with dotCover” item in the ReSharper test menu (triggered from the yellow and green ball things). I clicked it, and it ran my tests, bringing up the familiar Unit Test results window.

Once the tests ran, there was a pause while dotCover calculated the coverage info, and then the bottom pane filled in with coverage results: green/red bars by every method in the covered assemblies. Clicking on the methods warps to the source code, which is also highlighted – covered statements have a green background, and uncovered statements have red. In fact, every source file opened in the IDE has the highlighting.

[Screenshot: dotCover coverage results for the BookFinder tests]

Finding tests that cover code

The most interesting feature that dotCover has is the ability to identify which tests covered which lines of code. I’m not entirely sold on this, thinking it more of a gimmick than anything else. When I first heard about it, I thought “I don’t care which test covered which line, so long as the lines are covered. I’m here to see what isn’t covered.” Yes, I think in italics sometimes.

Still, I gave it a go. Right-clicking on a line of code (once coverage has been run) brought up a tiny menu listing the tests that cover that line. I don’t know why, but it made me happy. I suppose one could use this from time to time to make sure a new test case is exercising what it’s supposed to, but normally I can tell that by how a new test fails, or by what I’ve typed just before the test starts working. Worst case, I could always debug through a single test – something made very easy by the ReSharper test runner.


There was one aspect of this feature that I could imagine someone using – the ability to run the tests that cover a line of code. All that’s needed is to hit the “play” button on the “Show Covering Tests” popup. If the full suite of tests takes a very long time to run, this could be useful. Still, it doesn’t do much for me personally – if my tests took that long to run, I’d try to speed them up. If nothing else, I would probably just run the test fixture designed to test the class or method in question, instead of my entire bolus of tests.

So, running tests that cover some code is a cool feature, but it’s not that useful. I’d rather see something like the automatic test runs and really cool “what’s covered” information provided by Mighty-Moose.

Command Line Execution

Covering an application from the command line is pretty straightforward. I used this command to see what code my BookFinder unit tests exercised:

dotcover cover /TargetExecutable=nunit-console.exe /TargetArguments=.\BookFinder.Tests.dll /Output=dotCoverOutput /Filters=+:BookFinder.Core

BookFinder.Core is the only assembly I was interested in – it holds the business logic. “cover” takes multiple include and exclude filters, even using wildcards for assemblies, classes, and methods.

One quite cool feature is to use the help subcommand to generate an XML configuration file, which can be used to specify the parameters for the cover command:

dotCover help cover coverSettings.xml

will create a coverSettings.xml file that can be edited to specify the executable, arguments, and filters. Then use it like so:

dotCover cover coverSettings.xml

without having to specify the same batch of parameters all the time.

Joining Coverage Runs

Multiple coverage snapshots – perhaps from running tests on different assemblies, or just from performing different test runs on the same application – can be merged together into a comprehensive snapshot:

dotCover merge /Source snapshot1;snapshot2 /Output mergedsnapshot

Just include all the snapshots, separated by semicolons.

XML Report

After generating snapshots and optionally merging them, they can be turned into an XML report using the report command:

dotcover report /Source=.\dotCoverOutput /Output=coverageReport.xml

There are options to generate HTML and JSON as well.

Note that if there’s only one snapshot, the “merge” step is not needed. In fact, there’s even a separate analyse command that will cover and generate a report in one go.

No Auto-Deploy

There’s no auto-deploy for dotCover – it needs to be installed. And since it’s a plugin, Visual Studio is a requirement. This is a small inconvenience for developers and our build servers. Having to put VS on all our test machines is a bit of a bigger deal – definitely a strike against dotCover.

TypeMock Isolator support in the future

dotCover 1.1 doesn’t integrate with Isolator 6. Apparently dotCover’s hooks are a little different from those of many other profilers (NCover, PartCover, …). I’ve been talking to representatives from both TypeMock and JetBrains, though, and they tell me that the problem is solved, and an upcoming release of Isolator will integrate with dotCover. Even better, a pre-release version that supports the latest dotCover EAP is available now.

IIS

dotCover covers IIS, but only by using the plugin – this means that the web server has to have Visual Studio and dotCover installed, and it’s a manual step to invoke the coverage. In the JetBrains developer community there’s a discussion about command-line IIS support, but no word from JetBrains staff on when this might come.

Statement-level coverage

As Kevin Jones notes, dotCover reports coverage of statements, not sequence points. This means that a line like this:

return value > 10
      ? Colors.Red
      : Colors.White;

will report as completely covered, even if only one of the two branches is ever executed – to get an accurate coverage report for this code, the ?: would have to be replaced by an if-else block.
This isn’t necessarily a major strike against the tool, but it’s worth knowing, as it will skew the results somewhat.
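For illustration, here’s what that rewrite might look like – just a sketch, and the wrapping method name and return type are my own assumptions, not part of the original example:

// With if/else, each branch is its own statement, so dotCover can mark the
// Colors.Red and Colors.White paths as covered or uncovered independently.
public Color PickColor(int value)
{
    if (value > 10)
    {
        return Colors.Red;
    }

    return Colors.White;
}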

Conclusion

Pros:

  • awesome IDE integration
  • XML/HTML/JSON reports
  • report merging
  • IIS profiling

Cons:

  • moderate price
  • no auto-deploy
  • no Isolator support—yet

Overall, I like the tool. I’m a little disappointed by the lack of auto-deploy and the inability to run IIS coverage from the command line, but those problems can be worked around. I was very impressed with the in-IDE support as well as the automatically generated configuration files using the “help” subcommand.
Ordinarily, I’d say the current lack of Isolator support is a deal-breaker, but I recently demoed the product to some colleagues, and they went bonkers for the IDE integration. I guess I’ll be writing JetBrains and TypeMock looking for the betas.

Can you cover me? Looking for a .NET coverage tool

Recently at the Day Job, my boss’s boss has been on a “code confidence” kick. We’ve always done various levels of automated and manual unit, integration, issue, system, and regression testing, but he’s looking to improve the process. Part of this push involves getting better at measuring which tests exercise what parts of the code. We want to know this for the usual reasons: we can identify gaps in our testing, or more likely find opportunities to cover some areas earlier in the testing cycle. It’d be nice to know that a particularly critical section of code has been adequately exercised by the per-build unit tests, without having to wait for nightly integration testing or wait even longer for a human to get their grubby mitts on it.

To that end, I’m looking for a .NET coverage tool to dazzle us with tales of test runs. Over the next little while, I’ll look at a few candidates, summarize my findings, and hopefully come up with a winner.

Considerations

Here are some factors that will influence me. Some of these may be negotiable, if a candidate really shines in other areas.

  • We’d like to see coverage information in our build reports, so the tool should run from the command line.
  • It’d be easier to put the coverage info in our build reports if the coverage reports were in XML.
  • I really prefer a product that has an auto-deploy, so it can be bundled with the source tree and new developers or build servers just work. You may remember the pains I went through to auto-deploy TypeMock Isolator.
  • While I’m on the subject, one of our products uses Isolator as its mocking framework, so the coverage tool should be able to link with TypeMock Isolator.
  • We have a web services layer, which will be exercised by unit tests, but if we could gather stats on the layer as it’s being exercised by the client-facing portion, that would be gravy. To that end, it should be possible to cover IIS.
  • When I used TestDriven.NET + NCover a few years ago, I enjoyed being able to quickly see what my tests covered. This isn’t a requirement of our current initiative, but IDE integration would be a bonus.
  • Price is a factor. Money’s available, but why spend if you don’t have to? Or at least, why not pay less for an equivalent product?

The Candidates

Googling has led me to these candidates, which I’ll be examining in the next little while:

Update: I picked one.

Growing an MVVM Framework in 2003, part IV – Unit Tests

This post is from a series on my experiences starting to grow an MVVM Framework in .NET 1.1.

Full source code can be found in my Google Code repository.

In parts 1 and 3 (and 2, but I like part 3 better) I showed a tiny “framework” for binding View properties and events to properties and methods on a ViewModel. In addition to avoiding the tedium and noise of wiring up events by hand, I’d hoped to implement a structure that would make unit testing easier. Let’s see how that went.

Event handlers just work. Almost

Recall that event handlers are defined on the ViewModel as plain old methods that happen to take a specific set of arguments – usually object and something that derives from EventArgs. This means that nothing special has to be done in order to exercise the methods during a unit test. The test doesn’t have to trick the ViewModel into registering with an event or anything. The test just calls the method. And if the method doesn’t care much about its arguments – as FindClick doesn’t – you can pass in nonsense:

public class BookListViewModel
{
    public void FindClick(object sender, EventArgs e)
    {
        ICollection books = bookDepository.Find(TitleText.Value);
        IList bookListItems = BookListItems.Value;

        bookListItems.Clear();
        foreach ( string book in books )
        {
             bookListItems.Add(book);
        }
    }
}

public class BookListViewModelTests
{
    [Test]
    public void CallFindClick()
    {
        vm.FindClick(null, null);
    }
}

Of course, this isn’t much of a test. Usually we’ll want to set up some initial state for the ViewModel, and verify that the correct actions have been taken. In fact, as things stand, the property fields will all be null, so TitleText.Value and BookListItems.Value will error out.

Putting something behind the properties

Most event handlers will need to access the properties on the ViewModel, so the tests must hook up the properties.

Provide stub properties

Last time I mentioned that the PropertyStorageStrategy would bring value. This is it. Recall the definitions of the ListProperty and the PropertyStorageStrategy:

public class ListProperty: Property
{
    public ListProperty(PropertyStorageStrategy storage): base(storage)
    {}

    public IList Value
    {
        get { return (IList) storage.Get(); }
        set { storage.Set(value); }
    }
}

 public interface PropertyStorageStrategy
 {
     object Get();
     void Set(object value);
 }

The ListProperty (and BoolProperty and StringProperty) merely consult a PropertyStorageStrategy to obtain a value and cast it to the correct type. Providing a dumb strategy that, instead of proxying a property on a View control, just holds a field will produce a property that can be used in tests:

public class ValuePropertyStrategy: PropertyStorageStrategy 
{
      private object obj;

      public ValuePropertyStrategy(object initialValue)
      {
         this.obj = initialValue;
      }

      public void Set(object value) { obj = value; }
      public object Get() { return obj; }
}

Then the test fixture setup can bind properties to the ViewModel:

[SetUp]
public void SetUp()
{
    vm = new BookListViewModel(new Control(), new FakeBookDepository());
    vm.TitleText = new StringProperty(new ValuePropertyStrategy(""));
    vm.BookListItems = new ListProperty(new ValuePropertyStrategy(new ArrayList()));
    ...
}

And tests can be constructed to provide initial property values (if the default isn’t good enough) and interrogate them afterward.

[Test]
public void FindClick_WithTitleG_FindsEndersGame()
{
    vm.TitleText.Value = "G";
    vm.FindClick(null, null);

    Assert.IsTrue(vm.BookListItems.Value.Contains("Ender's Game"));
}

Auto-wiring the properties

This works, and pretty well. There’s not that much noise associated with setting up the fake properties. Still, why should there be any? After so much trouble to remove the tedious wiring up from the production code, it seems wrong to leave it in the testing code.
Also, I’m against anything that adds a barrier to writing tests. And having to hand-wire a few (or a dozen) properties before you can start testing is definitely a barrier.

So, let’s write a little code to handle the tedium for us.

public class ValuePropertyBinder
{
      public static void Bind(ViewModelBase viewModel)
      {
          foreach ( FieldInfo field in viewModel.PropertyFields() )
          {
              ValuePropertyStrategy propertyStorageStrategy = new ValuePropertyStrategy(MakeStartingValue(field.FieldType));

              ConstructorInfo propertyConstructor = field.FieldType.GetConstructor(new Type[] {typeof (PropertyStorageStrategy)});
              object propertyField = propertyConstructor.Invoke(new object[] {propertyStorageStrategy});
              field.SetValue(viewModel, propertyField);
          }
      }

      private static object MakeStartingValue(Type fieldType)
      {
         Type propertyType = fieldType.GetProperty("Value").PropertyType;
         
         if ( propertyType == typeof(IList) ) { return new ArrayList(); }
         if ( propertyType == typeof(string) ) { return ""; }
         if ( propertyType == typeof(bool) ) { return false; }
         else
         { 
              throw new NotImplementedException("no known starting value for type " + propertyType);
         }
      }
}

This is very similar to the wiring we’ve seen before – find property fields, construct an object to implement the property, and hook it up. The only thing likely to need attention in the future is MakeStartingValue. A new property type (like DateTime) will require an expansion to the if chain, but that should be very infrequent.
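For example, a DateTime-backed property might be handled like this – a hypothetical sketch; the DateTime.MinValue default is my choice, not something from the original code:

private static object MakeStartingValue(Type fieldType)
{
   Type propertyType = fieldType.GetProperty("Value").PropertyType;

   if ( propertyType == typeof(IList) ) { return new ArrayList(); }
   if ( propertyType == typeof(string) ) { return ""; }
   if ( propertyType == typeof(bool) ) { return false; }
   // The new case: give DateTime-backed properties a starting value too.
   if ( propertyType == typeof(DateTime) ) { return DateTime.MinValue; }

   throw new NotImplementedException("no known starting value for type " + propertyType);
}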

Now it’s much easier to use the ViewModel in tests:

[SetUp]
public void SetUp()
{
   vm = new BookListViewModel(new Control(), new FakeBookDepository());
   ValuePropertyBinder.Bind(vm);
}

An alternative: brute force and ignorance

This approach didn’t occur to me until the project was over. Sigh.
The production code works by binding the ViewModel to a View. The test setup could do that. I’d taken pains to keep any kind of code or behaviour out of the View, so there shouldn’t be any side effects, and there’s no need to show any of the GUI elements. Honestly, the technical downsides seem pretty limited.

Even so, I don’t like this solution. For the BookFinder application, the View is simple enough that I’m confident the approach would work, but I have concerns over using it in a more complex application. Also, I prefer to reduce the amount of auxiliary production code that’s used in tests. On the off chance that something does go wrong, it’s nice to have a small set of production code to look at.

Summing up

With the ValuePropertyBinder (or the much-maligned “just bind the ViewModel to the actual View” approach), tests are really easy to set up and run. As easy as writing the production code. And they’re readable. The only troublesome dependencies are the models. Totally worth the effort.

AutoTest.Net updated – now (and then) notices broken builds

I received a useful comment on Friday’s post about AutoTest.Net. In the wee hours of Saturday, Greg Young wrote to say

It should detect broken builds without any problem. We have been running it daily for about 1.5 months.

Perhaps you could grab me via email and reproduce it?

Well, I wasn’t going to pass up that offer. Off to GMail!

  • 7:15 – I grabbed him
  • 7:20 – he was making specific requests for additional information: the output of test runs through the console runner, and the like
  • 8:00 – he had dived into the code to verify that things were working as they should, and asked for a sample project that exhibited the bug
  • 8:20 – I sent the code
  • 8:31 – I e-mailed that I’d accidentally sent a project that compiled
  • 8:34 – Greg reproduced the problem
  • 8:54 – he sent me a replacement .zip file
  • 9:04 – it worked!

As soon as I broke the compilation, the monitor lit up, showing me which project failed and where:

[Info] 'AutoTest.Console.ConsoleApplication' Preparing build(s) and test run(s)
[Info] 'AutoTest.Console.ConsoleApplication' Error: D:\bconrad\Documents\Source\BlogExamples\2010-11-autotest\BookFinder\BookFinder.Core\BookListViewModel.cs(50,17) CS1002: ; expected [D:\bconrad\Documents\Source\BlogExamples\2010-11-autotest\BookFinder\BookFinder.Core\BookFinder.Core.csproj]
[Info] 'AutoTest.Console.ConsoleApplication' Ran 1 build(s) (0 succeeded, 1 failed) and 0 test(s) (0 passed, 0 failed, 0 ignored)

It turns out that the bug had already been fixed on the trunk version of the code, but for some reason hadn’t been built into the Windows installer. Turnaround time: 1 hour 49 minutes from my initial e-mail, and that included:

  • me drifting off to other tasks between e-mails, increasing delays
  • a session of trying to work around GMail hating the zip file I tried to send
  • a delay imposed by my having sent a bad test project

I’m sure those things added a good half hour to the required time.

Then he spent another 40 minutes on a non-existent problem that I reported. I’d left an older AutoTest.Net WinForms monitor running during the debugging, so when things finally settled down, I got a pair of toasts from Growl – one reporting build failures, and one reporting successful builds when there weren’t any.
When I discovered that, Greg was already installing a new Growl for Windows to try it out. And he was very gracious about my error and his wasted time.

I’m hardly the first to point it out, but this is one of the great things about open software. It’s great getting that kind of service so quickly. And on a weekend no less.

Will this encourage me to use AutoTest.Net?

Sure. My primary complaint with it has been resolved.
Moreover, I’d be even more inclined to see what comes of Mighty Moose, now that I see the dedication of the developers behind it.

Hasty impressions: Continuous testing using AutoTest.NET

Rinat Abdullin recently posted about Mighty Moose and AutoTest.NET, two projects for continuous testing in the .NET/Mono space. My interest was immediately piqued, as I’m a huge fan of continuous testing. I’ve been using py.test to run my Python unit tests for years now, almost solely because it offers this feature.

I’m taking a look at AutoTest.Net first. Mostly because it’s free. If I’m going to use something at home, it won’t be for-pay, and the Day Job has been notoriously slow at shelling out for developer tools.

Update: there was a bug that had been fixed on trunk, but not in the installer that I used. AutoTest.Net is better at detecting broken builds than I report below.

Setting up AutoTest.NET

Download and installation were straightforward. I opted to use the Windows installer package, AutoTest.Net-v1.0.1beta (Windows Installer).zip. I just unzipped, ran the MSI, let it install both VS 2008 and VS 2010 Add-Ins (the other components are required, it seems), and that was that.

Then I cracked open the configuration file (at c:\Program Files\AutoTest.Net\AutoTest.config). I just changed two entries:

  • BuildExecutable, and
  • NUnitTestRunner

That’s it. Well, for the basic setup.

Running the WinForms monitor

I opened a command prompt to the root of a small project and ran the WinForms monitor, telling it to look for changes in the current directory.

& 'C:\Program Files\AutoTest.Net\AutoTest.WinForms.exe' .

The application started, presenting me with a rather frightening window.

I mean, it makes sense. I have neither built nor run yet, so what did I expect? Still, I was taken aback by the plainness of it. Only temporarily daunted, I then hit the tiny unlabelled button in the northeast corner and got a new window. This was less scary.

Everything seemed to be in order. I hadn’t specified MS Test or XUnit runners, nor a code editor. It says it’s watching my files. So let’s test it.

Mucking with the source

It’s supposed to watch my source changes and Do The Right Thing. Let’s see about that.

A benign modification to one test file

I changed the text in one of my test files. No functionality was changed – it was purely cosmetic. AutoTest.Net noticed, rebuilt the solution, and ran the tests! Pretty slick. Things moved quickly, but here’s what I saw from the application:

A benign modification to one “core” file

Next I changed the text in one of the core files – this file is part of a project that’s referenced by the BookFinder GUI project, and the test project. Again, this was a cosmetic change only, just to see what AutoTest.NET would do.
It did what it should – built the three projects and ran the tests. See?

A core change that breaks a test

So, now I’ll modify the core code in a way that breaks a test.
It picks up the change, builds, tests, and does a really nice job of showing me the failure. I see the test that failed, and when I click it, am presented with the stack trace, including hyperlink to the source.

Unfortunately, clicking the hyperlink didn’t go so well:

That was a little disappointing. On the brighter side, hitting “Continue” did continue, with no seeming ill-effects.

Redemption

Confession time. I hadn’t checked the CodeEditor section of the configuration file. As it turns out, it had a slightly different path to my devenv than the correct one. I fixed up the path and tried again. This time, clicking on the hyperlink opened devenv at the right spot.

So the problem was ultimately my fault, but I can’t help but wish for more graceful behaviour – how about an “I couldn’t find your editor” dialogue? Ah, well. The product’s young. Polish will no doubt come.

I repaired the code that broke the tests, and AutoTest.Net was happy again after rebuilding and rerunning the tests.

Syntax Error

For my last test, I decided to actually break the compile. This was kind of disappointing. It claimed to run the 3 builds and the tests, and said that everything passed. I’m not sure why this would be – I was really hoping for an indication that the compilation failed, but nope. Everything was rainbows and puppies. Spurious rainbows and puppies.

The VS Add-In

There’s an add-in. You can activate it under the “Tools” menu. It looks and behaves like the WinForms app.

The Console Monitor

I am used to running py.test in the console, so I thought I’d check out AutoTest’s console monitor next. I started it up, made a benign change, and then made a test-breaking change. Here’s what I saw:

[Info] 'Default' Starting up AutoTester
[Info] 'AutoTest.Console.ConsoleApplication' Starting AutoTest.Net and watching "." and all subdirectories.
[Warn] 'AutoTest.Console.ConsoleApplication' XUnit test runner not specified. XUnit tests will not be run.
[Info] 'AutoTest.Console.ConsoleApplication' Tracker type: file change tracking
[Warn] 'AutoTest.Console.ConsoleApplication' MSTest test runner not specified. MSTest tests will not be run.
[Info] 'AutoTest.Console.ConsoleApplication'
[Info] 'AutoTest.Console.ConsoleApplication' Preparing build(s) and test run(s)
[Info] 'AutoTest.Console.ConsoleApplication' Ran 3 build(s) (3 succeeded, 0 failed) and 2 test(s) (2 passed, 0 failed, 0 ignored)
[Info] 'AutoTest.Console.ConsoleApplication'
[Info] 'AutoTest.Console.ConsoleApplication' Preparing build(s) and test run(s)
[Info] 'AutoTest.Console.ConsoleApplication' Ran 3 build(s) (3 succeeded, 0 failed) and 2 test(s) (1 passed, 1 failed, 0 ignored)
[Info] 'AutoTest.Console.ConsoleApplication' Test(s) failed for assembly BookFinder.Tests.dll
[Info] 'AutoTest.Console.ConsoleApplication'     Failed -> BookFinder.Tests.BookListViewModelTests.FindClick_WithTitleG_FindsEndersGame:
[Info] 'AutoTest.Console.ConsoleApplication'

Not bad, but I have no stack trace for the failed test. Just the name. I’m a little sad to lose functionality relative to the WinForms runner. I know I wouldn’t be able to click on source code lines, but still.

Gravy – Hooking up Growl

Undeterred by the disappointing performance in the Syntax Error test, I soldiered on. I use Growl for Windows for notifications, and I was keen to see the integration. I went back to the configuration file and input the growlnotify path. While I was there, I set notify_on_run_started to false (after all, I know when I hit “save”), and notify_on_run_completed to true. Then I fixed my compile error and saved the file.
In addition to the usual changes to the output window, I saw some happy toast:

Honestly, with a GUI or text-based component around, I’m not sure how much benefit this will be, but I guess I can minimize the main window and, so long as tests keep passing, get some feedback. Still, it’s kind of fun.

Impressions

I really like the idea of this tool. I love the idea of watching my code and continuously running the tests. The first steps are very good – I like the clickable line numbers for locating my errors, and I think the Growl support is cute, but probably more of a toy than an actual useful feature.

Will I Use It?

Not now, and probably never at the Day Job. The inability to detect broken builds is pretty disappointing.
Also, at work, I have ReSharper to integrate my unit tests. I’ve bound “rerun the previous test set” to a key sequence, so it’s just as easy for me to trigger as it is to save a file.

At home? Maybe. If AutoTest.Net starts noticing when builds fail, then I probably will use it when I’m away from ReSharper and working in .NET.

Auto-deploying TypeMock Isolator Without Trashing the Installation

At the Day Job, we use TypeMock Isolator as the isolation framework for the client portion of our flagship product. Historically we’d used version 3, but recently I had the opportunity to upgrade the code and build system to use the 2010 (or “version 6”) edition.

Backward Compatibility

I was very pleased to see that no code changes were required with the upgrade. Sure, we’d like to start using the new Arrange-Act-Assert API, and to trade in the method name strings for the type-safe lambda expressions, but I didn’t want to have to run back and convert everything today. And I didn’t. Typemock Isolator appears to be backward compatible (at least as far as the feature set we use goes).

Auto-Deployment

In fact, the whole exercise of moving up to 2010 would’ve been over in almost no time were it not for one thing—we need to auto-deploy Isolator. The reasons are several:

  • we have many dozen people working on the product, spread across four teams and three offices all over the world, so coordinating the installation is tricky
  • some people have a need to occasionally build the product, but don’t actively develop it – imposing an install on them seems rude
  • some of our developers actively oppose unit testing, and I didn’t want to give them any more ammunition than I had to

We’d had a home-grown auto-deploy solution working with Isolator 3, but it was a little clunky and some of the details of the Isolator install had changed, so it wasn’t really up to auto-deploying 6. Fortunately, I found a Typemock Insider blog post about auto-deploying.

We use Apache Ant for our builds, but it was no trouble to shell out to an MSBuild task to auto-deploy Isolator:

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TypeMockLocation>path\to\TypeMock\Isolator\files</TypeMockLocation>
    <NUNIT>path\to\nunit-console.exe</NUNIT>
  </PropertyGroup>  

  <Import Project="$(TypeMockLocation)\TypeMock.MSBuild.Tasks"/>

  <Target Name="RegisterTypeMock">
    <TypeMockRegister Company="MyCompany" License="XXX-XXX" AutoDeploy="true"/> 
    <TypeMockStart/>
    <Exec ContinueOnError="false" Command="$(NUNIT) $(TestAssembly)"/>
    <TypeMockStop Undeploy="true"/>
  </Target>
 </Project>

Build Server Licenses

This worked really well – I was testing the tests on my local machine, watching Isolator auto-deploy and auto-undeploy. Everything was great, until I realized: we have two licenses—one for developers, and one for build servers. It only seemed right to use the appropriate one depending on whether we were building on a developer’s machine or a build server. Fortunately, all our build servers set a specific environment variable, so it was a simple matter to have MSBuild pick the correct one.

Undeploying Isolator Mangles the Installed Instance

Even though we’re providing a mechanism for auto-deploying Isolator, some developers will prefer to install it in order to use the Visual Studio AddIn to aid debugging. I’d heard that undoing the auto-deployment could wreak havoc with the installed version of Typemock Isolator, and that it’s sometimes necessary to repair the installed instance. A little testing, with the help of a coworker, showed this to be the case. Worse, it appeared that the auto-deploy/undeploy broke his ability to run the product in the IDE – as soon as the process started, it would end, with a “CLR error 80004005”. Disabling the Isolator AddIn made the error go away.

So it looked like we’d need to figure out how not to break installed Isolator instances while still supplying auto-deployment when it’s needed. Searching found nothing promising, so I resorted to Registry spelunking. Unfortunately, the installed Isolator and auto-deployed Isolator make very similar Registry entries – there was nothing that I felt confident basing “Is Isolator installed?” on. After poking around and coming up short, I fell back to using the filesystem. By default, Isolator is installed in %ProgramFiles%\TypeMock\Isolator\6.0, so I decided to use that as the determinant. I’d feel dirty doing this for code destined for a customer’s site, but I can live with telling our own developers that if they choose to install Isolator, they should install it in the default location or face the consequences.

Still, if anyone comes up with a more reliable way to determine if Isolator is installed, please post it in the comments.

Putting it all Together

Here’s the MSBuild file I ended up with. It uses the correct license based on machine type, and only auto-deploys/undeploys when Isolator isn’t installed – existing installations are left alone.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TypeMockLocation>path\to\TypeMock\Isolator\files</TypeMockLocation>
    <NUNIT>path\to\nunit-console.exe</NUNIT>

    <!-- Used to detect TypeMock installs. -->
    <UsualTypeMockInstallDir>$(ProgramFiles)\TypeMock\Isolator\6.0</UsualTypeMockInstallDir>

    <!-- 
         Only deploy Typemock if it's not already in the usual install dir.

         If developers install Typemock, they should install it in the
         default location in order to help the build system decide
         whether or not we need to auto-deploy (since auto-deploy and
         undeploy can corrupt the TypeMock Visual Studio Add-In, and
         interfere with the ability to run programs in the IDE.
      -->
    <DeployTypeMock>false</DeployTypeMock>
    <DeployTypeMock Condition="!Exists('$(UsualTypeMockInstallDir)')">true</DeployTypeMock>

    <License>XXX-XXX</License>
    <License Condition="'$(BuildServer)' != ''">YYY-YYY</License>
  </PropertyGroup>
  <Import Project="$(TypeMockLocation)\TypeMock.MSBuild.Tasks"/>
  <Target Name="RegisterTypeMock">
    <TypeMockRegister Company="MyCompany" License="$(License)" AutoDeploy="$(DeployTypeMock)"/> 
    <TypeMockStart/>
    <Exec ContinueOnError="false" Command="$(NUNIT) $(TestAssembly)" />
    <TypeMockStop Undeploy="$(DeployTypeMock)"/>
  </Target>
</Project>

Automated Testing using App Engine Service APIs (and a Memcaching Memoizer)

I’m a fan of Test-driven development, and automated testing in general. As such, I’ve been trying to ensure that the LibraryHippo code has an adequate set of automated tests before deploying new versions.

Importing Google App Engine Modules

Unfortunately, testing code that relies on the Google App Engine SDK is a little tricky, as I found when working with one of the LibraryHippo entities. There’s an entity called a Card, which extends db.Model and represents a user’s library card.

The Card definition is not entirely unlike this:

class Card(db.Model):
    family = db.ReferenceProperty(Family)
    number = db.StringProperty()
    name = db.StringProperty()
    pin = db.StringProperty()
    library = db.ReferenceProperty(Library)

    def pin_is_valid(self):
        return self.pin != ''

Unfortunately, testing this class isn’t as straightforward as one would hope. Suppose I have this test file:

from card import Card

def test_card_blank_pin_is_invalid():
    c = Card()
    c.pin = ''
    assert not c.pin_is_valid()

It fails miserably, spewing out a string of import errors. Here’s the tidied-up stack:

> from card import Card
> from google.appengine.ext import db
> from google.appengine.api import datastore
> from google.appengine.datastore import datastore_index
> from google.appengine.api import validation
> import yaml
E ImportError: No module named yaml

Not so good. Fortunately, it’s not that hard to find out what needs to be done in order to make the imports work:

import sys
import dev_appserver
sys.path = dev_appserver.EXTRA_PATHS + sys.path 

from card import Card

def test_card_blank_pin_is_invalid():
    c = Card()
    c.pin = ''
    assert not c.pin_is_valid()

Now Python can find all the imports it needs. For a while this was good enough, since I wasn’t testing any code that hit the datastore or actually used any of the App Engine Service APIs.

Running the App Engine Service APIs

However, I recently found a need to use Memcache to store partially-calculated results and decided (like everyone else) to write a memoizing decorator to do the job. There’s enough logic in my memoizer that I felt it needed an automated test. I tried this:

import sys
import dev_appserver
sys.path = dev_appserver.EXTRA_PATHS + sys.path 

from google.appengine.api import memcache
from gael.memcache import *

def test_memoize_formats_string_key_using_kwargs():
    values = [1, 2]
    @memoize('hippo %(animal)s zebra', 100)
    def pop_it(animal):
        return values.pop()

    result = pop_it(animal='rabbit')
    assert 2 == result

    cached_value = memcache.get('hippo rabbit zebra')
    assert 2 == cached_value

(gael is Google App Engine Library – my extension/utility package – as it grows and I gain experience, I may spin it out of LibraryHippo to be its own project.) Again, it failed miserably. Here’s a cleaned-up version of the failure:

> result = pop_it(animal='rabbit')
> cached_result = google.appengine.api.memcache.get(key_value)
> self._make_sync_call('memcache', 'Get', request, response)
> return apiproxy.MakeSyncCall(service, call, request, response)
> assert stub, 'No api proxy found for service "%s"' % service
E AssertionError: No api proxy found for service "memcache";

This was puzzling. All the imports were in place, so why the failure? This time the answer was a little harder to find, but tenacious searching paid off, and I stumbled on a Google Group post called Unit tests / google apis without running the dev app server. The author had actually done the work to figure out what initialization code had to be run in order to have the Service APIs work. The solution relied on hard-coded paths to the App Engine imports, but it was obvious how to combine it with the path manipulation I used earlier to produce this:

import sys

from dev_appserver import EXTRA_PATHS
sys.path = EXTRA_PATHS + sys.path 

from google.appengine.tools import dev_appserver
from google.appengine.tools.dev_appserver_main import ParseArguments
args, option_dict = ParseArguments(sys.argv) # Otherwise the option_dict isn't populated.
dev_appserver.SetupStubs('local', **option_dict)

from google.appengine.api import memcache
from gael.memcache import *

def test_memoize_formats_string_key_using_kwargs():
    values = [1, 2]
    @memoize('hippo %(animal)s zebra', 100)
    def pop_it(animal):
        return values.pop()

    result = pop_it(animal='rabbit')
    assert 2 == result

    cached_value = memcache.get('hippo rabbit zebra')
    assert 2 == cached_value

There’s an awful lot of boilerplate here, so I tried to clean up the module, moving the App Engine setup into a new module in gael:

import sys

def add_appsever_import_paths():
    from dev_appserver import EXTRA_PATHS
    sys.path = EXTRA_PATHS + sys.path 

def initialize_service_apis():
    from google.appengine.tools import dev_appserver

    from google.appengine.tools.dev_appserver_main import ParseArguments
    args, option_dict = ParseArguments(sys.argv) # Otherwise the option_dict isn't populated.
    dev_appserver.SetupStubs('local', **option_dict)

Then the top of the test file becomes

import gael.testing
gael.testing.add_appsever_import_paths()
gael.testing.initialize_service_apis()

from google.appengine.api import memcache
from gael.memcache import *

def test_memoize_formats_string_key_using_kwargs():
    ...

The Decorator

In case anyone’s curious, here’s the memoize decorator I was testing. I needed something flexible, so it takes a key argument that can either be a format string or a callable. I’ve never cared for positional format arguments – not in Python, C#, Java, nor C/C++ – so both the format string and the callable use the **kwargs to construct the key. I’d prefer to use str.format instead of the % operator, but not until App Engine moves to Python 2.6+

# The decorator needs the standard logging module and the App Engine memcache service.
import logging

import google.appengine.api.memcache


def memoize(key, seconds_to_keep=600):
    def decorator(func):
        def wrapper(*args, **kwargs):
            if callable(key):
                key_value = key(args, kwargs)
            else:
                key_value = key % kwargs

            cached_result = google.appengine.api.memcache.get(key_value)
            if cached_result is not None:
                logging.debug('found ' + key_value)
                return cached_result
            logging.info('calling func to get '  + key_value)
            result = func(*args, **kwargs)
            google.appengine.api.memcache.set(key_value, result, seconds_to_keep)
            return result
        return wrapper
    return decorator

Faking out Memcache – Unit Testing the Decorator

The astute among you are probably thinking that I could’ve saved myself a lot of trouble if I’d just faked out memcache and unit tested the decorator instead of trying to hook everything up for an integration test. That’s true, but at first I couldn’t figure out how to do that cleanly, and it was my first foray into memcache, so I didn’t mind working with the service directly.

Still, the unit testing approach would be better, so I looked at my decorator and rebuilt it to use a class rather than a function. It’s my first time doing this, and it’ll probably not be the last – I really like the separation between initialization and execution that the __init__/__call__ methods give me; I think it makes things a lot easier to read.

def memoize(key, seconds_to_keep=600):
    class memoize():
        def __init__(self, func):
            self.key = key
            self.seconds_to_keep = seconds_to_keep
            self.func = func
            self.cache = google.appengine.api.memcache

        def __call__(self, *args, **kwargs):
            if callable(self.key):
                key_value = self.key(args, kwargs)
            else:
                key_value = self.key % kwargs

            cached_result = self.cache.get(key_value)
            if cached_result is not None:
                logging.debug('found ' + key_value)
                return cached_result
            logging.info('calling func to get '  + key_value)
            result = self.func(*args, **kwargs)

            self.cache.set(key_value, result, self.seconds_to_keep)
            return result

    return memoize

Then the test can inject its own caching mechanism to override self.cache:

class MyCache:
    def __init__(self):
        self.cache = {}

    def get(self, key):
        return self.cache.get(key, None)

    def set(self, key, value, *args):
        self.cache[key] = value

def test_memoize_formats_string_key_using_kwargs():
    values = [1, 2]
    @memoize('hippo %(animal)s zebra', 100)
    def pop_it(animal):
        return values.pop()

    cache = MyCache()
    pop_it.cache = cache
    result = pop_it(animal='rabbit')
    assert 2 == result

    cached_value = cache.get('hippo rabbit zebra')
    assert 2 == cached_value

And that’s it. Now I have a unit-tested implementation of my memoizer and two new helpers in my extension library.