Following up the last post about our process, here’s how we do our development. During development we try to use as many XP practices as we can.
We have Continuous Integration using CruiseControl.NET, which also runs all our unit tests with NUnit. We have also set it up to show code coverage using NCoverExplorer (currently 87%). We have two targets per project: one builds and runs the tests whenever something is committed to our Subversion repository, and the other does the same thing every night and also deploys to our test server.
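As a rough sketch of what that setup can look like, here is a minimal ccnet.config with the two triggers described above. All names, paths, and URLs are placeholders, not our actual configuration:

```xml
<cruisecontrol>
  <!-- Target 1: build and test on every commit -->
  <project name="OurProject-CI">
    <sourcecontrol type="svn">
      <trunkUrl>http://svnserver/repo/trunk</trunkUrl>
      <workingDirectory>C:\build\ourproject</workingDirectory>
    </sourcecontrol>
    <triggers>
      <!-- Poll Subversion for new commits -->
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <nant>
        <buildFile>project.build</buildFile>
        <targetList>
          <target>build-and-test</target>
        </targetList>
      </nant>
    </tasks>
  </project>

  <!-- Target 2: nightly build, test, and deploy to the test server -->
  <project name="OurProject-Nightly">
    <triggers>
      <scheduleTrigger time="23:00" buildCondition="ForceBuild" />
    </triggers>
    <tasks>
      <nant>
        <buildFile>project.build</buildFile>
        <targetList>
          <target>build-test-and-deploy</target>
        </targetList>
      </nant>
    </tasks>
  </project>
</cruisecontrol>
```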
We have automated acceptance tests using Selenium. Unfortunately we haven’t succeeded in making our nightly build target run them after the deployment as we intended; instead we run them manually when we come in in the morning and once during lunch. For some reason they time out nine times out of ten when we have the build server start them…
We try to use TDD as much as possible. Besides NUnit we use the mocking library Rhino Mocks to keep all unit tests self-contained. We also have the ReSharper plug-in installed in everyone’s Visual Studio 2008 to get support for running NUnit tests, an extended refactoring menu, and an enforced coding standard. ReSharper also gives us pointers on better ways to code, like pointing out that a method could be made static. Out of the box it also gives some, imho, bad suggestions that need to be changed in the configuration, for example removing curly brackets around one-line clauses. Read more about my opinion on that particular issue on my personal blog, The Tommy Code.
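To show what a self-contained test with Rhino Mocks can look like, here is a small sketch using the 3.5-style AAA syntax. The `IShippingCalculator` and `OrderService` types are made up for the example, not taken from our codebase:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

public interface IShippingCalculator
{
    decimal CostFor(string country);
}

public class OrderService
{
    private readonly IShippingCalculator shipping;

    public OrderService(IShippingCalculator shipping)
    {
        this.shipping = shipping;
    }

    public decimal Total(decimal subtotal, string country)
    {
        return subtotal + shipping.CostFor(country);
    }
}

[TestFixture]
public class OrderServiceTests
{
    [Test]
    public void Total_includes_shipping_cost()
    {
        // Stub out the collaborator so the test has no external dependencies
        var shipping = MockRepository.GenerateStub<IShippingCalculator>();
        shipping.Stub(s => s.CostFor("SE")).Return(49m);

        var service = new OrderService(shipping);

        Assert.AreEqual(149m, service.Total(100m, "SE"));
    }
}
```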
We do quite a lot of Pair Programming. Initially we did virtually everything in pairs, but the further we get into the project the less we’ve done in pairs. “Just fixing it” is, the way I see it, not as important to do with a navigator. When pairing we try to follow the Pomodoro rule of taking a break every 25 minutes, to make it less intense and to be able to stay focused. We don’t make estimates or task lists for the day though; we just have a timer (Pomodairo) that tells us to take a break every 25 minutes.
We also switch pairs frequently, achieving collective code ownership for the entire project. Unfortunately that doesn’t include front-end (HTML, CSS, JavaScript) or test (acceptance testing and manual tests), which are each handled by a specific person. We have done some pair programming between one of them and one from the rest of the team, but far less than I would have wanted. We also had everyone in the team write one acceptance test each to know how it’s done, which was a great idea.
We keep design as simple as possible, but no simpler, and always try to remind each other when we notice someone building in room for more than we need at this moment in their implementation. We of course try our best to follow the DRY principle, coding everything once and only once. I’ve noticed that it gets harder in unit tests with a lot of mocking, which may differ slightly between test cases, so I have to admit that there are some principle violations in the test assembly. Also, of course, we try to avoid if-statements.
To have the unit tests cover as much code as possible, giving us high code coverage, we build our web pages using a design pattern inspired by The Humble Dialog Box, an article by Michael Feathers. That leaves our aspx files, including the code-behind, as stupid as possible and puts all the logic in a composer object in a separate project with no knowledge of the HTTP context. We initially considered using ASP.NET MVC but decided that we had enough new elements in our project with all the XP stuff.
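A minimal sketch of that humble-view split could look like the following. All type names here are invented for illustration; the point is that the composer lives in a project with no reference to System.Web, so it can be fully unit tested:

```csharp
using System;
using System.Web.UI;

// The view interface is all the composer knows about the page
public interface ICustomerView
{
    string CustomerName { set; }
    bool ErrorVisible { set; }
}

public class Customer
{
    public string Name { get; set; }
}

public interface ICustomerRepository
{
    Customer Get(int id);
}

// Lives in a separate project; no knowledge of the HTTP context
public class CustomerComposer
{
    private readonly ICustomerView view;
    private readonly ICustomerRepository repository;

    public CustomerComposer(ICustomerView view, ICustomerRepository repository)
    {
        this.view = view;
        this.repository = repository;
    }

    public void Load(int customerId)
    {
        var customer = repository.Get(customerId);
        view.ErrorVisible = customer == null;
        if (customer != null)
            view.CustomerName = customer.Name;
    }
}

// The code-behind is reduced to wiring and property mapping
public partial class CustomerPage : Page, ICustomerView
{
    public string CustomerName { set { nameLabel.Text = value; } }
    public bool ErrorVisible { set { errorPanel.Visible = value; } }

    protected void Page_Load(object sender, EventArgs e)
    {
        new CustomerComposer(this, new CustomerRepository()).Load(GetCustomerId());
    }
}
```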
That concludes the walkthrough of how we do this. We’d love to hear your thoughts about it and will be happy to answer any questions. Use the comment field!
Thursday, February 11, 2010
Wednesday, January 27, 2010
The Art of Unit Testing
To deal with the issue I mentioned earlier, in the post TDD is Hard, I bought the book The Art of Unit Testing by Roy Osherove. It helped a lot! Osherove writes plenty about how to make tests cope with changes in the code and about writing maintainable tests.
He also mentions test coverage tools as a good way to build reliable tests. Making sure every line is covered by a test doesn’t test all possible results though, but it’s a good start. For example, if you want to test a method that validates email addresses, the whole method might be a single regular expression. That line will be covered by the first test you write, but I still wouldn’t call the test reliable with just one email address. Here I didn’t find any solution in the book, so I’d be interested to hear how you make sure your tests are reliable.
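One partial answer I can sketch is parameterized tests: a single test method fed several addresses, so the one covered line is exercised from many angles. The validator below and its regex are invented for the example, and the `[TestCase]` attribute assumes NUnit 2.5 or later:

```csharp
using System.Text.RegularExpressions;
using NUnit.Framework;

public static class EmailValidator
{
    // Deliberately simple pattern: one line, so a single test already
    // gives 100% line coverage here without proving much
    public static bool IsValid(string address)
    {
        return Regex.IsMatch(address, @"^[^@\s]+@[^@\s]+\.[^@\s]+$");
    }
}

[TestFixture]
public class EmailValidatorTests
{
    [TestCase("user@example.com", true)]
    [TestCase("user.name@sub.example.com", true)]
    [TestCase("no-at-sign.example.com", false)]
    [TestCase("two@@example.com", false)]
    [TestCase("missing-domain@", false)]
    public void Validates_addresses(string address, bool expected)
    {
        Assert.AreEqual(expected, EmailValidator.IsValid(address));
    }
}
```

Coverage stays the same as with one case, but the test now documents which inputs the regex is supposed to accept and reject.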
However, I downloaded the last free version of NCoverExplorer (1.4.0.7). It turned out to be a very competent tool that let me find a bunch of untested code paths, and also a few completely untested classes. We also added a simple version to our CruiseControl project that gives us a figure with the current coverage (88% right now). The figure still counts code that is only exercised by tests marked with the Ignore attribute as uncovered, though, so the real coverage is higher. We use the Ignore attribute for some integration tests that require a VPN connection to work.
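For reference, marking such a test looks like this in NUnit; the service and URL are made-up placeholders:

```csharp
using NUnit.Framework;

[TestFixture]
public class ExternalOrderServiceTests
{
    [Test]
    [Ignore("Integration test: requires a VPN connection to the customer network")]
    public void Fetches_orders_from_the_external_service()
    {
        // Hypothetical service that only resolves over the VPN
        var service = new ExternalOrderService("https://orders.example.com");
        Assert.IsNotNull(service.GetOrders());
    }
}
```

The test runner reports it as ignored with the given reason instead of failing the build.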
Among the tips in the book, I found the most important one to be having at most one mock object per test, while all other fake objects should be stubs. He also claims that a test should have only one Assert verifying the outcome, but I’m not sure I agree with that. He is the expert though, but I guess I need to burn my own fingers before I come around.
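To illustrate the one-mock rule with Rhino Mocks: the stub only feeds data to the code under test and is never verified, while the single mock carries the one interaction the test is about. All types here are invented for the example (loosely in the spirit of Osherove’s LoginManager examples):

```csharp
using NUnit.Framework;
using Rhino.Mocks;

public interface IUserRepository
{
    bool Exists(string name);
}

public interface ILogger
{
    void Warn(string message);
}

public class LoginManager
{
    private readonly IUserRepository users;
    private readonly ILogger log;

    public LoginManager(IUserRepository users, ILogger log)
    {
        this.users = users;
        this.log = log;
    }

    public void Attempt(string name, string password)
    {
        if (!users.Exists(name))
            log.Warn("Failed login for " + name);
    }
}

[TestFixture]
public class LoginManagerTests
{
    [Test]
    public void Failed_login_is_written_to_the_log()
    {
        // Stub: only provides input, is never verified
        var users = MockRepository.GenerateStub<IUserRepository>();
        users.Stub(u => u.Exists("anna")).Return(false);

        // The single mock: the one interaction this test verifies
        var log = MockRepository.GenerateMock<ILogger>();

        var login = new LoginManager(users, log);
        login.Attempt("anna", "secret");

        log.AssertWasCalled(l => l.Warn("Failed login for anna"));
    }
}
```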
In conclusion, I would recommend this book to anyone. Someone who wants to start doing unit testing will probably get the most out of it, but I think the merely curious, and definitely those who have been doing it for some time (that’s me!), will have a good read too. I bet that even the expert might get some new insights while reading it. And it’s a quick read - I think I spent less than 10 hours reading it all, except the appendixes.
Thursday, October 22, 2009
Continuous Integration
For a few days we've been working on getting the build server ready. We find it crucial to the entire project that the build and deploy work flawlessly, and continuous integration is mandatory in an XP project. Currently we're testing the continuous integration setup with a small template project for Microsoft Office SharePoint Server 2007 (MOSS).
So far we've managed to get CruiseControl.NET to trigger on commits to our Subversion repository. We already do this for our other projects, so it was an easy task. What differs is that this project also runs all the unit tests, and later it will run all the automated acceptance tests too.
It was quite nice to see the red/green bars of failed and successful NUnit tests in Cruise Control after a code commit to Subversion.
Before the build was actually working, I was worried MOSS would make me install the whole SharePoint package on the build server just to build and create a deploy package.
But no worries: making it build was quite simple. I just had to copy Microsoft.SharePoint.dll to our assemblies folder (this is our lib or libraries folder).
Tommy and I paired up to figure out how to use WSPBuilder without installing Windows SharePoint Services (WSS) on the build server, even though the WSPBuilder documentation says it is required. The first time we ran WSPBuilder it complained that some assemblies were missing. After adding the -DLLReferencePath parameter, pointing WSPBuilder at our library assemblies, it output the wsp file we wanted.
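For anyone trying the same thing, the invocation is roughly the following; the relative path is a placeholder for wherever you keep your reference assemblies, and the exact layout of your solution may require more parameters:

```
cd OurSharePointProject
WSPBuilder.exe -DLLReferencePath ..\assemblies
```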
Next up is getting the wsp package installed nightly on the test web server. We don't believe we can deploy on every commit, as MOSS is not super fast when deploying wsp packages.
Is this wise or not? Would you put the deployment into the continuous builds? Please give us your opinion.