Showing posts with label TDD. Show all posts

Thursday, February 11, 2010

How we do this - development

Following up the last post about our process, here’s how we do our development. During development we try to use as many of the XP practices as we can.

We have Continuous Integration using CruiseControl that also runs all our unit tests with NUnit. We have also set it up to show code coverage using NCoverExplorer (currently 87%). We have two targets per project. One target builds and runs the tests whenever something is committed to our Subversion repository; the other does the same thing every night and also deploys to our test server.
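As a rough illustration of the per-commit target (project names, paths and details are made up here, and the exact schema depends on your CruiseControl.NET version), the relevant part of ccnet.config looks something like this:

```xml
<cruisecontrol>
  <project name="MyApp-Commit">
    <!-- Poll Subversion; any commit triggers a build -->
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\builds\myapp</workingDirectory>
    </sourcecontrol>
    <triggers>
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <!-- Build the solution, then run the unit tests -->
      <msbuild>
        <projectFile>MyApp.sln</projectFile>
        <targets>Build</targets>
      </msbuild>
      <nunit>
        <path>C:\Program Files\NUnit\bin\nunit-console.exe</path>
        <assemblies>
          <assembly>MyApp.Tests.dll</assembly>
        </assemblies>
      </nunit>
    </tasks>
  </project>
</cruisecontrol>
```

The nightly target is the same idea with a scheduled trigger instead of the interval trigger, plus a deployment step at the end.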

We have automated acceptance tests using Selenium. Unfortunately we haven’t succeeded in making our nightly build target run them after the deployment as we intended; instead we run them manually when we come in in the morning and once during lunch. For some reason they time out nine times out of ten when the build server starts them…

We try to use TDD as much as possible. Besides NUnit we use the mocking library Rhino Mocks to keep all unit tests self-contained. We also have the ReSharper plug-in installed in everyone’s Visual Studio 2008, which gives us support for running NUnit tests, an extended refactoring menu and an enforced coding standard. ReSharper also gives us pointers on better ways to code, like pointing out that a method could be made static. Out of the box it also gives some, imho, bad ideas that need to be changed in the configuration, for example removing curly brackets around one-line clauses. Read more of my opinion on that particular issue on my personal blog, The Tommy Code.
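To give a feel for what such a self-contained test looks like, here is a minimal sketch. The interface and class names (IMemberRepository, Registration) are invented for illustration; only the NUnit and Rhino Mocks calls are the real thing.

```csharp
using NUnit.Framework;
using Rhino.Mocks;

// Hypothetical collaborator that would normally hit a database.
public interface IMemberRepository
{
    bool Exists(string email);
}

// Hypothetical class under test.
public class Registration
{
    private readonly IMemberRepository repository;

    public Registration(IMemberRepository repository)
    {
        this.repository = repository;
    }

    public bool CanRegister(string email)
    {
        return !repository.Exists(email);
    }
}

[TestFixture]
public class RegistrationTests
{
    [Test]
    public void CanRegister_WhenEmailIsNew_ReturnsTrue()
    {
        // Stub out the repository so the test never touches a real database.
        var repository = MockRepository.GenerateStub<IMemberRepository>();
        repository.Stub(r => r.Exists("new@example.com")).Return(false);

        var registration = new Registration(repository);

        Assert.IsTrue(registration.CanRegister("new@example.com"));
    }
}
```

Because the repository is stubbed, the test runs fast, needs no environment, and can run in any order with the others.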

We do quite a lot of Pair Programming. Initially we did virtually everything in pairs, but the further we get into the project, the less we do in pairs. “Just fixing it” is, the way I see it, not as important to do with a navigator. When pairing we try to follow the Pomodoro rule of taking a break every 25 minutes, to make it less intense and to be able to stay focused. We don’t make estimates or task lists for the day, though; we just have a timer (Pomodairo) that tells us to take a break every 25 minutes.

We also switch pairs frequently, achieving collective code ownership for the entire project. Unfortunately that doesn’t include front-end (HTML, CSS, JavaScript) or test (acceptance testing and manual tests), which are each done by a specific person. We have done some pair programming between one of them and one from the rest of the team, but far less than I would have wanted. We also had everyone in the team write one acceptance test each to know how it’s done, which was a great idea.

We keep the design as simple as possible, but no simpler, and always try to remind each other when we notice someone designing for more than we need at this moment. We of course try our best to follow the DRY principle, coding everything once and only once. I’ve noticed that this gets harder in unit tests with a lot of mocking, which may differ slightly between the test cases, so I have to admit that there are some principle violations in the test assembly. Also, of course, we try to avoid if-statements.

To let the unit tests cover as much code as possible, giving us high code coverage, we build our web pages with a design pattern inspired by The Humble Dialog Box, an article by Michael Feathers. That leaves our aspx files, including the code-behind, as stupid as possible and puts all the logic in a composer object in a separate project with no knowledge of the HTTP context. We first had an idea about using ASP.NET MVC but decided that we had enough new elements in our project with all the XP stuff.
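A stripped-down sketch of the pattern (all names here are invented, not our actual classes): the composer sees only a view interface, so a trivial fake view is enough to unit test it.

```csharp
using System;

// The view interface is all the composer knows about the page;
// nothing here references System.Web or the HTTP context.
public interface IOrderView
{
    string CustomerName { get; }
    void ShowGreeting(string text);
}

// The composer lives in a separate project and holds the logic,
// so plain unit tests can cover it completely.
public class OrderComposer
{
    private readonly IOrderView view;

    public OrderComposer(IOrderView view)
    {
        this.view = view;
    }

    public void Load()
    {
        view.ShowGreeting("Welcome, " + view.CustomerName + "!");
    }
}

// In a unit test a trivial fake stands in for the aspx page.
public class FakeOrderView : IOrderView
{
    public string CustomerName { get; set; }
    public string Greeting { get; private set; }
    public void ShowGreeting(string text) { Greeting = text; }
}

public static class Demo
{
    public static void Main()
    {
        var view = new FakeOrderView { CustomerName = "Ada" };
        new OrderComposer(view).Load();
        Console.WriteLine(view.Greeting); // prints "Welcome, Ada!"
    }
}
```

In the real application the aspx code-behind implements the view interface, maps the properties to its controls, and does nothing else.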

That concludes the walk-through of how we do things. We’d love to hear your thoughts and will be happy to answer any questions. Use the comment field!

Wednesday, January 27, 2010

The Art of Unit Testing

To deal with the issue I mentioned earlier in the post TDD is Hard, I bought the book The Art of Unit Testing by Roy Osherove. It helped a lot! Osherove writes plenty about how to make tests cope with changes in the code and about writing maintainable tests.

He also mentions test coverage tools as a good way to build reliable tests. Making sure every line is covered by a test doesn't exercise all possible results though, but it's a good start. For example, if you want to test a method that validates email addresses, you might just have a regular expression. That line will be covered by the first test you write. Still, I wouldn't consider it reliable with a test for just one email address. Here I didn't find any solution in the book, so I'd be interested to hear how you make sure your tests are reliable.
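To make the point concrete, here is a sketch (the class name is made up and the pattern is deliberately naive) of why a single covered line says so little: one passing address covers the regex line, but it's the table of valid and invalid examples that builds confidence.

```csharp
using System;
using System.Text.RegularExpressions;

public static class EmailValidator
{
    // Deliberately simplistic pattern, for illustration only.
    private static readonly Regex Pattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$");

    public static bool IsValid(string address)
    {
        return Pattern.IsMatch(address);
    }
}

public static class EmailValidatorDemo
{
    public static void Main()
    {
        // The first address alone gives 100% line coverage of IsValid,
        // yet says almost nothing about what the regex accepts.
        string[] valid = { "a@b.se", "first.last@example.com" };
        string[] invalid = { "", "no-at-sign", "two@@example.com", "trailing@dot." };

        foreach (var address in valid)
            Console.WriteLine(address + " -> " + EmailValidator.IsValid(address));
        foreach (var address in invalid)
            Console.WriteLine(address + " -> " + EmailValidator.IsValid(address));
    }
}
```

In NUnit a table like this maps naturally onto a row of [TestCase] attributes on a single test method, which at least makes it cheap to grow the list whenever a new kind of address turns up.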

Anyway, I downloaded the last free version of NCoverExplorer (1.4.0.7). It turned out to be a very competent tool that let me find a bunch of untested code paths, and also a few completely untested classes. We also added a simple version to our CruiseControl project that gives us a figure with the current coverage (88% right now). The figure doesn't take into account the code whose tests are disabled with the Ignore attribute, though, so the real coverage is higher. We use the Ignore attribute for some integration tests that require a VPN connection to work.

Among the tips from the book, the one I found most important was to have at most one mock object per test, while the other fake objects should be stubs. He also claims that a test should have only one assert verifying the outcome, but I'm not sure I agree with that. He is the expert, though, so I guess I need to burn my own fingers before I come around.
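The mock/stub distinction can be sketched with hand-rolled fakes (all names here are invented; in practice a library like Rhino Mocks generates them): the stub only feeds data in and is never asserted on, while the mock records the interaction that the test verifies.

```csharp
using System;

public interface IPriceList { decimal PriceOf(string item); }   // queried -> stub
public interface IReceiptPrinter { void Print(string line); }   // verified -> mock

public class Checkout
{
    private readonly IPriceList prices;
    private readonly IReceiptPrinter printer;

    public Checkout(IPriceList prices, IReceiptPrinter printer)
    {
        this.prices = prices;
        this.printer = printer;
    }

    public void Sell(string item)
    {
        printer.Print(item + ": " + prices.PriceOf(item));
    }
}

// The stub just supplies test data; the test makes no assertions on it.
public class StubPriceList : IPriceList
{
    public decimal PriceOf(string item) { return 10m; }
}

// The mock records the interaction; the test's single assert is on it.
public class MockReceiptPrinter : IReceiptPrinter
{
    public string LastLine { get; private set; }
    public void Print(string line) { LastLine = line; }
}

public static class CheckoutDemo
{
    public static void Main()
    {
        var printer = new MockReceiptPrinter();
        new Checkout(new StubPriceList(), printer).Sell("apple");
        Console.WriteLine(printer.LastLine); // prints "apple: 10"
    }
}
```

Following Osherove's rule, the one assert in such a test would be on MockReceiptPrinter.LastLine; the stub exists only so that Checkout can be constructed and exercised.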

In conclusion, I would recommend this book to anyone. Someone who wants to start doing unit testing will probably get the most from it, but I think that someone who is just curious about it, and definitely someone who has been doing it for some time (that's me!), will have a good read too. I bet that even the expert might get some new insights while reading it. And it's a quick read - I think I spent less than 10 hours reading it all, except the appendices.

Friday, November 27, 2009

Easing up the XP

We have not quite reached the desired productivity level yet, even though development gets faster every day. Since we have a challenging deadline, we have decided to ease up a little on the XP to move forward faster. This is what we did.

We first had a retrospective with the guys in the team to get their view of what is slowing each of them down. Pair Programming and inexperience with TDD were the key problems. Some said that coding in a pair takes longer than coding alone, since you need to explain everything you do. Some said that thinking about writing the test first probably took three times as long as just writing the code.

The actions we decided on were:
  1. If you get stuck with the Test First, just write the code and add the test afterwards.
  2. If you feel that you aren't adding anything while Pairing, go to the task board and start investigating the next task.
  3. Don't ask too much and don't tell too much while Pair Programming. Stick to the problem you're working on.

Thursday, November 12, 2009

TDD is hard

Writing the tests isn't the hard part. At least not for me. It's when you have them. I often find myself doing my refactorings first and then correcting the tests once the refactoring is done.

How do you work with refactoring in a TDD project?

Another hard part is knowing when to stop writing tests - finding the level where you have good tests and remain productive. I could spend an entire day writing tests to validate some data, but that wouldn't be very productive. Fifteen minutes will probably produce the code needed, with just a small number of tests; adding more tests would make sure that even more cases work, but they probably all did from the start.

When do you settle and think a test is good enough and move on to the next function?

Friday, October 30, 2009

Current number of Unit Tests

I just added a new Gadget in the sidebar, where we'll show the current number of Unit Tests in our project. We're starting now at 21 unit tests.

Friday, October 9, 2009

Cannonball into XP

A few weeks ago we were finally given the go-ahead to use eXtreme Programming on a new project. It's a web application with a fair amount of business functionality. We have decided on the Cannonball approach that Kent Beck refers to in his paper “Getting Started with XP”.

We both have some initial understanding of XP and have been eager to try it out, but haven’t had support for it until now. We have experience with some of the practices, but not all, and have spent the first few weeks reading up on the details in Extreme Programming Explained (Kent Beck) and Scrum and XP from the Trenches (Henrik Kniberg).

We already use continuous integration on all our projects, and as senior developers we have plenty of experience with refactoring. We have used the Scrum process, and thereby stories and iterations, among other things. Tommy has also been experimenting with TDD for a couple of years.

As for our concerns, none of us has done much pair programming or automated acceptance testing before. Another issue is our organization at Creuna, which does not map directly onto XP. Being consultants, having an on-site customer can be difficult, for example.

We would love to get your input in the comments on how you started with XP. And during this project, don't hesitate to tell us how you would solve similar problems.