Friday, May 28, 2010

Where's my team?

When the application was put into production the original team split up and the developers were put on different projects. Most of the new projects were built on the same foundation, so a lot of the experience could be reused. Those projects were much smaller and we figured we could build them in parallel instead of keeping the team together and building them one after the other. Looking back I'm not sure this was the right decision, and I will never know. Anyway, we had to support the old application, fixing bugs and adding features on request.

New features!

Our customer wanted a few new features after the application went into production. With all the automated tests we could be quite confident that adding features would not break the application. We added new filtering and sorting functions, feed-forward for the filters and new fields for a few of the lists on the web pages.

Bugs?

Yes, we did have one or two minor bugs, and one that really made us reflect on our testing methods.
That bug removed a lot of data our customer had entered, which, as you can imagine, is not what they expected. The first time it happened we could not believe it, as it was impossible for us to replicate. Then it happened a few more times. We had unit tests and automated acceptance tests for the scenario and were confident the bug would have been trapped. That obviously was not true.

So, what was the case? We have three buttons: one "yes", one "no" and one "regret". When you click "yes" or "no", the "yes" and "no" buttons are hidden and the "regret" button is shown. Clicking the "regret" button restores the state so you can click "yes" or "no" again. A quite simple scenario, right?
Well, if you open a second browser, click the "yes" button there and switch back to your first browser, you can still click the "yes" or "no" button since that page has not been reloaded. Our backend logic is built to always allow you to click the "no" button, and that is correct in this case. But clicking the "no" button (from the first browser session) while actually being in the regret state made a call to a function that should never have been called. That function ran an update statement which updated more records than it should. We later found that a parameter had been removed from the SQL statement during a refactoring and never put back. Our unit tests exercised the function with correct parameters but with only one record in the table. That was the mistake: the test should have run against more than one record.
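To make the mistake concrete, here is a reconstruction (table, column and test names are invented, this is not our actual code). With a single row in the table, the buggy statement and the correct one are indistinguishable, which is exactly how our one-record unit test let it through:

```csharp
using System.Data.SqlClient;
using NUnit.Framework;

[TestFixture]
public class AnswerUpdateTests
{
    // Placeholder connection string for a local test database (assumption).
    const string ConnectionString =
        @"Server=.\SQLEXPRESS;Database=TestDb;Integrated Security=true";

    // What the refactoring left us with (the @applicationId condition gone):
    //   UPDATE Application SET Answer = @answer WHERE SchoolId = @schoolId
    // The correct statement touches exactly one row:
    const string UpdateSql =
        "UPDATE Application SET Answer = @answer " +
        "WHERE SchoolId = @schoolId AND ApplicationId = @applicationId";

    [Test]
    public void Update_touches_only_the_given_application()
    {
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            Execute(connection, "DELETE FROM Application");
            // TWO rows for the same school - this is what our original test lacked.
            Execute(connection, "INSERT INTO Application (SchoolId, ApplicationId, Answer) VALUES (1, 1, NULL)");
            Execute(connection, "INSERT INTO Application (SchoolId, ApplicationId, Answer) VALUES (1, 2, NULL)");

            using (var update = new SqlCommand(UpdateSql, connection))
            {
                update.Parameters.AddWithValue("@answer", "no");
                update.Parameters.AddWithValue("@schoolId", 1);
                update.Parameters.AddWithValue("@applicationId", 1);

                // The buggy statement would have returned 2 here.
                Assert.AreEqual(1, update.ExecuteNonQuery());
            }
        }
    }

    static void Execute(SqlConnection connection, string sql)
    {
        using (var command = new SqlCommand(sql, connection))
            command.ExecuteNonQuery();
    }
}
```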

But while one of the bugs was severe, there have been so few of them that the effort we put into testing has paid off, a lot!

What's next?

Of all the things I've learnt from this project, I see Pair Programming and Unit Testing as the two most important lessons for me as a developer. There are a lot of other practices that are good too, but they may fall into the category of project management.
So, next up is to convince others that Pair Programming and Unit Testing really are great stuff.

Monday, March 1, 2010

Project delivered - on time!

March 1 is here, and we are LIVE! We also have a demonstration video of the public web site on YouTube, narrated by Danny Saucedo of EMD. Check it out!



The e-service for applying to preschool class has opened (stockholm.se)

Thursday, February 11, 2010

How we do this - development

Following up the last post about our process, here’s how we do our development. During development we try to use as many XP practices as we can.

We have Continuous Integration using CruiseControl, which also runs all our unit tests with NUnit. We have also set it up to show code coverage using NCoverExplorer (currently 87 %). We have two targets per project: one builds and runs the tests whenever something is committed to our Subversion repository, and the other does the same thing every night and also deploys to our test server.

We have Automated Acceptance Tests using Selenium. Unfortunately we haven’t succeeded in making our nightly build target run them after the deployment as we intended, so instead we run them manually when we come in in the morning and once during lunch. For some reason they time out nine times out of ten when the build server starts them…
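For anyone curious what these look like, here is a minimal sketch of a Selenium RC acceptance test in C# with NUnit. The host names and element locators are made up for illustration, and it assumes a Selenium RC server running on localhost:4444:

```csharp
using NUnit.Framework;
using Selenium; // the Selenium RC .NET client

[TestFixture]
public class YesNoRegretAcceptanceTest
{
    ISelenium selenium;

    [SetUp]
    public void StartBrowser()
    {
        // "http://testserver/" stands in for our real test server URL.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                                       "http://testserver/");
        selenium.Start();
    }

    [Test]
    public void Regret_restores_the_yes_and_no_buttons()
    {
        selenium.Open("/answer.aspx"); // hypothetical page
        selenium.Click("id=yesButton");
        Assert.IsTrue(selenium.IsVisible("id=regretButton"));

        selenium.Click("id=regretButton");
        Assert.IsTrue(selenium.IsVisible("id=yesButton"));
        Assert.IsTrue(selenium.IsVisible("id=noButton"));
    }

    [TearDown]
    public void StopBrowser()
    {
        selenium.Stop();
    }
}
```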

We try to use TDD as much as possible. Besides NUnit we use the mocking library Rhino Mocks to make all unit tests self-contained. We also have the plug-in ReSharper installed in everyone’s Visual Studio 2008, giving us support for running NUnit tests, an extended refactoring menu and an enforced coding standard. ReSharper also gives us pointers on better ways to code, like pointing out that a method could be made static. Out of the box it also gives some, imho, bad suggestions that need to be changed in the configuration, for example removing curly brackets around one-line clauses. Read more of my opinion on that particular issue on my personal blog, The Tommy Code.
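As an example of what “self-contained” means here, a minimal sketch of a unit test that stubs out its dependency with Rhino Mocks (the interface and service are invented for illustration):

```csharp
using NUnit.Framework;
using Rhino.Mocks;

public interface ISeatRepository
{
    int GetAvailableSeats(int schoolId);
}

public class SeatService
{
    readonly ISeatRepository repository;

    public SeatService(ISeatRepository repository)
    {
        this.repository = repository;
    }

    public bool IsFull(int schoolId)
    {
        return repository.GetAvailableSeats(schoolId) == 0;
    }
}

[TestFixture]
public class SeatServiceTests
{
    [Test]
    public void School_with_no_available_seats_is_reported_as_full()
    {
        // Stub out the repository so the test never touches a database.
        var repository = MockRepository.GenerateStub<ISeatRepository>();
        repository.Stub(r => r.GetAvailableSeats(42)).Return(0);

        var service = new SeatService(repository);

        Assert.IsTrue(service.IsFull(42));
    }
}
```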

We do quite a lot of Pair Programming. Initially we did virtually everything in pairs, but the further we get into the project, the less we do in pairs. “Just fixing it” is, the way I see it, not as important to do with a navigator. When in pairs we try to follow the Pomodoro rule of taking a break every 25 minutes, to make it less intense and to be able to stay focused. We don’t make estimates or task lists for the day though; we just have a timer (Pomodairo) that tells us to take a break every 25 minutes.

We also switch pairs frequently, achieving collective code ownership for the entire project. Unfortunately that doesn't include the front-end (HTML, CSS, JavaScript) or testing (acceptance tests and manual tests), which are each done by a specific person. We have done some pair programming with one of them and one from the rest of the team, but far less than I would have wanted. We also had everyone in the team write one acceptance test each to learn how it's done, which was a great idea.

We keep the design as simple as possible, but no simpler, and always try to remind each other when we notice someone allowing for more in their implementation than we need at this moment. We of course try our best to follow the DRY principle, coding everything once and only once. I’ve noticed that this gets harder in unit tests with a lot of mocking, which may differ slightly between the different test cases, so I have to admit that there are some principle violations in the test assembly. Also, of course, we try to avoid if-statements.

To be able to have the unit tests cover as much code as possible, getting high Code Coverage, we build our web pages with a design pattern inspired by The Humble Dialog Box, an article by Michael Feathers. That leaves our aspx files, including the code-behind, as stupid as possible and puts all the logic in a composer object in a separate project with no knowledge of the HTTP context. We first had an idea about using ASP.NET MVC but decided that we had enough new elements in our project with all the XP stuff.
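A stripped-down sketch of how that pattern can look (all names are invented, not our actual code): the view interface and the composer live in the logic project and know nothing about ASP.NET, while the code-behind just implements the view interface and delegates:

```csharp
using System;
using System.Collections.Generic;

// --- Logic project: no reference to System.Web ---

public class School
{
    public string Name { get; set; }
}

public interface ISchoolRepository
{
    IList<School> FindByName(string filter);
}

public interface ISchoolListView
{
    string Filter { get; }
    void ShowSchools(IList<School> schools);
}

// The composer holds the logic and is fully unit testable:
// in a test we just pass in a fake view and a stubbed repository.
public class SchoolListComposer
{
    readonly ISchoolListView view;
    readonly ISchoolRepository repository;

    public SchoolListComposer(ISchoolListView view, ISchoolRepository repository)
    {
        this.view = view;
        this.repository = repository;
    }

    public void Load()
    {
        view.ShowSchools(repository.FindByName(view.Filter));
    }
}

// --- Web project: the code-behind stays stupid ---
// SchoolRepository is the concrete data access implementation (not shown);
// filterTextBox and schoolsRepeater are controls declared on the aspx page.

public partial class SchoolListPage : System.Web.UI.Page, ISchoolListView
{
    protected void Page_Load(object sender, EventArgs e)
    {
        var composer = new SchoolListComposer(this, new SchoolRepository());
        composer.Load();
    }

    public string Filter
    {
        get { return filterTextBox.Text; }
    }

    public void ShowSchools(IList<School> schools)
    {
        schoolsRepeater.DataSource = schools;
        schoolsRepeater.DataBind();
    }
}
```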

That concludes the walk-through of how we do this. We’d love to hear your thoughts about it and will be happy to answer any questions. Use the comment field!

Tuesday, February 9, 2010

How we do this - process

I get a lot of questions about how we do this project. I’m going to start by describing the process in this post, and I’m already preparing a follow-up about the actual development. Comments and questions about our process are most welcome, but if it’s something about the development you might be better off waiting for that post. That being said, here’s how we do it, the process part.

Our cycle (re)starts on Thursdays. Each Thursday before lunch we meet our customer to make sure we're on the right track. Since we have decided on two-week iterations, every second Thursday we first have a sprint demo and then (re)prioritize all remaining stories in the backlog. On the other Thursday we just discuss issues from the first half of the iteration and the status of other ongoing tasks - i.e. stuff that people outside the team are doing, such as setting up the production servers in the customer’s environment.

When a sprint ends we have a retrospective where the whole team, except the customer, gets together and discusses the sprint. At first we had it before the customer meeting with the demo, but we always had to rush the end of it to get done before the customer came. Once we invited the customer to join our retrospective, but since their involvement during sprints is so low, the customer didn't have much to say. Hence we decided to keep doing it without them and moved the retrospective to after lunch. The time before the customer arrives we now use to prepare for the demo.

The next step in the process is the task breakdown of the top-prioritized stories. We get the whole team together and the Interaction Designer, who has somewhat the role of the on-site customer, walks us through the stories and shows the prototype. We take the stories one by one, and after the walkthrough we break each story down into tasks. When the breakdown is complete we play Planning Poker to estimate each task in days, with half a day as the lowest estimate. Tasks a lot smaller than that we try to merge together.

When we had the retrospective before lunch we had the task breakdown after lunch. I felt we needed some more slack to really finish one sprint before starting a new one, so that was another reason for changing the meeting calendar. As we moved the retrospective to the after-lunch slot, we now break down the stories on Friday after our daily standup meeting. I feel that it gives us a much better rhythm. It also leaves some air on Thursdays to execute ideas from the retrospective and deal with some technical debt.

Speaking of the daily standup, we have it each morning at 9.15, except for Mondays when we have a general meeting for the entire company at that time and have our standup after that. We have all done Scrum before and have learned to value the daily standup meeting a lot. That’s about all we took from Scrum that doesn’t exist in XP though, except maybe the Board Master role, which we invented to have someone responsible for our Kanban board.

We have the standup meeting in front of the Kanban board. Our board now has the columns Selected, Development, Test and Done, where the Development column is split into three sub-columns - In progress, Trash and Ready for Test. The Trash column is the only addition compared to our original board. It collects finished tasks until all tasks for a story are done; when they are, we move the story card to Ready for Test and empty the trashcan.

Wednesday, January 27, 2010

The Art of Unit Testing

To deal with the issue I mentioned in the earlier post TDD is Hard, I bought the book The Art of Unit Testing by Roy Osherove. It helped a lot! Osherove writes plenty about how to make tests cope with changes in the code and about writing maintainable tests.

He also mentions Test Coverage tools as a good way to build reliable tests. Making sure every line is covered by a test doesn't test all possible results though, but it's a good start. For example, if you want to test a method that validates email addresses, you might just have a RegExp. That line will be covered by the first test you write, but I still wouldn't call the test reliable with just one email address. Here I didn't find any solution in the book, so I'd be interested to hear how you make sure your tests are reliable.
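To make the email example concrete, here is a sketch using NUnit's [TestCase] attribute (available since NUnit 2.5). The regex is deliberately simplistic and just for illustration; the point is that a single address gives full line coverage while saying very little about the pattern:

```csharp
using System.Text.RegularExpressions;
using NUnit.Framework;

public static class EmailValidator
{
    // Deliberately simple pattern, for illustration only.
    static readonly Regex Pattern =
        new Regex(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", RegexOptions.Compiled);

    public static bool IsValid(string email)
    {
        return email != null && Pattern.IsMatch(email);
    }
}

[TestFixture]
public class EmailValidatorTests
{
    // The first case alone covers every line of IsValid;
    // the remaining cases are what make the test trustworthy.
    [TestCase("someone@example.com", true)]
    [TestCase("first.last@example.co.uk", true)]
    [TestCase("no-at-sign.example.com", false)]
    [TestCase("two@@example.com", false)]
    [TestCase("", false)]
    public void Validates_email_addresses(string email, bool expected)
    {
        Assert.AreEqual(expected, EmailValidator.IsValid(email));
    }
}
```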

Anyway, I downloaded the last free version of NCover Explorer (1.4.0.7). It turned out to be a very competent tool that let me find a bunch of untested code paths, and also a few completely untested classes. We also added a simple version to our Cruise Control project that gives us a figure with the current coverage (88 % right now). We have not excluded the code whose tests are disabled with the Ignore attribute though, so the real coverage is higher. We use the Ignore attribute for some integration tests that require a VPN connection to work.
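For anyone who hasn't used it, this is what that looks like (the test and class names are invented for illustration):

```csharp
using NUnit.Framework;

[TestFixture]
public class DirectoryIntegrationTests
{
    // Skipped by NUnit (and thus by the build server); the reason string
    // shows up in the test report. Run it manually when on the VPN.
    [Test, Ignore("Requires a VPN connection to the customer's network")]
    public void Can_fetch_records_from_the_customer_directory()
    {
        // ...the actual integration test body...
    }
}
```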

Among the tips in the book, the one I found most important was to have at most one mock object per test, while all other fake objects should be stubs. He also claims that a test should have only one Assert verifying the outcome, but I'm not sure I agree with that. He is the expert though, so I guess I need to burn my own fingers before I come around.
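Here is a sketch of what the one-mock rule looks like in practice with Rhino Mocks (all names invented): one mock whose interaction we verify, and a stub that only feeds the code under test:

```csharp
using NUnit.Framework;
using Rhino.Mocks;

public interface IMailSender { void Send(string to, string body); }
public interface IStudentRepository { string GetParentEmail(int studentId); }

public class PlacementNotifier
{
    readonly IStudentRepository students;
    readonly IMailSender mail;

    public PlacementNotifier(IStudentRepository students, IMailSender mail)
    {
        this.students = students;
        this.mail = mail;
    }

    public void NotifyPlacement(int studentId)
    {
        mail.Send(students.GetParentEmail(studentId), "Your child has a placement.");
    }
}

[TestFixture]
public class PlacementNotifierTests
{
    [Test]
    public void Notifies_the_parent_by_mail()
    {
        // The one mock: the object whose interaction the test verifies.
        var mail = MockRepository.GenerateMock<IMailSender>();
        // Everything else is a stub that just provides data.
        var students = MockRepository.GenerateStub<IStudentRepository>();
        students.Stub(s => s.GetParentEmail(7)).Return("parent@example.com");

        new PlacementNotifier(students, mail).NotifyPlacement(7);

        mail.AssertWasCalled(m => m.Send("parent@example.com",
                                         "Your child has a placement."));
    }
}
```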

In conclusion, I would recommend this book to anyone. Someone who wants to start doing unit testing will probably get the most out of it, but I think that someone who is just curious about it, and definitely someone who has been doing it for some time (that's me!), will have a good read too. I bet even the expert might get some new insights from reading it. And it's a quick read - I think I spent less than 10 hours reading it all, except the appendixes.