Showing posts with label Acceptance Testing.

Thursday, February 11, 2010

How we do this - development

Following up the last post about our process, here’s how we do our development. During development we try to use as many XP practices as we can.

We have Continuous Integration using CruiseControl, which also runs all our unit tests with NUnit. We have also set it up to show code coverage using NCoverExplorer (currently 87%). We have two targets per project: one builds and runs the tests whenever something is committed to our Subversion repository, and the other does the same thing every night and also deploys to our test server.
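
This is not our actual configuration, but a minimal sketch of what such a two-target setup could look like in a CruiseControl.NET ccnet.config; all project names, paths and URLs here are made up:

```xml
<!-- Hypothetical sketch only: names, paths and URLs are assumptions. -->
<cruisecontrol>
  <project name="OurProject-commit">
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/ourproject/trunk</trunkUrl>
    </sourcecontrol>
    <triggers>
      <!-- poll Subversion and build whenever a commit is detected -->
      <intervalTrigger seconds="60" />
    </triggers>
    <tasks>
      <msbuild>
        <projectFile>OurProject.sln</projectFile>
      </msbuild>
      <nunit>
        <path>C:\Program Files\NUnit\bin\nunit-console.exe</path>
        <assemblies>
          <assembly>OurProject.Tests.dll</assembly>
        </assemblies>
      </nunit>
    </tasks>
  </project>
  <!-- The nightly project would look the same but use a scheduleTrigger
       (e.g. time="02:00") and add a deployment step for the test server. -->
</cruisecontrol>
```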

We have Automated Acceptance Tests using Selenium. Unfortunately we haven’t succeeded in making our nightly build target run them after the deployment as we intended, so instead we run them manually when we come in in the morning and once during lunch. For some reason they time out nine times out of ten when we have the build server start the tests…

We try to use TDD as much as possible. Besides NUnit we use the mock library Rhino Mocks to make all unit tests self-contained. We also have the plug-in ReSharper installed in everyone’s Visual Studio 2008 to get support for running NUnit tests, an extended refactoring menu, and an enforced coding standard. ReSharper also gives us pointers on better ways to code, like pointing out that a method could be made static. Out of the box it also gives some, imho, bad suggestions that need to be changed in the configuration, for example removing curly brackets around one-line clauses. Read more on my opinion about that particular issue on my personal blog, The Tommy Code.
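
Our stack is C# with NUnit and Rhino Mocks, but the idea of a self-contained unit test carries over to any language. As a hypothetical Python analogy (the service and repository here are made up for illustration), the collaborator is replaced by a mock so the test touches no database or network:

```python
import unittest
from unittest.mock import Mock

class OrderService:
    """Hypothetical class under test; its repository is injected."""
    def __init__(self, repository):
        self._repository = repository

    def total_for(self, customer_id):
        orders = self._repository.orders_for(customer_id)
        return sum(o["amount"] for o in orders)

class OrderServiceTest(unittest.TestCase):
    def test_total_sums_all_orders(self):
        # The repository is mocked, so the test is fully self-contained.
        repository = Mock()
        repository.orders_for.return_value = [{"amount": 10}, {"amount": 5}]
        service = OrderService(repository)
        self.assertEqual(15, service.total_for(42))
        repository.orders_for.assert_called_once_with(42)
```

In Rhino Mocks the mock setup would be an expectation on a generated stub instead, but the shape of the test is the same.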

We do quite a lot of Pair Programming. Initially we did virtually everything in pairs, but the further we get with the project the less we’ve done in pairs. “Just fixing it” is, the way I see it, not as important to do with a navigator. When in pairs we try to follow the Pomodoro rule of taking a break every 25 minutes to make it less intense and to be able to stay focused. We don’t make estimates or task lists for the day though; we just have a timer (Pomodairo) that tells us to take a break every 25 minutes.

We also switch pairs frequently, achieving collective code ownership for the entire project. Unfortunately that doesn’t include front-end (HTML, CSS, JavaScript) or test (acceptance testing and manual tests), which are each done by a specific person. We have done some pair programming with one of them and one from the rest of the team, but far less than I would have wanted. We also had everyone on the team write one Acceptance Test each to know how it’s done, which was a great idea.

We keep design as simple as possible, but no simpler, and always try to remind each other when we notice someone designing for more than we need at this moment in their implementation. We of course try our best to follow the DRY principle, coding everything once and only once. I’ve noticed that this gets harder in unit tests with a lot of mocking, which may differ slightly between test cases, so I have to admit that there are some principle violations in the test assembly. Also, of course, we try to avoid if-statements.
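
To make “avoid if-statements” concrete, here is a hypothetical sketch (in Python, with a made-up pricing example that is not from our codebase): instead of branching on a type code, each case becomes its own class behind a common interface, so adding a new case needs no new conditional.

```python
class RegularPrice:
    def total(self, amount):
        return amount

class MemberPrice:
    def total(self, amount):
        return amount * 0.9  # members get 10% off

def checkout(pricing, amount):
    # No if-statement here: the pricing object decides the behaviour.
    return pricing.total(amount)
```

A new customer category then means a new class, not another branch in `checkout`.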

To have the unit tests cover as much code as possible, getting high Code Coverage, we build our web pages in a design pattern inspired by The Humble Dialog Box, an article by Michael Feathers. That leaves our aspx files, including the code-behind, as stupid as possible and puts all the logic in a composer object in a separate project with no knowledge of the HTTP context. We initially considered ASP.NET MVC but decided that we had enough new elements in our project with all the XP stuff.
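
As a hedged sketch of the idea (a Python analogy with made-up names, since our real composers are C# classes behind aspx pages): the view only forwards events and renders what it is told, while all logic lives in a composer that can be unit tested with a fake view and no web context at all.

```python
class GreetingComposer:
    """All the logic lives here, with no knowledge of HTTP or the page."""
    def __init__(self, view):
        self._view = view

    def name_entered(self, name):
        if name.strip():
            self._view.show_greeting(f"Hello, {name.strip()}!")
        else:
            self._view.show_error("Please enter a name.")

class FakeView:
    """Stand-in for the humble page; just records what it was told to show."""
    def __init__(self):
        self.greeting = None
        self.error = None

    def show_greeting(self, text):
        self.greeting = text

    def show_error(self, text):
        self.error = text
```

The real aspx code-behind would implement the same view interface and do nothing but delegate, so it stays too stupid to need testing.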

That concludes the walk-through of how we do this. We’d love to hear your thoughts about it and will be happy to answer any questions. Use the comment field!

Wednesday, December 9, 2009

Selenium not a success story

Since the beginning of this project our Test Manager has been developing automated acceptance tests for our User Stories using Selenium. While it is great to have automated acceptance tests, it is not so great when you can't get them to run on your build server. The whole point of automated acceptance tests is to get feedback as soon as possible so you can correct either the tests or the code that broke them.

Our goal is to run the Selenium server as a Windows service. First I read this post http://clearspace.openqa.org/thread/17226, where I got the impression it was not possible to get Selenium to run as a service. Then I read this post http://clearspace.openqa.org/message/39530, saying that it was possible. Either way, we have not gotten Selenium to run successfully as a service; or rather, it does run, but not reliably.

What are our options?
We decided to timebox the Selenium server experiments to a maximum of one week. During that time we will have another developer look at the problem to get a fresh perspective. There are three options to try:
1. Get the Selenium server to run as a service. As we already know it might not be possible.
2. Always log in to the server, start the Selenium server manually, and not log out. The Selenium server works fine when you are interactively logged in.
3. Try to start the Selenium server using a scheduled task. I don't think we have tried this.
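
For option 3, a scheduled task could look something like the following. This is an untried sketch, not something we have set up; the task name, paths and port are all assumptions:

```bat
REM Hypothetical sketch only: paths and names are made up.
REM Registers a task that starts the Selenium RC server at system startup.
schtasks /Create /TN "SeleniumServer" /SC ONSTART /RU SYSTEM ^
  /TR "java -jar C:\selenium\selenium-server.jar -port 4444"
```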

Another option, rather than getting the acceptance tests to run on the build server, is to run the tests on the developers’ machines, just like the unit tests.

I will give you our final decision in a future post and would of course appreciate your feedback on how you are working with automated acceptance tests and what tooling you use.

Friday, October 23, 2009

Presenting XP - our XP

I just held my fourth presentation of XP this month. The first was at the startup meeting, the second with the team, the third was for the customer, and today it was for the entire tech department of Creuna Sweden.

The first occasion was largely about selling the idea to the company. That presentation was based on the article What is XP by Ron Jeffries, along with a couple of slides I added about why this was good for us, both in this project and in the long run. For the rest of the presentations I have just had one slide, One Page XP by Bill Wake, and talked freely about XP from upper left to lower right, adding things as I pass the different stages.

The second occasion was to get the team started, trying to figure out the roles and responsibilities among us. What are the Interaction Designer, the Test Leader and the Project Manager supposed to do? We’ve decided to have the Interaction Designer be in charge of the user stories and coordinate them with the customer. Along with the Test Leader she then writes proposed Acceptance Tests for each story. The Test Leader is in charge of implementing the Automated Acceptance Tests using Selenium and also doing the manual testing where Selenium can’t be used. The Project Manager mainly does XP by playing The Planning Game and keeping the Overall Schedule. Other than that she keeps track of how we’re doing on time and estimates and arranges everything around us to let the rest of the team focus on producing - she’s somewhat like a Scrum Master.

The third was more of an introduction, where we wanted the customer to know what to expect and to find out how involved the customer wanted to be. We got the customer to agree to sit one day every week at our office, and also learned that more involvement in the story creation was wanted. We settled on a process where Angelina will produce the stories and then walk the customer through them in a separate meeting before we plan. It’s a solution we all think will work really well.

This last one was more educational, to let everyone else at the company know what XP is, what it can do for us, and what pros and cons we’ve found so far. About half of the group roughly knew beforehand what XP is, but only a few had more than shallow knowledge. The most discussion arose when I came to the Design Philosophy box in the lower left, where we talked about Good Design versus Simple Design. I stated that all Design Principles are still valid and that Simple Design just states that you shouldn’t make your design “future proof”. An example that came up was that you might not want to use a Factory for creating objects if you can only see a need for one implementation. I agreed, but added that you would still make that class implement an interface that others use to access it - making it easy to add a Factory later if needed, while not adding code that isn’t needed. I found it peculiar that no discussion at all was raised around Pair Programming - except our thoughts about how to deal with new developers added to the team.
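
The interface-now, Factory-later point can be sketched like this (a hypothetical Python example; the gateway names are made up): with only one implementation we skip the Factory, but callers still depend on the interface, so a Factory can be slotted in later without touching them.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The interface callers depend on, even with one implementation."""
    @abstractmethod
    def charge(self, amount): ...

class DefaultGateway(PaymentGateway):
    """The single implementation we can see a need for today."""
    def charge(self, amount):
        return f"charged {amount}"

def process_order(gateway: PaymentGateway, amount):
    # Callers only know the interface, not the concrete class,
    # so introducing a Factory later changes nothing here.
    return gateway.charge(amount)
```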

Thursday, October 22, 2009

The Planning Game

Today we played our first Planning Game. We started this morning by playing Planning Poker to estimate all the stories we’ve come up with. It turned out really well, but we didn’t manage to estimate all the stories in the hour we had before the customer arrived for the Release Planning and Iteration Planning. A small pile of assumed low-priority stories remained, but we probably managed to estimate more stories this way than would have been the case had we estimated the usual way.

Simon, Tommy and Angelina playing Planning Poker

When the customer arrived we first prioritized all the stories we had, including the ones we hadn’t had time to estimate. Once the priorities were in place it was a no-brainer to choose what to include in the first iteration. We simply calculated how much time we would be able to spend during the iteration and then selected stories from the top until we reached the available time. The stories with high priority had all been estimated in the morning session. We also created a release plan with some set dates for different parts of the project.
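
The selection step above is essentially a greedy walk down the prioritised list. A hypothetical sketch (story names and estimates are made-up numbers, not our real backlog):

```python
def plan_iteration(prioritised_stories, available_hours):
    """Take stories from the top until the next one would not fit."""
    chosen, used = [], 0
    for name, estimate in prioritised_stories:
        if used + estimate > available_hours:
            break
        chosen.append(name)
        used += estimate
    return chosen

# Stories in priority order, with estimates in ideal hours (made up).
stories = [("login", 8), ("search", 16), ("profile", 12), ("export", 20)]
```

With 40 available hours this picks the first three stories (8 + 16 + 12 = 36) and stops before "export", which would overshoot.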

Moving on to Iteration Planning, we had a bunch of proposed Acceptance Tests prepared to define each story. These were discussed by the Whole Team, including the customer. We slightly redefined a few of the stories by changing details in the proposed tests, and also came up with some extra tests needed to prove the functionality. Along with the definition of Acceptance Tests we also performed the Task Breakdown of each story. Here it seemed the developers were the only ones really needed though, so in the future we might do that after the Iteration Planning.

Friday, October 9, 2009

Cannonball into XP

A few weeks ago we were finally given the go-ahead to use eXtreme Programming on a new project. It’s a web application with a fair amount of business functionality. We have decided on the cannonball approach, as Kent Beck refers to it in his paper “Getting Started with XP”.

We both have some initial understanding of XP and have been eager to try it out, but haven’t had support until now. We have experience with some of the practices, but not all, and have spent the first few weeks reading up on the details in Extreme Programming Explained (Kent Beck) and Scrum and XP from the Trenches (Henrik Kniberg).

We already use continuous integration on all projects, and as senior developers we have plenty of experience with refactoring. We have used the Scrum process, and thereby stories and iterations among other things. Tommy has also been experimenting with TDD for a couple of years.

As for our concerns, none of us has done much pair programming or automated acceptance testing before. Another issue is our organization at Creuna, which does not map directly to XP. Being consultants, having an on-site customer can be difficult, for example.

We would love to get your input in the comments on how you started with XP. Also, during this project, don’t hesitate to comment on how you would solve similar problems.