Test-Driven JavaScript Development with JsUnit

The last time I used JsUnit was when I first joined Talis. At the time my colleague Ian Davis asked me to write a JavaScript client library for one of our platform APIs to make it easy for developers to perform bibliographic searches. It wasn’t a particularly difficult task and I completed it relatively easily. It was around the same time that Rob was extolling the virtues of Test Driven Development to me, and to try to prove his point we agreed to an experiment: he asked me to set aside the library I had written and see if I could develop it again using test driven development. That meant I had to figure out how to unit test JavaScript, and that’s when I found JsUnit. I did the exercise again and even I was impressed with the results. By having to think about the tests first, and design the interface to the library as I wrote each test, it evolved very differently from my original solution. Consequently it was also far superior.

Anyway, fast forward two and a half years and I find myself in a similar situation. We have only just begun writing bits of JavaScript code based around prototype.js to help us create richer user experiences in our products when we detect that JavaScript is enabled in the browser. This means I want to ensure that we apply the same rigour when writing these bits of code as we do in all other parts of the application – just because it’s JavaScript executed inside the browser doesn’t mean it shouldn’t be tested.

I’ve just spent the morning getting JsUnit installed and figuring out how to get it to run as part of a continuous integration process, as well as thinking about how to write tests for some slightly different scenarios. Here’s what I’ve discovered today:

Installing JsUnit

Couldn’t be easier … go to www.jsunit.net, download the latest distribution, and extract it into a folder on your system somewhere; let’s say
/jsunit for now. The distribution contains both the standard test runner and the JsUnit server, which you will need if you want to hook it into an ant build.

Writing Tests

In JsUnit we place our tests in an HTML Test Page, which is the equivalent of a Test Class. This test page must have a script reference to jsUnitCore.js so the test runner knows it’s a test. So let’s work through a simple example. Let’s say we want to write a function that returns the result of adding two parameters together. The Test Page for this might look like this:

  <html>
   <head>
    <title>Test Page for add(value1, value2)</title>
    <script language="javascript" src="/jsunit/app/jsUnitCore.js"></script>
    <script language="javascript" src="scripts/addValues.js"></script>
   </head>
   <body>
     <script language="javascript">
     function testAddWithTwoValidArguments() {
         assertEquals("2 add 3 is 5", 5, add(2,3) );
     }
   </script>
   </body>
  </html>

For now let’s save this file to /my-jsunit-tests/addTest.html
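Under the hood the test runner simply scans the Test Page for global functions whose names begin with test, calling the optional setUp() and tearDown() functions around each one. The following standalone sketch is a rough approximation of that convention – the tiny assertEquals and runTests functions here are stand-ins for what jsUnitCore.js and the real runner provide, not part of JsUnit itself:

```javascript
// Minimal stand-in for jsUnitCore's assertEquals, just for this sketch.
function assertEquals(comment, expected, actual) {
    if (expected !== actual) {
        throw new Error(comment + " - expected " + expected + " but was " + actual);
    }
}

// The function under test, as it might appear in addValues.js.
function add(value1, value2) {
    return value1 + value2;
}

// Two test functions; the runner picks up anything named test*.
function testAddWithTwoValidArguments() {
    assertEquals("2 add 3 is 5", 5, add(2, 3));
}
function testAddWithNegativeArgument() {
    assertEquals("2 add -3 is -1", -1, add(2, -3));
}

// A toy version of the runner's discovery loop over an explicit scope object
// (the real runner scans the page's window object instead).
function runTests(scope) {
    var passed = 0;
    for (var name in scope) {
        if (name.indexOf("test") === 0 && typeof scope[name] === "function") {
            if (typeof scope.setUp === "function") scope.setUp();
            scope[name]();
            if (typeof scope.tearDown === "function") scope.tearDown();
            passed++;
        }
    }
    return passed;
}

var page = {
    testAddWithTwoValidArguments: testAddWithTwoValidArguments,
    testAddWithNegativeArgument: testAddWithNegativeArgument
};
```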

To run the test you need to point your browser at the following local URL, passing the location of the Test Page to the test runner:

  file:///jsunit/testRunner.html?testPage=/my-jsunit-tests/addTest.html
The test will not run since we haven’t defined the add function. Let’s do that (very crudely) in scripts/addValues.js:

  function add(value1, value2) {
      return value1 + value2;
  }
Now if you go to that URL it will run the test and report that it passed. Excellent, we’ve written a simple test in JavaScript. Now let’s extend this a little. Let’s say I want to write something more complicated, like a piece of JavaScript that uses Prototype.js to update the DOM of a page. Is this possible? Can I do that test first? It turns out that you can …

Let’s say we have a div on the page called ‘tableOfContents’ and we want to use Prototype.js to dynamically inject a link onto the page that says [show]. Let’s also say we want to write a function that will toggle this link to say [hide] when the user clicks on it; this link will also set the visible state of the table of contents itself, which for now we’ll say is just an ordered list (OL). Our test page is going to be slightly more complex …

  <html>
   <head>
    <title>Test Page for tableOfContents</title>
    <script language="javascript" src="/jsunit/app/jsUnitCore.js"></script>
    <script language="javascript" src="scripts/prototype/prototype-"></script>
    <script language="javascript" src="scripts/tableOfContents.js"></script>
   </head>
   <body>
     <div id="tableOfContents">
     <h2 id="tableOfContentsHeader">Table of contents</h2>
     <ol id="list-toc">
     </ol>
     </div>
     <script language="javascript">
     function testTOC()
     {
         var title = $('lnkToggleTOC').title;
         assertEquals("should be Show the table of contents", "Show the table of contents", title);

         toggleTableOfContents();

         var title = $('lnkToggleTOC').title;
         assertEquals("should be Hide the table of contents", "Hide the table of contents", title);
     }
   </script>
   </body>
  </html>

There are some differences in this test. Firstly, the HTML contains some markup that I’m using as the containers for my table of contents. The table of contents has a header, and the contents are in the form of an empty ordered list. Now, I know that I want the JavaScript to execute when the page is loaded, so I’ve written this test to assume that the script will run and will inject an element with the id ‘lnkToggleTOC’, which is the show/hide link next to the heading. Therefore the first line of the test uses Prototype.js’s element selector notation to set a local variable called title to the value of the title of the element that has the id ‘lnkToggleTOC’. If the script fails to execute then this element will not be present and the subsequent assert will fail. If the assert succeeds, then we call the toggleTableOfContents function and repeat the same evaluation, only now we are checking to see if the link has been changed.

The code for tableOfContents.js looks something like this:

  var titleShowTOC = 'Show the table of contents';
  var titleHideTOC = 'Hide the table of contents';

  Event.observe(window, 'load', function() {
      $('list-toc').hide();
      var link = new Element('a', { 'id': 'lnkToggleTOC',
          'href': 'javascript:toggleTableOfContents()',
          'title': titleShowTOC, 'class': 'ctr' }).update("[show]");
      $('tableOfContentsHeader').insert({ after: link });
  });

  function toggleTableOfContents() {
      $('list-toc').toggle();
      if ($('list-toc').visible()) {
          $('lnkToggleTOC').update('[hide]');
          $('lnkToggleTOC').title = titleHideTOC;
      } else {
          $('lnkToggleTOC').update('[show]');
          $('lnkToggleTOC').title = titleShowTOC;
      }
  }

Now if we run this test in the same way we executed the previous test it will pass. I accept that this example is a bit contrived, since I know it already works and I’ve skimmed over some of the details around it. The point I’m trying to make, though, is that you can write unit tests for pretty much any kind of JavaScript you need to write, even tests for scripts that do DOM manipulation, or make Ajax requests, etc.
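The trick for Ajax-driven code is to stub out the transport so the test never touches a real server. Here is a rough, self-contained sketch of the idea – the Ajax.Request stub and the loadContents function are hypothetical examples, not code from this project; in a real test page you would swap Prototype.js’s Ajax.Request for a stub like this in the same way:

```javascript
// Hypothetical stand-in for Prototype's Ajax.Request that records the URL it
// was given and "responds" immediately with a canned payload.
var Ajax = {
    lastUrl: null,
    Request: function (url, options) {
        Ajax.lastUrl = url;
        options.onSuccess({ responseText: "Chapter 1\nChapter 2" });
    }
};

// Hypothetical function under test: fetch the table of contents entries
// and hand them to a callback, one entry per line of the response.
function loadContents(url, callback) {
    new Ajax.Request(url, {
        method: "get",
        onSuccess: function (transport) {
            callback(transport.responseText.split("\n"));
        }
    });
}

var entries = null;
loadContents("/toc.txt", function (lines) { entries = lines; });
```

Because the stub responds synchronously, the test can assert on `entries` and `Ajax.lastUrl` straight after the call, with no server running at all.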

Setting up the JsUnit server so you can run it in a build

JsUnit ships with its own ant build file that requires some additional configuration before you can run the server. The top of the build file contains a number of properties that need to be set; here’s what to set them to (using the paths that I’ve been using in the above example):

  <project name="JsUnit" default="create_distribution" basedir=".">

    <property
      name="browserFileNames"
      value="/usr/bin/firefox-2" />

    <property
      id="closeBrowsersAfterTestRuns"
      name="closeBrowsersAfterTestRuns"
      value="false" />

    <property
      id="ignoreUnresponsiveRemoteMachines"
      name="ignoreUnresponsiveRemoteMachines"
      value="true" />

    <property
      id="logsDirectory"
      name="logsDirectory"
      value="/my-jsunit-tests/results/" />

    <property
      id="port"
      name="port"
      value="9001" />

    <property
      id="remoteMachineURLs"
      name="remoteMachineURLs"
      value="" />

    <property
      id="timeoutSeconds"
      name="timeoutSeconds"
      value="60" />

    <property
      id="url"
      name="url"
      value="file:///jsunit/testRunner.html?testPage=/my-jsunit-tests/tocTest.html" />

  </project>

You can then type the following command in the root of the jsunit distribution; it launches the JsUnit server, executes the test, outputs a test results log file formatted just like JUnit’s, and reports that the build either succeeded or failed:

  ant standalone_test

Remember that in this example I’ve used a simple Test Page; however JsUnit, like any xUnit framework, allows you to specify Test Suites, which is how you would run multiple Test Pages. Also, the parameters in the build file wouldn’t be hardcoded in your continuous integration process but would rather be passed in, and you would want to call it from your project’s main ant build file … all of which is pretty simple to configure, once you know what it is you want to do and what’s possible.
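For reference, a Test Suite Page is just another HTML page whose script defines a suite() function returning a jsUnitTestSuite that aggregates the individual Test Pages. The script looks roughly like this – note that the jsUnitTestSuite constructor below is a minimal stand-in for the one the real runner exposes as top.jsUnitTestSuite, so in an actual suite page you would not define it yourself:

```javascript
// Stand-in for the constructor the JsUnit runner provides as top.jsUnitTestSuite.
function jsUnitTestSuite() {
    this.testPages = [];
    this.addTestPage = function (pageUrl) {
        this.testPages.push(pageUrl);
    };
}
var top = { jsUnitTestSuite: jsUnitTestSuite };

// In a real Test Suite Page this function lives in a <script> block and the
// runner calls it to discover which Test Pages to execute.
function suite() {
    var newSuite = new top.jsUnitTestSuite();
    newSuite.addTestPage("/my-jsunit-tests/addTest.html");
    newSuite.addTestPage("/my-jsunit-tests/tocTest.html");
    return newSuite;
}
```

You would then point the testPage parameter of the runner (or of the ant build’s url property) at the suite page instead of at an individual Test Page.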

Human Aspects of Agile Software Engineering

This is a very interesting way of looking at Agile. The talk is based on an assumption – that the people involved in software development processes deserve more attention. I agree with many of the points made in the talk, particularly around the intangible nature of the software we build, which is one of the reasons we are still, as an industry, trying to discover the best practices and a true sense of engineering discipline.

It’s well worth watching this talk, especially to get a different perspective on Agile, focusing on the social and cognitive effects. She used the Prisoner’s Dilemma to illustrate some of these points, and to be honest I had never thought of Software Engineering in those terms, yet as a metaphor it fits very well:

in software engineering this is reflected in the fact that software is an intangible product so sometimes we cannot verify that our colleagues behave in the same way … so what Agile software development does is it banishes the inability to verify that co-operation is reciprocated, by increasing the process transparency, in such a way it banishes the working assumption of the prisoners dilemma.

Automated Testing Patterns and Smells

Wonderful tech talk by Gerard Meszaros who is a consultant specialising in agile development processes. In this particular presentation Gerard describes a number of common problems encountered when writing and running automated unit and functional tests. He describes these problems as “test smells”, and talks about their root causes. He also suggests possible solutions which he expresses as design patterns for testing. While many of the practices he talks about are directly actionable by developers or testers, it’s important to realise that many also require action from a supportive manager and/or system architect in order to be really achievable.

We use many flavours of xUnit test frameworks in our development group at Talis, and we generally follow a Test First development approach. I found this talk beneficial because many of the issues that Gerard talks about are problems we have encountered, and I don’t doubt that every development group out there, including ours, can benefit from the insights he provides.

The material he uses in his talk and many of the examples are from his book xUnit Test Patterns: Refactoring Test Code, which I’m certainly going to order.

Holding a Program in One’s Head

from an excellent new essay by Paul Graham

Good programmers manage to get a lot done anyway. But often it requires practically an act of rebellion against the organizations that employ them. Perhaps it will help to understand that the way programmers behave is driven by the demands of the work they do. It’s not because they’re irresponsible that they work in long binges during which they blow off all other obligations, plunge straight into programming instead of writing specs first, and rewrite code that already works. It’s not because they’re unfriendly that they prefer to work alone, or growl at people who pop their head in the door to say hello. This apparently random collection of annoying habits has a single explanation: the power of holding a program in one’s head.

Whether or not understanding this can help large organizations, it can certainly help their competitors. The weakest point in big companies is that they don’t let individual programmers do great work. So if you’re a little startup, this is the place to attack them. Take on the kind of problems that have to be solved in one big brain.

so very true … !

Automated regression testing and CI with Selenium PHP

I’ve been doing some work this iteration on getting Selenium RC integrated into our build process so we can run a suite of automated functional regression tests against our application on each build. The application I’m working on is written in PHP. Normally, when you use Selenium IDE to record a test script, it saves it as an HTML file.

For example a simple test script that goes to Google and verifies that the text “Search:” is present on the screen and the title of the page is “iGoogle” looks like this:

  <html>
  <head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
  <title>New Test</title>
  </head>
  <body>
  <table cellpadding="1" cellspacing="1" border="1">
  <thead>
  <tr><td rowspan="1" colspan="3">New Test</td></tr>
  </thead><tbody>
  <tr>
      <td>open</td>
      <td>/ig?hl=en</td>
      <td></td>
  </tr>
  <tr>
      <td>verifyTextPresent</td>
      <td>Search:</td>
      <td></td>
  </tr>
  <tr>
      <td>assertTitle</td>
      <td>iGoogle</td>
      <td></td>
  </tr>
  </tbody></table>
  </body>
  </html>

You can choose to export the script in several other languages, including PHP, in which case the test script it produces looks something like this:

  <?php
  require_once 'Testing/Selenium.php';
  require_once 'PHPUnit/Framework/TestCase.php';

  class Example extends PHPUnit_Framework_TestCase
  {
      private $selenium;

      public function setUp()
      {
          $this->verificationErrors = array();
          $this->selenium = new Testing_Selenium("*firefox", "http://localhost:4444/");
          $this->selenium->start();
      }

      public function tearDown()
      {
          $this->selenium->stop();
      }

      public function testMyTestCase()
      {
          $this->selenium->open("/ig?hl=en");
          try {
              $this->assertTrue($this->selenium->isTextPresent("Search:"));
          } catch (PHPUnit_Framework_AssertionFailedError $e) {
              array_push($this->verificationErrors, $e->toString());
          }
          $this->assertEquals("iGoogle", $this->selenium->getTitle());
      }
  }
  ?>

The export produces a valid PHPUnit test case that uses the Selenium PHP Client Driver (Selenium.php). Whilst the script is valid and will run, you do need to add a little more to it before the test will correctly report errors. As it stands, all errors captured during the test are added to an array called verificationErrors, by catching the assertion exceptions that are thrown when an assert fails; in other words, if you ran this test as it is and it did fail, you wouldn’t know! To correct this we need to do two things. Firstly, each assert needs to have a message added to it which will be printed out in the test report if the assert fails. Secondly, we need to modify the tearDown method so that once a test has run, it checks the verificationErrors array and, if any failures have occurred, fails the test. After making these changes the PHP test script looks like this:

  <?php
  require_once 'Testing/Selenium.php';
  require_once 'PHPUnit/Framework/TestCase.php';

  class GoogleHomePageTest extends PHPUnit_Framework_TestCase
  {
      private $selenium;
      private $verificationErrors;

      public function setUp()
      {
          $this->verificationErrors = array();
          $this->selenium = new Testing_Selenium("*firefox",
                                   "http://localhost:4444/");
          $this->selenium->start();
      }

      public function tearDown()
      {
          $this->selenium->stop();
          if (count($this->verificationErrors) > 0) {
              $this->fail("VERIFICATION ERRORS:" . "\n"
                  . implode("\n", $this->verificationErrors));
          }
      }

      public function testSearchTextAndPageTitle()
      {
          $this->selenium->open("/ig?hl=en");
          try {
              $this->assertTrue($this->selenium->isTextPresent("Search:"),
                     "The string Search: was not found");
          } catch (PHPUnit_Framework_AssertionFailedError $e) {
              array_push($this->verificationErrors, $e->toString());
          }
          try {
              $this->assertEquals("iGoogle",
                     $this->selenium->getTitle(),
                     "The page title did not match iGoogle.");
          } catch (PHPUnit_Framework_AssertionFailedError $e) {
              array_push($this->verificationErrors, $e->toString());
          }
      }
  }
  ?>
Obviously, I have also given the PHP Class and test function slightly more meaningful names. Now you have a PHP Unit Test case that will use the Selenium PHP Client Driver with Selenium Remote Control to launch a browser, go to the specified URL, and test a couple of assertions. If any of those assertions fail, the tearDown method fails the test … pretty cool, right?
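Incidentally, the collect-then-fail-at-teardown pattern isn’t PHP specific; any xUnit port can use it. Here is a condensed, framework-free JavaScript sketch of the same idea – verifyEquals and the tearDown shown are illustrative names of my own, not part of any of the libraries mentioned:

```javascript
// Soft assertions: record failures instead of aborting the test mid-flight,
// so one broken verification doesn't hide the ones after it.
var verificationErrors = [];

function verifyEquals(message, expected, actual) {
    if (expected !== actual) {
        verificationErrors.push(message);
    }
}

// The teardown step turns any recorded failures into a single test failure.
function tearDown() {
    if (verificationErrors.length > 0) {
        throw new Error("VERIFICATION ERRORS:\n" + verificationErrors.join("\n"));
    }
}
```

The design trade-off is the same as in the Selenium export: verifications let a test keep gathering evidence, while the teardown check guarantees the run is still reported as failed.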

Well, now it gets better. Because the Selenium Client Driver has a published API which is pretty easy to follow, there’s no reason why you can’t just write test cases without using Selenium IDE … those who want to could even incorporate this into a TDD process. But for all this to hang together we need to be able to run a build on a Continuous Integration server which checks out the code, runs unit tests and Selenium regression tests against that code line, and only if all tests succeed passes the build.

We are currently using Ant and CruiseControl to handle our CI/automated build process. When running the automated suite of tests we need to ensure that the Selenium Remote Control server is also running, which creates some complications. The Selenium Remote Control server takes several arguments, which can include the location of a test suite of HTML-based Selenium tests – which is really nice, because the server will start, execute those tests and then end. Unfortunately you can’t invoke the server and pass it the location of a PHP-based test suite. This means you need to find a way to start up the server, then run your tests, and once they are complete, shut the Selenium server down.

Here are the Ant targets that I have written to achieve this; if anyone can think of better ways of doing this I’d welcome any feedback or suggestions. To run this example you’d simply enter the command “ant selenium”:

  <target name="selenium" depends="clean, init" description="Run the Selenium tests">
    <parallel>
      <antcall target="StartRCServer" />
      <antcall target="RunSeleniumTests" />
    </parallel>
  </target>

  <target name="StartRCServer" description="Start the Selenium RC server">
    <java jar="dependencies/SeleniumRC/lib/selenium-server.jar"
          fork="true" failonerror="true">
      <jvmarg value="-Dhttp.proxyHost=host.domain.com"/>
      <jvmarg value="-Dhttp.proxyPort=80"/>
    </java>
  </target>

  <target name="RunSeleniumTests" description="RunAllSeleniumTests">
    <sleep milliseconds="2000" />
    <echo message="======================================" />
    <echo message="Running Selenium Regression Test Suite" />
    <echo message="======================================" />
    <exec executable="php"
         failonerror="false"
         dir="test/seleniumtests/regressiontests/"
         resultproperty="regError">
         <arg line="../../../dependencies/PHPUnit/TextUI/Command.php --log-xml ../../../doc/SeleniumTestReports/RegressionTests/TestReport.xml AllRegressionTests" />
    </exec>
    <get taskname="selenium-shutdown"
      src="http://localhost:4444/selenium-server/driver/?cmd=shutDown"
      dest="result.txt" ignoreerrors="true" />

    <condition property="regressionTest.err">
      <or>
         <equals arg1="1" arg2="${regError}" />
         <equals arg1="2" arg2="${regError}" />
      </or>
    </condition>

    <fail if="regressionTest.err" message="ERROR: Selenium Regression Tests Failed" />
  </target>

A couple of notes: the reason I have to use a conditional check at the end of the selenium target is that if the exec task that runs the PHP tests were set to failonerror=true, the build would never reach the next line, which shuts the Selenium RC server down. To ensure that always happens I have to set the exec to failonerror=false, but this means I have to check the result of the exec myself: it returns 0 if successful, 1 if test failures exist, and 2 if there were any errors (preventing a test from being executed). Hence the conditional check sets regressionTest.err if either of the latter two conditions is true.
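To make the exit-code mapping explicit, the check that the condition block performs amounts to the following tiny helper – a hypothetical function of my own, shown in JavaScript purely for illustration:

```javascript
// PHPUnit's command-line exit codes: 0 = all tests passed,
// 1 = at least one test failed, 2 = an error stopped a test from running.
// The build should fail on either of the last two, which is exactly what
// the <or> of the two <equals> checks expresses in the ant target.
function regressionTestsFailed(exitCode) {
    return exitCode === 1 || exitCode === 2;
}
```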

Also, the server could take up to a second to start, but I can’t be sure precisely how long, so I have to use the Ant parallel task, which calls the task that starts the server and the task that runs the tests at the same time. The task that runs the tests has a two-second sleep in it, which should be more than enough time to allow the server to start. This all feels a little clunky, but at the moment it works very well.

In a nutshell, that’s how you integrate PHP-based automated Selenium regression tests into a continuous build.

Continuous Integration with PHP

I’m in the process of setting up a continuous integration environment for a new PHP project I’m starting. On our previous project, which was Java based, we used the following tools to meet similar requirements, allowing us to implement the project using a test driven approach and to automate build generation:

  • Cruise Control – For setting up a Continuous Build process.
  • Ant – A Java based build tool.
  • JUnit – A Java based xUnit testing framework
  • PMD – A tool that checks for common coding standards violations.
  • Cobertura – A Java based code coverage tool that calculates the percentage of code exercised by unit tests

I’ve managed to set up an analogous process for our PHP project using the following tools:

  • Cruise Control – For setting up a Continuous Build process.
  • Ant – A Java based build tool.
  • PHPUnit – An xUnit testing framework for PHP
  • PHP_CodeSniffer – A PHP tool that checks for common coding standards violations.
  • Xdebug – Debugging tool, which when combined with PHPUnit, can provide Code Coverage Metrics

It seems to work quite well; here’s the relatively simple ant build script that controls it all.

  <?xml version="1.0" encoding="UTF-8"?>
  <project default="all" name="DummyProject" basedir=".">
      <target name="all" depends="clean, init, test, sniff" />

      <target name="clean">
          <delete dir="doc/CodeCoverage" />
          <delete dir="doc/UnitTestReport" />
      </target>

      <target name="init">
          <mkdir dir="doc/CodeCoverage" />
          <mkdir dir="doc/UnitTestReport" />
      </target>

      <target name="test" description="Run PHPUnit tests">
          <exec dir="./" executable="TestRunner.bat" failonerror="true">
          </exec>
      </target>

      <target name="sniff" description="">
          <exec dir="./" executable="Sniffer.bat" failonerror="true">
          </exec>
      </target>
  </project>

I’m currently running this on a Windows machine, although it’s trivial to change it to work in a *nix based environment, which I’ll probably configure in the next day or so. I had a couple of problems installing PHP_CodeSniffer, although that was because I hadn’t installed PEAR properly. If you have any problems installing PHP_CodeSniffer under Windows then follow these instructions.

To install PEAR under Windows do the following, which assumes you have PHP 5.2.x installed in c:\php:

  cd c:\php
  go-pear
The interactive installer presents you with some options; if you follow the defaults you should be fine.
Once PEAR is installed you can install PHP_CodeSniffer like this:

  cd c:\php
  pear install PHP_CodeSniffer-beta

This will download the PHP_CodeSniffer package and install it into your php/PEAR folder.

Once this is done you can check to see if it has installed by calling phpcs with the -h flag, which will produce the following:

C:\php>phpcs -h
Usage: phpcs [-nlvi] [--report=<report>] [--standard=<standard>]
    [--generator=<generator>] [--extensions=<extensions>] <file> ...
        -n            Do not print warnings
        -l            Local directory only, no recursion
        -v[v][v]      Print verbose output
        -i            Show a list of installed coding standards
        --help        Print this help message
        --version     Print version information
        <file>        One or more files and/or directories to check
        <extensions>  A comma separated list of file extensions to check
                      (only valid if checking a directory)
        <standard>    The name of the coding standard to use
        <generator>   The name of a doc generator to use
                      (forces doc generation instead of checking)
        <report>      Print either the "full" or "summary" report

Now try it out …

C:\php>phpcs C:\Projects\dummyproject\test\

FILE: C:\Projects\dummyproject\test\AllTests.php
  1 | ERROR | End of line character is invalid; expected "\n" but found "\r\n"
  2 | ERROR | Missing file doc comment
  3 | ERROR | Constants must be uppercase; expected 'PHPUNIT_MAIN_METHOD' but
    |       | found 'PHPUnit_MAIN_METHOD'
 12 | ERROR | Missing class doc comment
 14 | ERROR | Missing function doc comment
 19 | ERROR | Missing function doc comment
 30 | ERROR | Constants must be uppercase; expected PHPUNIT_MAIN_METHOD but
    |       | found PHPUnit_MAIN_METHOD

Check the documentation for what the various command line switches do; you can also generate summary reports on an entire directory tree:

C:\php>phpcs --report=summary --standard=Squiz C:\Projects\dummyproject\test\

FILE                                                        ERRORS  WARNINGS
C:\Projects\dummyproject\test\AllTests.php                  24      0
C:\Projects\dummyproject\test\NodeTest.php                  10      0
C:\Projects\dummyproject\test\NodeTypeTest.php              11      0

Overall I’m quite happy with this setup, which for the most part was pretty straightforward. I have no doubt it will evolve over time, but I think it’s a good foundation to build upon.

Greeks vs Romans. Adaptive vs Plan-Driven Development.

Came across this wonderful essay over at Hacknot today. The essay starts off by decrying this assertion made by Raghavendra Rao Loka in the February 2007 edition of IEEE Software:

“Writing and maintaining software are not engineering activities. So it’s not clear why we call software development software engineering.”

The author of the essay goes on to offer a brief rebuttal of this, based on some comments by Steve McConnell, pointing out quite rightly in my opinion that:

Software development is slowly and surely moving its way out of the mire of superstition and belief into the realm of empiricism and reason. The transition closely parallels those already made in other disciplines, such as medicine’s evolution from witchcraft into medical science.

What I found really insightful, though, was how the author likened these two views (Loka vs McConnell) to another conflict:

These two represent the age-old conflict between the engineers and the artists, the sciences and the humanities. In the software development domain, some have previously characterized it as the battle between the Greeks and the Romans.

He then applies this same metaphor to the wider issue of adaptive vs plan-driven approaches to software development, and in doing so offers an interesting perspective on the two schools of thought:

By now you will probably have recognized the analogue between the cultural divide separating the Greeks and Romans and the methodological divide between adaptive and plan-driven approaches to software development.

We can think of the Greeks as representative of Agile Methods. The focus is upon loosely coordinated individuals whose talent and passion combine to produce great artefacts of creativity. Any organizational shortcomings the team might experience are overcome by the cleverness and skilful adaptivity of the contributors.

Alternatively, we can consider the Romans as representative of plan driven methods, in which the carefully engineered and executed efforts of competent and well educated practitioners combine to produce works of great size and complexity. The shortcomings of any of the individuals involved are overcome by the systematic methods and peer review structure within which they work.

It’s a wonderful analogy that I think illustrates some of the differences between the two approaches in a way that most practitioners would readily understand and perhaps even agree with. Which is the right approach? I guess it depends on you and your project, but most importantly on the kind of people you have in your team. I have certainly spent a considerable amount of time extolling what I believe are the virtues of the agile methodology on this blog.

I spent several years working in a very Roman-esque organisation, and I’d probably argue that the “competent and well educated practitioners” in such organisations rarely fit either of those terms, probably because the very nature of such systems requires those working within their confines to accept a certain level of conformity. Consequently individual flair, creativity, or even imagination are less important than the uniformity that such organisations require – a sort of fill-in-the-blanks approach to development where you’re responsible for only those small pieces assigned to you. Whether you know what the bigger picture is, or even understand it, isn’t important, because someone more senior in the team who assigns you your tasks does know. This often results in developers feeling disaffected, or less likely to feel any sense of personal ownership or responsibility, because they know they aren’t in a position to be responsible for anything.

I do find myself agreeing that part of what makes agile teams successful is this notion of heroics on the part of individuals. It works well when you have talented individuals who can work together at a reasonably small scale. How well does that scale up? I can’t personally answer that question. I’ve read plenty of accounts and listened to the likes of Scott Ambler explain how agile can work on large scale projects. I must admit I’m not sure I’m convinced … but that’s only because of my earlier point that this decision needs, in part, to be based on the type of people you have in your team.

Our development group at Talis is quite small – at the moment no more than 20 people – and it’s not for lack of trying; we’re constantly trying to recruit, but we look specifically for skilled developers who can fit into the culture that we have: individuals who are self motivated, self learners, who take a great deal of personal pride in what they do, take responsibility, and want a sense of ownership over what they are building. As a result we tend to be quite jealous about who we let into the team, but again that’s probably something unique to our little group and says more about us than about the Agile process.

I do find myself partly agreeing with the conclusion the author of the essay makes:

I favour a development method that is predominantly Greek, with sufficient Roman thrown in to keep things under control and prevent needless wheel spinning. The Greeks are a great bunch of guys, but they tend to put too much emphasis on individual heroics, and pay too little attention to the needs of the maintenance programmers that will follow in their path. The Romans are a little stuffy and formal, but they really know how to keep things under control and keep your project running smoothly and predictably.

I don’t agree that, in terms of software development, the Roman approach was ever truly able to keep things under control, or keep them running smoothly, let alone predictably. If it were, we wouldn’t have so many examples of failed projects that went over budget, overran their schedules, and totally failed to deliver what the customer actually wanted.

I personally prefer a predominantly Greek approach, and I see methodologies such as Scrum imposing the right level of Roman-esque control and structure to keep the project moving along smoothly and predictably, with the added bonus that the customer actually gets what they want.

Spikes, PHP and a Platform that just works

I’ve had a pretty good week. I’ve been totally engrossed in a project I’ve been working on since getting back from Xtech last week. Essentially I’ve been working on a spike with Andrew and Hardeep to extend the functionality in our Project Cenote concept car.

The purpose of the spike was twofold: firstly, to better understand how to build a funky new set of features into Cenote, and secondly, to let members of the team experiment with, and become familiar with, some technologies they haven’t used before.

In fact it felt quite good leaving work today, having gotten to the point where the little prototype is pretty much feature complete. What’s really impressed me isn’t necessarily what it does (which is cool!), but the speed with which we’ve been able to put it all together. The spike was timeboxed to two weeks, but the reality is that the bulk of the implementation has been completed within the last few days. It’s by no means a production system; it’s just enough to hopefully facilitate some of the discussions we hope to have both internally and externally … much like the first release of Cenote, which we open sourced recently.

The original version of Cenote was a read only application that allowed users to search for books and then mashed up the results with data held in various stores in the Talis Platform as well as external sources such as Amazon in order to provide users with some pretty useful information. This spike extends the original version by allowing users to use that data to create and share some really useful things.

I think there are some important reasons why we have been able to put this together so quickly. The technology stack has been kept very simple – it’s just an application built in PHP5, running under Apache 2. Furthermore, the application is built upon our Talis Platform, which is constantly evolving and becoming more and more powerful. I’m not saying that just as someone who has worked on building that platform; I’m saying it as someone who has been using and consuming its services primarily to build applications.

When Rob and I originally wrote Cenote, we were both impressed at how easily we were able to use the platform, as it was then, to put together a cool looking application in the space of a couple of days. If that impressed me at the time, then I’m doubly impressed at how simple it’s been to create an application that supports creation, deletion and updating of data.

The Role of Testing and QA in Agile Software Development

In our development group at Talis we’ve been thinking a lot about how to test more effectively in an agile environment. One of my colleagues sent me a link to this excellent talk by Scott Ambler, which examines the role of testing and QA in Agile Software Development.

Much of the talk is really an introduction to Agile development, which is worth listening to because Scott dispels some of the myths around agile and offers his own views on best practices, using some examples. It does get a bit heated around the 45 minute mark when he’s discussing database refactoring; some of the people in the audience were struggling with the idea he was presenting, which I felt was fairly simple. If you really want to skip all that, jump forward to the 50 minute mark where he starts talking about sandboxes. What I will say is that if you’re having difficulty getting Agile accepted in your organisation, this might be a video you want to show your managers, since it covers all the major issues and benefits.

Here’s some of the tips he has with regard to testing and improving quality:

  • Do Test Driven Development: the unit tests are the detailed design, and they force developers to think about the design. Call it just-in-time design.
  • Use Continuous Integration to build and run unit tests on each check-in to trunk.
  • Acceptance tests are primary artefacts. Don’t bother with a requirements document; simply maintain the acceptance tests, since the reality is that all a testing team will do is take each requirement and copy it into an acceptance test – so why introduce a traceability issue when you don’t need to? http://www.agilemodeling.com/essays/singleSourceInformation.htm
  • Use Standards and Guidelines to help ensure teams are creating consistent artefacts.
  • Code reviews and inspections are not a best practice. They are used to compensate for people working alone, not sharing their work, not communicating, poor teamwork and poor collaboration. “Guru checks output” is an anti-pattern. Working together, pairing, good communication and teamwork should negate the need for code reviews and inspections.
  • Short feedback loop is extremely important. The faster you can get testing results and feedback from stakeholders the better.
  • Testers need to be flexible, willing to pick up new skills, and able to work with others. They need to be generalising specialists. The emerging belief in agile is that there is no need for traditional testers.

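To make the first tip concrete, here is a minimal sketch of the TDD loop in JavaScript: the test is written first and acts as the detailed design. The `slugify` function and the home-made `assertEquals` are purely illustrative stand-ins (JsUnit, mentioned earlier, ships its own assertions), not anything from Scott's talk or from our codebase.

```javascript
// A tiny stand-in assertion, so the sketch is self-contained.
// JsUnit and similar frameworks provide their own assertEquals.
function assertEquals(expected, actual) {
  if (expected !== actual) {
    throw new Error("expected " + expected + " but got " + actual);
  }
}

// Step 1: write the test first. It pins down the desired behaviour
// and is, in effect, the detailed design of the function.
function testSlugifyLowercasesAndHyphenates() {
  assertEquals("agile-at-talis", slugify("Agile at Talis"));
}

// Step 2: write just enough code to make the test pass.
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, "-");
}

// Step 3: run the test; it throws if the design isn't met.
testSlugifyLowercasesAndHyphenates();
```

In a continuous integration setup (the second tip), a runner would execute tests like this on every check-in to trunk.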
Scott is a passionate and very convincing speaker. Some of the points he makes are quite controversial yet hard to ignore – especially his argument that traditional testers are becoming less necessary. I’m not sure I agree with all his views, but he has succeeded in forcing me to challenge my own, which I need to mull over, and for that reason alone watching his talk has been invaluable.

End of sprint, SCRUM and why I’m feeling so good

We’ve just reached the end of our eighth sprint on the project I’ve been working on. On Monday we’ll be doing our end of sprint demonstrations to customers, as well as internally to the rest of the company, and I have to say I’m feeling quite good about it. It’s a lovely day today and I feel like I need to chill (or, as Rob suggested, maybe I need to get a life 😉 ). Anyway, I’ve been sitting here reflecting on this month and there are a few things I want to talk about.

I’m fairly new to the SCRUM methodology, in fact this is the first project I’ve worked on that formally uses it. Our development group here at Talis has adopted the SCRUM process across all of our current projects with what I feel has been a great deal of success.

For me personally the transition from traditional waterfall approaches to agile methodologies has been a revelation in many ways. Before joining Talis I’d spent a number of years developing software using waterfall methodologies. What was frustrating about these traditional approaches was that you’d spend months capturing and documenting requirements, then a while analysing those requirements and designing your software product, before implementing it and then testing it. Any changes to the requirements invariably meant going through a process of change impact analysis and then repeating the whole process for the changed requirements, which naturally increased the cost to the customer.

A side effect of this, from the customer’s perspective, was that changing requirements during the project was a bad idea because of the extra costs it would incur. A consequence was that customers would often take delivery of systems which, after a couple of years of development, didn’t actually satisfy the requirements they now had. These same customers would then have to take out a maintenance contract with the vendor to get the systems updated to satisfy their new requirements.

From a developer’s point of view I often found this very demoralising: you knew you were building something the customer didn’t really want, but the software house you worked for pressed on because it had a signed-off requirements document and a contract that guaranteed it more money if the customer changed their mind. I often found that when we reached the end of a project, the delivery of the software to the customer was a very nervous and stressful time. The customer at that point had not necessarily had any visibility of the product, so there were usually a couple of months when your organisation was trying to get them to accept it – which invariably led to arguments over the interpretation of requirements, and sometimes scary-looking meetings between lawyers from both sides.

There was always something very wrong with it.

Since joining Talis, and transitioning to agile methodologies I can finally see why it was so wrong, and why agile, and in this case SCRUM, work so well.

For one thing, I’m not nervous about the end of sprint demonstrations. 🙂 The customers have been involved all along: they’ve been using the system, constantly providing feedback, constantly letting us know what we’re doing well and what we need to improve on.

Our sprints have been four weeks long. At the beginning of each sprint we agree which stories we are going to implement based on what the customers have asked us for; these can be newly identified stories or stories from the backlog. From previous sprints the customers have an idea of our velocity – in other words they, and we, know how much work we can get done in a sprint – so when we pick the stories for the sprint we make sure we don’t exceed that limit. This keeps things realistic. Any story that doesn’t get scheduled because it was deemed a lower priority than the stories that were selected gets added to the backlog.
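The scheduling rule above can be sketched in a few lines of JavaScript. This is purely illustrative (not Talis tooling, and the field names are made up): given a prioritised list of story estimates and the team's known velocity, stories are pulled into the sprint in priority order until capacity is reached, and everything left over goes onto the backlog.

```javascript
// Illustrative sketch of velocity-capped sprint planning.
// stories: array of { name, points }, already sorted by priority.
// velocity: total story points the team completes in one sprint.
function planSprint(stories, velocity) {
  var sprint = [], backlog = [], used = 0;
  for (var i = 0; i < stories.length; i++) {
    if (used + stories[i].points <= velocity) {
      sprint.push(stories[i]);       // fits within the sprint's capacity
      used += stories[i].points;
    } else {
      backlog.push(stories[i]);      // deferred; re-prioritised next sprint
    }
  }
  return { sprint: sprint, backlog: backlog };
}
```

With a velocity of 10 and stories estimated at 5, 8 and 3 points, the 8-point story would be deferred to the backlog while the 5- and 3-point stories fill the sprint.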

This iterative cycle is great! For one thing, customers are encouraged to change their minds, to change their requirements, because they then have to decide whether that change is a higher priority than the other items on the backlog. They are empowered to choose what means the most to them, and that’s what we give them! The customer doesn’t feel like the enemy, but like an integral part of the team, and for me that’s vital.

As a developer it feels great to know that your customers like your product … and why shouldn’t they, they’ve been involved every step of the way, they’ve been using it every step of the way.

I’ve only been here at Talis for ten months, and in that time I’ve had to constantly re-examine and re-evaluate not only what I think it means to be a good software developer but pretty much every facet of the process of building services and products for customers. For me it’s been an amazing journey of discovery and I’m pretty certain it’s going to continue to be for a very long time.

The really wonderful thing though is that in our development group I’m surrounded by a team of people who believe passionately that it’s the journey and how we deal with it that defines us, and not the destination. So we are constantly looking for ways to improve, that in itself can be inspiring.
So yeah … I feel good!

Our development group is always looking for talented developers who share our ethos and could fit into the culture we have here. If you’d like to be a part of this journey then get in touch with us. Some of us, including myself, will be at XTECH 2007 next month, so if you’re attending the conference come and have a chat with us.