
A blitz test of the documentation

This week at Atlassian we held a blitz test of the documentation due to be published with an upcoming software release. It was fun and productive. We made some nice improvements to the documentation and, as a bonus, found a cute bug in the application itself when we tested it against the step-by-step guides.

Our QA team first introduced blitz tests to test the applications themselves rather than the documentation. We decided to try the technique on the documentation too.

What is a blitz test?

[Photo: The doughty blitz testers]

In a blitz test, people from all walks of life spend an hour doing concerted and concentrated testing of specific aspects of the product. It’s a way of timeboxing the testing. It’s also a way of viewing the product as a whole rather than as the disparate bits that have previously been reviewed individually. Best of all, it’s a way of working together on improving the product.

To say “people from all walks of life” is a bit of an exaggeration, of course. The point is that developers, technical writers, product managers and various other people join the QA team, and we all work together for that hour on testing the product due for imminent release.

As technical writers, our product is the documentation. For this blitz test, we targeted the documentation for the release of Atlassian Crowd 2.1.

Preparing for the blitz test

First I contacted the development team leader and product manager to check whether they liked the idea of the team taking part in a documentation blitz test. They were enthusiastic. The Crowd team are great. 🙂

The next step was to create a test plan that everyone could work from, containing the following information:

  • Date and time of the blitz test.
  • Names of the testers.
  • Location. We held a kickoff meeting in the team area and then the testers worked at their own desks.
  • Aim and scope of the blitz test.
  • A list of all the new and updated documents that I wanted tested, grouped by functional area in the application.
  • The test environment. In our case, we were testing the draft documentation on our production documentation wiki.
  • Where and how people could record their test results and feedback.

At the appointed date and time, we all gathered in the area around the development team’s desks to confirm that everyone knew the areas of the application and the documents to be tested. We allocated one or more areas and documents to each person, and checked that everyone knew where to put their results and which chat room to join.

It’s on! People rushed back to their desks, the room took on an air of frenzied quiet, and the chat room and wiki exploded into action.

How did the testers provide their test results?

We offered various ways that people could record their results:

  • Edit the documentation pages directly. The documentation is on a wiki, so that was easy.
  • Add comments to the documentation pages. The pages were in draft status, so the content and the comments were visible to Atlassians only.
  • Chat about the test findings in the team’s online chat room.
  • Ping me on IM.
  • Add a child page to the test plan on the internal wiki, documenting the test findings there.
  • Add a comment to the test plan on the internal wiki.

The results

The development team were positive and enthusiastic about the experience and jumped in with a will.

  • The development team leader suggested a few small additions to the more technical documents, clarifying the impact of changes and new features.
  • A developer pointed out a page where the terminology used on the screens had changed again after I’d updated the documentation. We needed new screenshots and a change to a page name. Phew, that was a biggie.
  • Other developers pointed out a number of small clarifications.
  • Someone pointed out a change in behaviour that was actually a bug. Nothing major: the “Continue” button in one of the wizards no longer works as before, so you now click a tab instead when you want to move to the next stage. 🙂 I’ve updated the documentation, and we’ll fix the bug in the next release. At which point I’ll change the documentation back again.
  • Someone else found an issue that he’d fixed in the code, but had forgotten to mark as “fixed” in the issue tracker. The result was that I hadn’t updated the documentation, of course. This was a fairly big find too. Phew again.

Some feedback came directly to me via IM. What a good thing that you can copy and paste from an IM conversation!

Did anyone update the documentation itself? Yes indeedy! The product manager tweaked the release notes, and noted that he would produce some performance stats to illustrate a point.

What did the developers think of the documentation blitz test?

[Photo: The Crowd team's standup toy at rest in the clasp of a monkey]

After the blitz test was over, I asked the development team what they thought of the experiment. Here’s their feedback about why they thought it was beneficial:

  • Team ownership of the documentation.
  • Team building.
  • The obvious verification that what we’ve specified in the docs is correct.
  • It can be worthwhile double-checking the docs in case functionality or edge cases are missed in the docs (because we may have subtly changed the implementation without notifying the tech writers).

Was the blitz test a Good Thing?

From my point of view as a technical writer, it was very valuable.

The development team and product manager had already reviewed the documentation piece by piece as I developed it. The blitz test gave them the opportunity to view it as a whole.

People continued updating the documentation the day after the blitz. We released the software and published the documentation two days after the blitz test. We’re on a roll!
