Blog Archives

AODC Day 3: Help Authoring Tool Comparison

Two weeks ago I attended AODC 2010, the Australasian Online Documentation and Content conference. We were in Darwin, in Australia’s “Top End”. Over the last couple of weeks, I’ve been posting my summaries of the conference sessions, derived from my notes taken during the presentations. All the credit goes to the presenter. Any mistakes or omissions are my own.

Matthew Ellison presented a number of excellent sessions at this conference. His Friday session was called “Help Authoring Tool Comparison”. He discussed what a help authoring tool (HAT) can do for us as technical writers, then did a detailed comparison of a few popular tools.

What a HAT can do for you

HATs hide the complexity and allow you to concentrate on the content. You don’t need to worry about coding and scripting, conditional tags and other mechanics under the hood.

HATs produce some very nice output, such as browser-based help (WebHelp), PDF and many other types. On the whole, HATs produce the tri-pane output format of online help. If that’s what you need, they make it very easy to do.

They provide help-specific features such as context sensitivity, indexing, dropdowns and related topics.

You can also use HATs for single sourcing.

In summary, HATs cater to our specific needs as technical writers.

Things that may count against a HAT

HATs tend to be proprietary and non-standard. For example, a HAT that supports DITA may not use compliant DITA. This may mean you’d find it difficult to migrate to a different tool. It’s a trade-off between the ease of use and the loss of flexibility.

Compared to a full CMS with XML management, HATs offer limited facilities for content re-use and content management.

Examples of HATs

Matthew emphasised that all the HATs he discussed are excellent tools. His presentation covered the following tools, which are the most popular of the HATs available:

  • Adobe RoboHelp (also TCS2, which contains a slightly different version of RoboHelp that offers better integration with FrameMaker)
  • Author-it
  • ComponentOne Doc-To-Help
  • EC Software Help & Manual – a really nice tool.
  • MadCap Flare
  • WebWorks ePublisher – This tool has been around a long time. It’s not an authoring tool, but rather converts content from one format to another.

Matthew gave us a list of the features that almost all HATs have in common. Then he discussed some criteria for selecting a HAT, examining a number of tools and analysing the strengths and weaknesses of each tool. He covered the following aspects of each tool:

  • A short description of the tool
  • Workflow
  • UI and usability
  • Key strengths
  • Key weaknesses
  • The tool’s own help

I didn’t have time to take notes on everything covered in the presentation. Here follow the notes that I did take.

Adobe RoboHelp

RoboHelp stores its content as XHTML. You work on topics, basically one topic per file. RoboHelp publishes to a number of different formats. The AIR Help and WebHelp outputs are very good indeed.

I found this point interesting: if you want a printed document, Adobe recommends that you write your documents in Microsoft Word and link them to RoboHelp as the publishing engine. When printing, print the Word document. For other output formats, publish via RoboHelp. You can print directly from RoboHelp, but the output is not great.

If you have TCS2 RoboHelp, you can link FrameMaker documents in the same way as Word documents. FrameMaker gives you a much richer print capability.

The RoboHelp UI and usability are good. RoboHelp goes to some pains to retain a Word-like UI (pre-2007 Word). The UI for linking and importing is not so good.

Other notes:

  • You cannot share resources (stylesheets, icons, snippets) across multiple projects.
  • The download file is very large.
  • RoboHelp’s own help is not all that great, chiefly because it contains features that you cannot create from RoboHelp itself. The information is also not very well structured.


Author-it

Author-it stores its content in a single library in a database. You can connect to your own database: SQL Server, or JET / SQL Server Express (free). On the minus side, Author-it stores the content in a proprietary rather than a standard storage format.

Author-it offers a very powerful capability for content re-use.

You can publish to a number of different output formats. All print outputs are generated via Microsoft Word. Some see this as a clumsy and problematic dependency, but it does produce excellent Word documents.

The UI is very complex, requiring a steep learning curve. It has a very modern look and feel.

Author-it has a long list of capabilities, probably the longest of all the HATs. Some of the add-ons are very powerful, such as authoring memory, which tracks duplicate content. It has good support for localisation, and it also offers web-based authoring.

The help system is pretty good, and is recognisably generated from Author-it itself. It’s also mirrored on the web, so you can find the information via Google too.

ComponentOne Doc-To-Help

This is the original HAT. Originally the idea was to work in Word and convert to Doc-To-Help. But now you can also choose to edit in XHTML using the Doc-To-Help WYSIWYG editor.

To create styles, you actually create Word .dot files. At first use, this seems a bit odd. Doc-To-Help generates CSS from the .dot files.

It also has a database where you can store metadata.

Doc-To-Help offers a number of output formats.

The UI has improved over the years. It’s a professional-looking, ribbon-based UI.

Doc-To-Help has some cool features, such as the ability to create relationships between topics, similar to the way Author-it does.

It also integrates with Microsoft Sandcastle for documenting class libraries based on comments in the code.

Doc-To-Help’s own help is good.

EC Software Help & Manual

Help & Manual has a very nice WYSIWYG editor, where you edit in XML. It has its own DTD.

It imports content nicely. If importing from Word, save to RTF first. There’s no FrameMaker or DITA support.

The UI is very nice. Of all the tools, it’s the easiest and most comfortable to use (even though ribbon-based). The UI is driven by the table of contents. It offers a simple user experience, yet there are some good advanced features which you can find if you look for them.

One great advanced feature is the skins that you can use to achieve consistency across projects. You can also share resources, such as style sheets, across projects.

MadCap Flare

All editing in Flare is in XHTML, using the WYSIWYG editor.

Flare does a very good job of producing print and help output formats. There’s also a WebHelp Mobile output.

You can input DITA, as well as other formats.

The UI is fairly easy to learn, but has some stumbling blocks. You do need to understand CSS to get the most out of it.

Flare’s own help is the best help system Matthew has ever seen.

Flare also lets you offer your users the ability to collaborate on your help pages: as an add-on, you can buy a collaboration module so that users can comment on topics and read each other’s comments.

WebWorks ePublisher

Note that ePublisher is not an authoring tool. There is no editor. You use it to publish content from Word, FrameMaker or DITA to a large number of output formats.

ePublisher offers a large number of mappings that you use to map from your input styles to your output styles.

The UI and usability are slightly confusing. There are actually two products involved. ePublisher Express is very simple. That’s the one you use to publish the output. ePublisher Pro is where you create the mappings, and the UI is more complex.

The WebHelp output is not very pretty or professional. The TOC (table of contents) output is fixed by the TOC of the source document.

ePublisher’s own web-based help, published on a wiki, is just bad. On the other hand, the local help that you get when you install the product is much better.

Final comments from Matthew

Always test drive with real data before buying a tool. All the products discussed in this presentation are great tools!

My conclusion

This was a very thorough and in-depth analysis of the features, strengths and weaknesses of the most popular HATs around. I didn’t have time to take enough notes to do it justice. In particular, Matthew showed a useful workflow diagram for each tool. Thank you Matthew for another great presentation.

AODC day 2: What Kind of Assistance do Users Really Need?

This week I’m at AODC 2010, the Australasian Online Documentation and Content conference. We’re in Darwin, in the “top end” of Australia. This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Matthew Ellison, the presenter. Any mistakes or omissions are my own.

An aside, and a reflection of the fun nature of AODC conferences: At the beginning of this postprandial session, we noticed that quite a number of people were absent. “Good time for a prize draw!” exclaimed Tony Self. (You have to be present to claim your prize.) As usual though, even with such improved chances, I did not win the prize.

Matthew Ellison‘s presentation today asked the question, “What Kind of Assistance do Users Really Need?” He introduced his talk saying that it was a good follow-on from mine. (I had given the preprandial presentation, about engaging your readers in the documentation. I’ll blog about it soon.) Matthew now followed through with providing the kind of assistance that users really need. To do that, he says, we really need to understand our users.


Matthew covered the following topics in this session:

  • Background information and recent history that led Matthew to present this topic.
  • Current trends in user assistance design — what we’re doing now and what we think our users need.
  • Common traps for writers.
  • Step-by-step instructions, and whether they are the best solution for users.
  • A look at a research study that Matthew is involved in, and the results and recommendations coming from it.

Another aside: At this point Tony phoned Matthew on the mobile phone Matthew was using as a clicker! Hilarity broke out, as it so often does at AODC. Matthew, being an awesome presenter and well used to Tony’s antics, answered the phone with a grin and “Very funny, Tony” then returned to his presentation without breaking his stride.

Aims of UA

In this part of the session, Matthew took a look at the aims of UA (user assistance) or help. The principal aim is to solve problems and answer questions. Help can also make people aware of features and increase their efficiency in performing tasks.

At this point, Matthew asked us to think of the last time we had a question or problem and needed help.

I immediately thought of the problems I’d had with getting my internet connection to work at the hotel. “How do I get the hotel wi-fi broadband connection to work?”

The first 3 or 4 questions we came up with were all “how do I …”. Then a few different questions arose, such as “what is …”.

To illustrate the idea of cycles in design, Matthew showed some pretty cool pictures of bicycles 40 years ago, 20 years ago and today. The bikes of today share some design characteristics with the bikes of 40 years ago. In documentation and information design, ideas tend to cycle too. We looked at the recent trends towards FAQs and task-based help. Matthew thinks FAQs are a good thing. He showed us some examples of FAQs that work, including some from Twitter and Yahoo Groups.

Traps for technical writers

This may be a dangerous area, says Matthew. Some of the things he suggests are contrary to what he was originally taught about technical writing.

We should not:

  • Just transcribe information given by developers and SMEs.
  • Always give step-by-step instructions. Sometimes it’s enough to give just a simple hint or an answer to a single question.
  • Explain the obvious. If it’s easy for us to understand, maybe we don’t need to document it.
  • Blindly insist on consistency for its own sake.

Now there was another task for us: we had to write instructions telling people how to delete a file in Windows. Most of us chose to write step-by-step instructions. One group chose just a tip. Matthew suggested that a useful tip would be to tell people to press Shift+Delete to remove the file permanently. You may also want to let people know that they can recover files deleted accidentally.

This simple exercise led to much animated debate, as might be expected from a group of technical writers. Is information ever redundant? 🙂

Matthew gave us an example of the way Apple have addressed the problem of explaining only the things that need explaining. See the iTunes help: “iTunes How To”. It contains a useful collection of tips people may need. For example, if you click “Buffer Sizes”, you get a page showing a screenshot of the buffer size dialogue and a short paragraph explaining what buffer sizes are and the implications of choosing a larger or smaller size.

Louise from PayPal

Louise is a PayPal “Virtual Agent”. You can type in questions and Louise will answer you.

Where do the responses come from? They’re composed by authors, based on typical questions asked by users. Tony Self, the organiser of the AODC conference, has in the past signed up as a “virtual person” and helped to write such answers. As a virtual agent, you see the questions that people typically ask and then you write the answers. You can select the personality of your agent, e.g. humble and kind, or quirky and arrogant.

Video tutorials

Some video tutorials are very bad, but some are good. As an example of a good one, Matthew showed us a video introducing Morae Observer. The voice is calm and makes things sound simple. Each section is short. This video was created using Camtasia.

Study about the questions that people ask

Matthew has been taking part in a study at Portsmouth University in the UK. The reason for the study was the conviction that we should base our help on the questions that our users ask.

The study looks at when users ask questions and what types of questions they ask. It asks participants to tackle three tasks, each clearly explained. The explanation is about what the participant needs to do, rather than how to do it. There is a task in Word, a task using Google Maps (plan a route and view it in Street View) and a task using Flash.

The participants were not allowed to use the help. If they had questions, they had to ask the moderator sitting next to them. The moderator should answer only the question that was asked. This is known as the Wizard of Oz method.

Matthew and the university team used TechSmith Morae Observer to record everything that the participants did, including audio and video. This allowed them to analyse the participants’ actions and to draw conclusions.

The equipment they needed was simply a laptop with Morae installed, plus a webcam and microphone.

Results of the study

Note: Matthew explained that the results are only just becoming available. He can show us only partial, interim results at this stage. Also, I had to jot down the figures from Matthew’s live presentation, as they were not available at the time the slides were printed for our handbook. I’ve done my best to get them right. Matthew will publish the final results later.

The study classified the questions that users asked into 7 categories:

  • Meaning (What does this mean)
  • Reason (Why is this happening, or why should I do this)
  • Confirmation (Is this what I should be doing)
  • Location (Where do I go to do this)
  • Task (What do I do now, or how do I do it)
  • Response
  • Identity

Based on 7 participants out of 20, here is the number of questions that fell into each category:

  • Meaning: 8
  • Reason: 3
  • Confirmation: 74
  • Location: 27
  • Task: 49
  • Response: 3
  • Identity: 3

Participants often had trouble framing the question. Imagine how difficult it would be for them to ask the question online instead of to a person.

There was a surprisingly large number of questions in the “confirmation” category. Some participants asked repeatedly for confirmation or affirmation.

There were very few questions that fell into the reason, identity and response categories.

Conclusions — interim only:

  • Tasks are the most common question type.
  • Location is the key to solving the problems.
  • Meaning and reason are not very important in this particular study.

Matthew will publish the final results when available.

Matthew strongly encourages us to do a similar exercise. It is hugely enlightening to see how people tackle a task and the questions they ask.

My conclusions

This session was a lot of fun, especially because of the interaction between us and Matthew, and because of the lively discussions that arose. I’m looking forward to seeing the full results of the study and will publish a link as soon as it’s available. Thank you Matthew for a very interesting session.

AODC day 1: Turning Search into Find

This week I’m at AODC 2010: The Australasian Online Documentation and Content conference. We’re in Darwin, in the “top end” of Australia. This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Matthew Ellison, the presenter. The mistakes and omissions are all my own.

Matthew Ellison presented the first session of the conference. He called it “Turning Search into Find”. Tony Self, conference organiser extraordinaire, performed the introduction: “Matthew is from the UK. I must apologise for that”! I guess this gives you an idea of the informal nature of this conference. 🙂

Matthew’s talk covered these topics:

  • Why search is important.
  • Why search doesn’t always find, and what the obstacles are.
  • Innovative search techniques that clever people are using on the web.
  • The top 10 factors that will make your search more effective. Sometimes we have control of or input into the choice of the search tool.
  • Some practical pointers towards implementing a good search.

News flash: Matthew can use his phone as a remote clicker to move through his presentation.

AODC day 1 - Matthew talking about "Turning Search into Find"

Why search is important

Matthew pointed out that search is not necessarily the best tool for finding information, but it’s the one that most people want to use. They’re accustomed to using it, from their frequent use of Google and other web searches. Gone are the days when people were accustomed to using the index at the back of a book. Matthew quoted a study where half the people were given a tool with an index at the back and half had the search only. Results showed that the people who used the index were much more effective in finding the information. But when asked, the ones who used the search were more satisfied with the tool.

Many help systems now don’t have an index or table of contents at all. So the search had better be good!

Difference between find and search

Many tools have changed the word “find” to the word “search”. Even Windows did this a while ago. As Matthew said, the difference between the two terms is interesting. It’s a pity we can’t guarantee that people will find the information any more, just that they can search for it!

Matthew asked us to name some problems we may find with search. We came up with these:

  • Too many hits.
  • Synonyms. Search works well if you use the right word. But if you use the wrong word, you don’t find the information.
  • Stop words. The search tool overrides your terms because it thinks your term will return too many results. This means sometimes you can’t find what you’re looking for.
  • Complex search parameters, such as quotes, AND, OR etc. These conventions should be common across all searches.
  • You can’t ask questions.

Innovation in search techniques

Now Matthew showed us some innovative approaches that may help to improve the situation.

Google Suggest

A new development has appeared in Google search over the last 18 months: “Google Suggest”. Matthew calls it predictive search. As you start to type your search term, Google predicts and suggests what you want.

Personally, Matthew finds this has more impact than Google Wave, even though Google made far less fuss about it.

As you type, predictive search suggests the most common keywords that people have used that match your term. Then you can select the term from the dropdown list.

This reminded Matthew of the old experience of using an index but better, because not only does it give the first match alphabetically, it also gives the most popular match.

In an even more recent development, Google Suggest also takes into account your own recent searches.
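As a rough illustration (mine, not Matthew’s), predictive search of this kind can be sketched as prefix matching against a log of past queries, ranked by popularity. The query log below is entirely invented:

```python
from collections import Counter

# Invented query log: counts stand in for how often each search was run.
query_log = Counter({
    "what is my ip address": 50,
    "what is love": 30,
    "what is a hat": 5,
    "how much wood would a woodchuck chuck": 40,
})

def suggest(prefix, log, limit=3):
    """Suggest the most popular logged queries that start with the prefix."""
    candidates = [(count, q) for q, count in log.items()
                  if q.startswith(prefix.lower())]
    # Sort by popularity, most frequent first, and keep the top few.
    return [q for count, q in sorted(candidates, reverse=True)[:limit]]

print(suggest("what is", query_log))  # most popular "what is ..." queries first
```

Factoring in the user’s own recent searches, as Google Suggest now does, would just mean boosting the counts for queries that appear in that user’s history.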

Matthew had some fun asking us to guess the Google search suggestions for some phrases. Some of them were:

  • “What is” yields “What is my IP address?”
  • “How much wo” yields “How much wood would a woodchuck chuck”
  • “I like to ta” yields “I like to tape my thumbs to my hands to find out what it’s like to be a dinosaur”


Other sites with auto-suggest

Matthew showed other sites that provide an auto-suggest for their search, including a jobs site and British Airways. These searches offer a list of choices based on what you type in, with some synonym matching too. For example, if you type “IT” into the jobs search, it offers a list of jobs starting with “Computer”.

The dropdown suggestions also give you results where the middle of the word or phrase matches your search term. This is useful where you don’t know the official name of the station or airport.

Back to Google search

Google search does this now too. For example, the term “and bec” will bring up “posh and becks”. Google will also offer you alternative spellings.

Need to balance lots of functionality with ease of use

Many searches require you to understand boolean parameters. You need to know the difference between AND and OR.

Two online bookshops have different ways of balancing ease of use with useful functionality in their searches.

Borders UK (alas, now out of business) had a search that allowed you to enter the title, author or ISBN. It used predictive technology. It also categorised the results into groups, showing one group of all the books that matched the search, and another group of all the matching author names.

Blackwells offers a very simple search and also a separate advanced search, where you can fill in a lot of detail.

Faceted search

Faceted search is an alternative to a table of contents. The search classifies information by specific characteristics (facets). People can select what they’re interested in and drill down, in any order, as opposed to a table of contents which presents the information in a specific structure.


Matthew introduced the concept of the “scent of information”: if people can see that they’re getting nearer to the information they want, they’re quite happy to keep combining facets to narrow down their search.
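To make the drill-down idea concrete, here’s a minimal sketch (my own, with invented facet names and topics): each topic carries facet values, and selecting facets in any order simply intersects the matching sets.

```python
# Invented example topics, each tagged with facet values.
topics = [
    {"title": "Install the client", "role": "user", "kind": "step-by-step"},
    {"title": "Configure LDAP", "role": "administrator", "kind": "step-by-step"},
    {"title": "How licensing works", "role": "administrator", "kind": "conceptual"},
]

def drill_down(topics, **selected_facets):
    """Return topics matching every facet value the user has selected."""
    return [t for t in topics
            if all(t.get(facet) == value
                   for facet, value in selected_facets.items())]

# Facets can be combined in any order, unlike a fixed table of contents.
admin_topics = drill_down(topics, role="administrator")
admin_howto = drill_down(topics, role="administrator", kind="step-by-step")
```

Each extra facet narrows the result set, which is exactly what keeps the “scent of information” strong: the user can see the list shrinking towards what they want.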

What turns “search” into “find”?

Matthew gave some hints about how to make a search as useful and effective as possible:

  • “Stop” words let you exclude specific words from the index. This is useful to reduce the number of irrelevant hits. On the other hand, it may cause problems, for example if you want to search for “sort by date” and the word “by” has been excluded.
  • More useful is the ability to exclude certain topics from the search. For example, it makes sense to exclude popup topics or context-sensitive topics from the search results.
  • The search results should include an extract from the destination page. These extracts are called “synopses” or “context”.
  • Boolean search (using AND, OR and NOT) gives the user the power to increase or decrease the number of results returned. Interesting: Google uses an implied AND, whereas most help tools use an implied OR by default. Bear in mind that your users may be used to one or the other way of searching. For example:
    • Adobe AIR Help and WebHelp default to OR. Users can explicitly type AND or OR.
    • Same for MadCap WebHelp.
    • ComponentOne NetHelp defaults to OR and does not allow users to enter specific boolean terms.
    • Etc.
  • Phrase matching allows users to enter phrases in quotes.
  • Fuzzy matching — it would be great if the search knew a bit about linguistics and could offer related words. Google is really good at this sort of thing.
  • Faceted search and search filtering. A while ago, Microsoft had the concept of “Information Types”, but this never really came to anything. MadCap Flare’s WebHelp and DotNetHelp do support “concept keywords” and “search filters”.
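The implied-AND versus implied-OR difference above can be sketched with a toy matcher (this is my illustration, not any tool’s actual code):

```python
def matches(text, terms, mode="AND"):
    """Return True if the text matches the query terms.

    mode="AND": every term must appear (Google-style implied AND).
    mode="OR":  any single term is enough (the default in many help tools).
    """
    words = text.lower().split()
    hits = [term.lower() in words for term in terms]
    return all(hits) if mode == "AND" else any(hits)

topic = "sort entries by date"
# Implied AND is stricter: one missing term means no match.
print(matches(topic, ["sort", "date"], mode="AND"))    # True
print(matches(topic, ["sort", "colour"], mode="AND"))  # False
# Implied OR returns the topic as long as any term matches.
print(matches(topic, ["sort", "colour"], mode="OR"))   # True
```

The same toy also shows the stop-word problem: if “by” were stripped from the index, a phrase search for “sort by date” could never match exactly.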

The techniques we can use in user assistance

Here are some examples of the kind of faceting we could use in user assistance:

  • Role (administrator or user)
  • Work role (accounts or human resources)
  • Experience (beginner, advanced, etc)
  • What kind of information do you want? (Step by step, conceptual, etc.)

Ranking is another factor: for example, by the number of occurrences of the keyword, or by metadata.

Metadata is the key to flexible and effective search. So the search looks not only at the content, but also at other information that the author has added to the topic. This can help with synonym matching, ranking, etc. RoboHelp 8 has some great tools for adding search keywords manually and for auto-adding index keywords as metadata.
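One simple way to picture metadata-driven ranking (a sketch of mine, with invented topics and weights): score a hit more highly when the term appears in author-added keywords than when it only appears in the body.

```python
def score(topic, term):
    """Rank matches in author-added metadata above plain body matches."""
    term = term.lower()
    s = 0
    if term in (kw.lower() for kw in topic["keywords"]):
        s += 10  # metadata match: the author flagged this term as important
    s += topic["body"].lower().split().count(term)  # occurrences in the body
    return s

topics = [
    {"title": "Printing", "keywords": ["print", "pdf"], "body": "how to print a page"},
    {"title": "Exporting", "keywords": [], "body": "export print print settings"},
]
ranked = sorted(topics, key=lambda t: score(t, "print"), reverse=True)
print([t["title"] for t in ranked])  # metadata-tagged topic ranks first
```

The same scoring hook is where synonym matching would plug in: the author’s keywords can list terms that never appear in the body at all.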

Predictive search is great. This reduces the number of keystrokes the user has to make. There’s no excuse for our help not to use auto-suggest. It provides a better “scent of information”.

Worth thinking about: Predictive search may have a negative aspect, in that it channels us all towards the same search and therefore maybe the same content. This could cut out other content that people may have found by entering less popular search terms.

Matthew’s presentation also contains references to ways of implementing predictive search. For example, Google custom search and technologies such as PredictAd. The latter works in a very similar way to Google Suggest. Matthew spoke to the PredictAd developers and they said there’s no reason it shouldn’t be used for user assistance or documentation.

Adobe Forums used to have an awesome predictive search. (Adobe Forums don’t use this technology any more.) They categorised the search results, similar to the way Blackwells does. It was powered by technology from Jive Software: Clearspace. Matthew’s presentation contains a basic specification of how it works. I’m sure he’d send it to you if you’re interested.

During question time, Choco recommended that we look at eBay for a good example of faceted search.

My conclusion

This was a great presentation full of information, fun and interactivity. Thank you Matthew!

Update on 30 May 2010: Matthew’s slides for this presentation, “Turning Search into Find”, are now available for downloading from the Matthew Ellison Consulting web site.

AODC day 3 – Pattern language for information architecture

Last week I attended the 2009 Australasian Online Documentation and Content Conference (AODC) in Melbourne. This blog post is part of a series about some of the AODC sessions I attended.

Here are some notes I took from the session on a pattern language for information architecture, by Matthew Ellison. I hope these notes are useful to people who couldn’t be at the conference this year.

Pattern Language for Information Architecture

Matthew introduced pattern languages by saying that they may give you a practical way of capturing the techniques that work for you — a way of documenting the golden rules.

A pattern language is a structured method for describing good design practices within a specific field. Michael Hughes has done a lot of work on pattern languages in our field. Pattern languages establish a rule of thumb. They do not offer a rigid solution, but something you can use again and again when similar situations arise in a particular environment.

Pattern languages in architecture

Pattern languages were first developed about thirty years ago, originating in the architecture field. Matthew was very taken with Christopher Alexander’s book A Pattern Language: Towns, Buildings, Construction (1977). It even smells nice, says Matthew. The structure of the book, its pictures and diagrams, and the quaint language make it a book you can dip into and enjoy. It covers a wide field, from the design of towns down to the design of doorknobs.

Matthew showed us an example pattern from the book: “A Place to Wait”.

  • The pattern starts with the problem: “The process of waiting has inherent conflicts in it.”
  • Then it proposes the solution. As an example of the quaint language used: the pattern suggests that you should “…fuse the waiting with some other activity — newspaper; coffee; pool tables; horseshoes… where you can draw a person waiting into a reverie; quiet…”
  • Then there’s a really cute little sketch of what a waiting area might look like.

Pattern languages for UI design

Another field that uses pattern languages is user interface (UI) design. Matthew showed us what such a pattern formula (template) might look like. Once again, they start with a statement of the problem, then tell you where such a pattern would be used. Next the pattern offers a solution and some form of illustration.

One such pattern in UI design is “Pagination”. Matthew showed us how the list of pages at the bottom of Google search and various other sites all fit this pattern.

Pattern languages for information architecture

What do information architects do? There are a few definitions. A good one is that information architects are responsible for the overall organisation of content.

How can design patterns help? They allow content providers to apply tested architectures to improve the user’s experience.

Matthew listed the following types of design patterns:

  • Interface and layout (window and page layout).
  • Structure of information and navigation dynamics (TOC, related links, popups).
  • Content (information types, writing style and the way we assemble the content we write).

An example of an information architecture pattern: “Breadcrumbs”. The problem is: Users need to know their location in the document’s hierarchical structure, so that they can browse back to a higher level in the hierarchy. Matthew showed us some examples of breadcrumbs in various applications.
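As a small sketch of the breadcrumbs pattern in practice (my own illustration, with an invented topic path): the trail is just the topic’s chain of ancestor titles, each one a step the user can browse back to.

```python
def breadcrumbs(path, separator=" > "):
    """Build a breadcrumb trail from a topic's ancestor titles.

    Each ancestor in the trail is a link back to a higher level
    in the document hierarchy.
    """
    return separator.join(path)

toc_path = ["Home", "Administration", "User accounts", "Resetting a password"]
print(breadcrumbs(toc_path))
```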

Suggested components of an information architecture pattern:

  • The problem.
  • Usage (where the pattern is used).
  • The solution (a short bulleted list that describes the golden rule — fairly flexible and not too prescriptive).
  • An illustration.
  • The rationale (the reason why you would use this solution).

Matthew took us through some more information architecture patterns: “Content taxonomy”; “Signposting”; “Popups”. I don’t have any notes from this part of the session — I got too wrapped up in watching the examples. Matthew is sure to have the details 🙂


Michael Hughes proposed a design pattern for contextual help, to determine when and how we might use such help. Matthew showed us an example of embedded help from Microsoft Excel that conformed to the design pattern.

We looked at some design patterns in a few state-of-the-art online documents. One example is the UK Daily Telegraph online newspaper. Matthew discussed the design objectives of this site, and how they might relate to online documentation too. Notice the design elements, such as:

  • Signposting and visual breadcrumbs, near the top of the page.
  • Search, always at the top. Search is very important in all online newspapers.
  • List of related articles.
  • Related RSS feeds.
  • Link to in-depth background information that supports the story.
  • Pictures.
  • Link to feature article.

A sports report and a current affairs item are visually and spatially very much the same. This consistency makes these newspapers easy to use online.

We also looked at a government site showing UK planning and building regulations. It also has a standardised pattern, with each element in a predictable place.

How can we define our design patterns?

Matthew suggests the following steps:

  • Create your pattern statements (problem, usage, solution, rationale, etc).
  • Decide whether the pattern statements fit into a style guide.
  • Decide whether to enforce your patterns, e.g. by building them into an XML schema or DTD.

There are different opinions about whether a design pattern would fit into a style guide. IBM talks about enforcing your design patterns in structured authoring via XML, e.g. as DITA topic specialisations or map domains.
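A schema or DTD is the rigorous way to enforce a pattern, but even a lightweight check can catch topics that are missing a required component. The sketch below uses Python’s standard XML library; the element names are invented for illustration and are not from IBM’s or Matthew’s material.

```python
import xml.etree.ElementTree as ET

# Invented element names; a real DTD or schema would define these.
REQUIRED = ("problem", "usage", "solution", "rationale")

def conforms(topic_xml):
    """Check that a pattern-statement topic contains every required element."""
    root = ET.fromstring(topic_xml)
    return all(root.find(tag) is not None for tag in REQUIRED)
```

For example, `conforms("<pattern><problem/><usage/><solution/><rationale/></pattern>")` passes, while a topic missing its rationale would fail the check.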

Thank you for another very cool and informative presentation, Matthew.

AODC day 2 – Design of context-sensitive help

This week I’m attending the 2009 Australasian Online Documentation and Content Conference (AODC) in Melbourne. Today is the second day of the conference.

Here are some notes I took from the session on user-centred design of context-sensitive help, by Matthew Ellison. I hope these notes are useful to people who couldn’t be at the conference this year. The AODC organisers will also publish the session slides and supplementary material once the conference is over.

User-Centred Design of Context-Sensitive Help

With a laugh, Matthew introduced himself:

“I am the equivalent of Tony Self but with a funny accent and a better shirt.”

In this presentation, Matthew concentrated on the information design side of things, rather than the technical implementation of context-sensitive help (CSH). He gave us a definition of CSH: “Direct access to help that is focused on the user’s current needs.” In practice, he said, we tend to provide help based on where the user is in the UI.

What do we mean by “context”? The more tightly we can tie it down, the better help we can provide. For example, we might be able to detect and respond to:

  • The window, dialogue or tab the user is on.
  • The control they’re trying to use.
  • The zoom level or other settings.
  • Previous history, such as other screens visited.
  • The user’s role.
  • Printer connectivity, and so on.
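One way to picture “tying the context down” is a set of rules mapping context attributes to help topics, with the most specific matching rule winning. This Python sketch, with invented rule contents and topic IDs, shows the idea:

```python
def pick_topic(context, rules):
    """Return the topic of the most specific rule whose conditions all match."""
    best, best_score = None, -1
    for conditions, topic in rules:
        if all(context.get(k) == v for k, v in conditions.items()):
            if len(conditions) > best_score:
                best, best_score = topic, len(conditions)
    return best

# Example rules (invented): a general topic for the Print window,
# and a more specific one for the Copies control within it.
rules = [
    ({"window": "print"}, "printing-overview"),
    ({"window": "print", "control": "copies"}, "choosing-copies"),
]
```

A user sitting on the Copies control of the Print window would get the “choosing-copies” topic rather than the generic printing overview.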

Looking to provide user-centred help, let’s look at how users behave with respect to help:

  • Most people don’t consult the help ahead of time.
  • They’re usually busy with the task when they need help.
  • They’ll only use help if they think it will give them useful information. So Matthew says they’re unlikely to click a little question mark, because it is not a convincing indicator of useful information.
  • They want to be interrupted for as short a time as possible.

We also need to consider the questions a user may ask when confronted with a task or screen. For example: What is this screen for? What do I need to enter in this field? Matthew thinks it’s unlikely they want task information in a context-sensitive topic, because they’re already busy with the task.

Matthew has noticed that some software vendors are moving towards procedural rather than reference topics. (For example, see the CSH for MadCap Flare from versions 3 to 4.) Apparently this is in response to feedback from users. But Matthew thinks this may be a step in the wrong direction.

As a consequence, one mistake the Flare help makes is to pack all the reference information into the steps of the task topic. For example, a single step that tells you to complete the options on a screen becomes very long, because it contains all the reference information about each option.

By the way, Matthew remarked that he is a great fan of MadCap help.

In the Captivate help, the help links are now labelled as “Learn more” with an information icon. Unfortunately the help topics are very long, especially for online help use.

So Matthew thinks the CSH topics should answer the questions: “What is this? What should I enter? Which should I select?” And the topics should be written specifically for CSH rather than linking only to documentation written for other purposes.

Now Matthew discussed IBM’s Task Support Clusters, designed by Michael Hughes. The idea is that most people who use the help come in via CSH: they press Help within the application rather than arriving via search.

So IBM identified the locations in the application where help is needed, and then built a self-contained group of topics that support that particular task.

A cluster starts with a keystone concept topic (which provides the critical conceptual information), then provides related task and reference topics. The topics in a cluster are all interrelated by links pointing to each other, but no links point outside the cluster, so users cannot get lost by following links all over the help system.
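The containment rule for a cluster is simple enough to check automatically. Here is a hedged Python sketch; the topic IDs and link structure are invented, and a real check would of course read the links from your source files.

```python
def links_stay_inside(cluster_topics, links):
    """True if every link target in the cluster points back into the cluster."""
    cluster = set(cluster_topics)
    return all(target in cluster
               for targets in links.values()
               for target in targets)

# An invented example cluster: a keystone concept plus task and reference topics.
cluster = ["keystone", "task-a", "reference-a"]
links = {
    "keystone": ["task-a", "reference-a"],
    "task-a": ["keystone", "reference-a"],
}
```

This cluster passes; adding a link from “task-a” to some topic outside the cluster would make the check fail.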

The idea is that 80% of the time, the users will read the keystone concept topic and that will be enough. Don’t try to answer all questions in this topic, just the most likely questions users will have.

Now Matthew discussed contextual help. This is additional information that’s actually part of the application and supplements the UI. It’s more likely that people will read this, because they don’t have to open up the help.

This is inline help, available via popups, expanding/dropdown text, or simply on the screen. This help must be well written, because there’s very little space. The recommended length is one to three short sentences for the entire page, but just one phrase or sentence for a single field.

For the help links, use the actual question the user may ask. For example, the link might say “What is a member name?”, and clicking it pops up the answer. Or “See some examples” might pop up some example values for a particular field.

MadCap Flare has this kind of popup help. MadCap is also working on functionality that will allow us to put it into our own help systems when authoring with Flare (currently only for DotNet applications).

Quick help or popups should link to the full help system for deeper information. MadCap Flare is doing this in its new popup help.

SnagIt’s balloon help is great — it knows what you’ve just done and suggests what you might want to do next. After a while the balloons disappear, because it assumes you know how to do it now.

Guided help may also be a good way of providing procedural help; see Matthew’s presentation from last year’s AODC conference. The application guides you through your task and actually stops you from doing the wrong thing. SHO Guide is a very interesting product that enables this.

Thank you for another great session, Matthew.
