Blog Archives

Can a technical writer write fiction?

Can technical writers do other types of writing, in particular fiction? Oh yes indeed! I’ve just finished reading Dave Gash’s new science fiction novel, The ELI Event. It’s a lot of fun.

I fell in love with the characters, including the non-human ones. I chewed my nails in the tense moments, cried and laughed in the good moments, gritted my teeth when things went wrong. When it was all over I felt great satisfaction at the way things turned out, coupled with that sweet sorrow you get when you finish a good book.


Dave is a friend of mine. I met him at a technical communication conference two years ago, and we’ve bumped into each other at a couple of conferences since. He’s great. His other big talent is compiling and hosting geek trivia quizzes. 😉

At first I was worried that knowing Dave would spoil my experience of the book. Would I hear his voice speaking through the text, preventing that essential suspension of disbelief that good sci fi demands and facilitates? Even worse, would I feel obliged to enjoy the book? The answer is “No” on all counts. The book grabbed me from page 2 and pushed me through all the way to the end.

Why page 2? Well, it took me most of page 1 to forget my worries about knowing the author. I’m sure the book will grab you from page 1!

Here’s a challenge 😉

Can you find anything in The ELI Event to indicate that a technical writer wrote it?

Details of the book

Dave Gash provides the book in paperback and in Adobe ePub format. You can order it from his website: The ELI Event

Title: The ELI Event
Author: Dave Gash
Publisher: Dave Gash with Xlibris.

AODC Day 3: Introduction to DITA Conditional Publishing

A couple of weeks ago I attended AODC 2010, the Australasian Online Documentation and Content conference. We were in Darwin, in Australia’s “Top End”. This post is my summary of one of the sessions at the conference and is derived from my notes taken during the presentation. All the credit goes to Dave Gash, the presenter. Any mistakes or omissions are my own.

This year’s AODC included a number of useful sessions on DITA, the Darwin Information Typing Architecture. I’ve already written about Tony Self’s session, an update on DITA features and tools, and about Suchi Govindarajan’s session, an introduction to DITA.

Now Dave Gash presented one of the more advanced DITA sessions, titled “Introduction to DITA Conditional Publishing”.

At the beginning of his talk, Dave made an announcement. He has presented in countries all over the world, many times, and he has never ever ever before done a presentation in shorts!


Introducing the session

To kick off, Dave answered the question, “Why do we care about conditional processing?” One of the tenets of DITA is re-use. You may have hundreds or even thousands of topics. In any single documentation set, you probably don’t want to publish every piece of the documentation every time.

Conditional processing is a way to determine which content is published at any one time.

Dave’s talk covered these subjects:

  • A review of DITA topics, maps and publishing flow
  • The use of metadata
  • The mechanics of conditional processing
  • Some examples

Metadata and the build process

Dave ran us through a quick review of the DITA build process and the concept of metadata. Metadata has many uses. Dave talked specifically about metadata for the control of content publication.

Metadata via attributes

There are a number of attributes available on most DITA elements. These are some of the attributes Dave discussed:

  • audience – a group of intended readers
  • product – the product name
  • platform – the target platform
  • rev – product version number
  • otherprops – you can use this for other properties


For example, the audience attribute applied to a single step:

<step audience="advanced"> ... </step>

Using metadata for conditional processing

Basically, you use the metadata to filter the content. For example, let’s assume you are writing the installation guide for a software application. You may store all the instructions for Linux, Windows and Mac OS in one file. When publishing, you can filter the operating systems and produce separate output for each OS.
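As a sketch of that idea, the installation steps might sit side by side in one task topic, each flagged with a platform attribute (the element names follow the DITA task model; the file names and wording are my own hypothetical illustrations, not from Dave's slides):

```xml
<steps>
  <!-- Only the matching step survives filtering, depending on the build conditions -->
  <step platform="linux"><cmd>Run <filepath>install.sh</filepath> from a terminal.</cmd></step>
  <step platform="windows"><cmd>Double-click <filepath>setup.exe</filepath>.</cmd></step>
  <step platform="mac"><cmd>Open <filepath>Installer.dmg</filepath> and drag the application to Applications.</cmd></step>
</steps>
```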

In general, you can put metadata in these 3 locations (layers):

  • maps – metadata on the <map> element. You might use metadata at this layer to build a manual from similar topics for specific versions of a product.
  • topics – metadata to select an entire topic. You might use metadata at this layer to build a documentation set for review by a specific person.
  • elements – metadata on individual XML elements inside a topic. You might use this metadata to select steps that are relevant for beginners, as opposed to intermediate or advanced users.
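Hypothetical snippets for each of the three layers (my own illustrations, not taken from the session):

```xml
<!-- Map layer: this topic reference only applies to version 2.0 builds -->
<topicref href="new_features.dita" rev="2.0"/>

<!-- Topic layer: the whole topic is aimed at administrators -->
<topic id="advanced_config" audience="administrator">
  <title>Advanced configuration</title>
</topic>

<!-- Element layer: a single step aimed at novices -->
<step audience="novice"><cmd>Accept the default settings.</cmd></step>
```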

Dave gave us some guidelines on how to decide which of the above layers to use under given circumstances.

Defining the build conditions to control the filtering

Use the ditaval file to define the filter conditions. This file contains the conditions we want to match, and the actions to take when they are matched. The build file contains a reference to the ditaval file, so that the conditions drive the build.
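A minimal ditaval file might look like this (the file contents are my own sketch; the attribute value ties back to the earlier audience example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<val>
  <!-- Leave advanced material out of the beginners' guide -->
  <prop att="audience" val="advanced" action="exclude"/>
</val>
```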

Dave talked us through the <prop> element in the ditaval file, and its attributes:

  • att – attribute to be processed
  • val – value to be matched
  • action – action to take when match is found

A hint: You can use the same attribute in different layers (map, topic and element). Also, you don’t need to specify the location. The build will find the attributes, based on the <prop> element in the ditaval file.

Next we looked at the “include” and “exclude” actions. Remember, the action is one of the attributes in the <prop> element, as described above. Here’s an example of an action:

<prop att="audience" val="novice" action="exclude" />

Dave’s recommendation, very strongly put 🙂, is:

Don’t use “include”. Stick to “exclude”.

The basic rule is: Everything not explicitly excluded is included.
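Following that rule, a hypothetical Windows-only build would exclude the other platforms rather than "including" Windows (my own sketch):

```xml
<val>
  <!-- Everything not excluded — Windows content, plus all unconditional content — is published -->
  <prop att="platform" val="linux" action="exclude"/>
  <prop att="platform" val="mac" action="exclude"/>
</val>
```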

Dave’s final recommendation

Go get DITA and play with it!

My conclusion

It was great to have a focus on the conditional publishing side of DITA. It’s something I haven’t had a chance to get into before. Now I know the basics, which rounds off the DITA picture for me. Thank you Dave for an entertaining and information-packed talk.

Update on DITA Features, Tools and Best Practices

AODC day 1: The Power of Controlled Language

This week I’m at AODC 2010: The Australasian Online Documentation and Content conference. We’re in Darwin, in the “top end” of Australia. This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Dave Gash, the presenter. The mistakes and omissions are all my own.

Dave Gash gave an information-rich and focused talk titled “The Power of Controlled Language”. It was about controlled languages, specifically STE (Simplified Technical English). He covered the following aspects of a controlled language:

  • What it is.
  • Why you may want to use it.
  • Some examples.
  • The software tools you can use.

A true story

Dave started with a true story. As an experienced traveller, he likes to make sure that he doesn’t get overcharged for things. One of the things he does when he checks in to a hotel is to ask the front desk to turn off the porno channel. “That way I can’t get to it and they can’t accidentally bill me for it. As if I’d watch porno on the hotel TV anyway. That’s what the wireless broadband is for.”

A while ago, when Dave was checking in to a hotel, he asked:
“Is the porno channel in my room disabled?”
The laconic answer was, “No mate, it’s just regular porn.”

This story is relevant to Dave’s topic. What a controlled language aims for is “one word, one meaning”. In conversational English, we use the word “disabled” in two ways. This can result in miscommunication, as illustrated in Dave’s story.

AODC day 1: The Power of Controlled Language

AODC day 1: The Power of Controlled Language

What is controlled language?

A controlled language is a highly-structured, limited language that is intended to make technical documentation easier to read and understand. It’s always a natural language, not a contrived or artificially-constructed language.

The characteristics of a controlled language:

  • Simplified grammar and style.
  • Limited set of words and meanings.
  • Thesaurus of unapproved terms and their alternatives.
  • Strict guidelines for adding new terms, e.g. terms needed for your industry or company.

The basis for most controlled languages today is STE (Simplified Technical English). The official specification is ASD-STE100.

Why should we care about controlled languages?

English is a rich, subtle language. This means that it’s also complicated.

Complexity confuses the readers and makes the writers’ work harder. Complexity makes translation more difficult, more expensive and more prone to mistaken translations. Complexity also opens up the possibility of legal confusion and liability.

Normalised vocabularies benefit everyone.

As another illustration of ambiguity and the problems it can cause, Dave told a story about a snowplow operator who had been told to “clear the runway”. So he did. And caused a plane to abort its landing, because there was a snowplow on the runway!

Disadvantages of controlled language

There are disadvantages too. A big one is the resistance to change, from management and writers. It’s time consuming to adapt to controlled language. You have to train your writers and editors. You need to spend money on new software. Writers feel that they’re losing creativity and aesthetics.

Dave emphasised that technical documentation is not the place for creativity and aesthetics.

A comment from me: Hah, quite a different view from mine in my presentation tomorrow! 🙂

All about STE

Next, Dave took an in-depth look at STE specifically. He touched on the advantages of using an existing controlled language and looked at some of the specific rules of STE. He gave some interesting examples of words (such as “follow” and “test”) that we commonly use in technical documentation but that are not accepted in STE, in the way we usually use them. It was interesting to see the reasons why each word is not right, according to STE.

Adopting STE

Dave walked us through the steps we would need to take, to adopt STE. You need to buy the STE standard. Note that you can get a free personal copy from Boeing.

You also need to get some software tools, develop a corporate dictionary and train your writers and editors.

Building the corporate dictionary sounds like a long and fairly complex task. It also sounds very interesting, something a linguistics nut like me would love to do. The STE standard gives guidelines. Still, I’d say it’s a big undertaking.

Dave gave us some links to information about training and about good software tools. The tools offer text mining, rule checking and word checking. Some of the tools are plugins for common authoring tools, some are standalone tools. Dave showed us screenshots of some of them:

  • Textanz Concordance — analyses your text and shows you how often you use specific words, etc.
  • Concordance 3.2
  • Acrolinx IQ — a broad-spectrum product that you can plug into XMetal and various other tools. It does grammar, spelling and style checking, and checks your text against version-controlled language standards.
  • Tedopres HyperSTE — a fairly popular and comprehensive product that helps you to standardise your vocabulary and style. It also takes your extensions into account.
  • MAXit Checker — uses colour-coding and on-hover popups to give you information.
  • SMART Text Miner — extracts terminology from the document, keeps the context and builds a dictionary for you. You can then use that dictionary to help plug new terms into STE.
  • Boeing Simplified English Checker — a well-established tool, since Boeing is well invested in simplified English. One thing that stands out is that it detects errors in subject-verb agreement.

My conclusion

This was an interesting and informative talk from Dave. I’ve never used a controlled language in my writing. I’d be very interested in helping to set one up, because it gets you into linguistics and language use. I’m not so sure I’d like to use a controlled language. On the other hand, I do see the advantages. As Dave said, those are instantly visible.

AODC 2009 day 1 – Structured authoring

This week I’m attending the 2009 Australasian Online Documentation and Content Conference (AODC) in Melbourne. Today, the first day of the conference, the speakers have already given us a lot to think about.

Here are some notes I took from the session on structured authoring by Dave Gash of HyperTrain dot Com. I hope these notes are useful to people who couldn’t be at the conference this year. The AODC organisers will also publish the session slides and supplementary material once the conference is over.

A Painless Introduction to Structured Authoring

In this session Dave Gash discussed the benefits and pitfalls of structured authoring as opposed to the more traditional linear narrative format. He also touched briefly on DITA as the prime technology for a structured authoring environment.

When introducing Dave, Tony Self said, “Dave is known for his shirts, and he hasn’t let us down today.” Dave was wearing a black shirt imprinted with colourful guitars of all sorts. This set the tone for Dave’s live-wire style of presentation. He moved around the stage, chatting to and taunting people in the audience, while at the same time conveying lots of information.

Dave’s session covered the following points: “Look at paradigm shift from linear narratives to structured authoring. Compare and contrast the two methods. Explore structured authoring methodology. Look at some code examples.” He laughed, “This is the one and only time I’m going to use that phrase ‘paradigm shift’. I hate that phrase but it’s appropriate in this case.”

Here’s a good definition of structured authoring:

“Structured writing is a set of publishing workflow practices that lets you define and enforce consistent information structure and facilitates content development, sharing and reuse.”

We need to differentiate between structured authoring (the methodology we use to organise and structure information) and XML (the technology we use to implement the plan).

These are some of the problems Dave mentioned concerning the linear narrative format:

  • There’s too much repetition, rewriting, local formatting. Not enough content re-use.
  • Authors spend too much time doing things that are not writing, but are required for production of the documentation.

He listed the advantages that structured authoring can provide, including:

  • Better control over content versions.
  • More efficient use of a writer’s time.
  • Easier sharing of content across different media and different formats.
  • And more. Take a look at Dave’s presentation slides when the AODC makes them available.

Contrasting linear versus structured authoring, in linear content authoring:

  • Content is authored in a WYSIWYG editing tool.
  • Loose standards allow tweaking.
  • Structure depends largely on output medium.
  • Content and format are intertwined.
  • Re-use of content is done via copy and paste.

On the other hand, in structured authoring:

  • Content is authored in a WYSIOO (What You See is One Option) editing tool.
  • Strict standards do not allow tweaking.
  • Structure is completely independent of output.
  • Authors cannot determine final appearance — content and format are separate.
  • Re-use of content is accomplished via cross-references.

To sum it up: The goal is valid content (structured) instead of attractive output (linear).

There are a number of benefits on the corporate and management side, such as improved document quality, content re-use, higher author productivity, more flexibility for varied output devices. In a nutshell: savings in personnel, software, support and maintenance costs.

For the writer/author, there are benefits and challenges. On the one hand, there are new tools and new procedures to learn, new job responsibilities that are narrower and less flexible than before, no control over formatting and publishing of the documentation. On the positive side, the change will keep your skill set current. Content is still king and you get more time to write it. No more copy and paste, and no more chasing around to find all the repetitions if you have to change something.

Dave also said you can throw away your responsibilities for formatting and publishing the documentation. As content developer, you are responsible for the content only. So if something breaks, it’s not your problem. I have to admit that I disagree with this point 😉 I guess that I enjoy the “holistic” approach to document production, where the writer does have a say in the presentation. So if my team were to move towards structured authoring, I’d definitely train myself up on the structure and publishing side as well as the content development skills. But I do recognise the need for content to be able to squeeze into all sorts of output formats and media. So I see the benefits of content being format- and medium-agnostic. I guess I’d fit into a structured authoring environment quite easily, once I started getting my sense of satisfaction from producing perfect content in an efficient environment.

Next, Dave showed us some code examples of HTML (representational markup) as opposed to XML (semantic markup). He explained how in linear authoring, the typography of a document defines its information structure while in structured authoring the tags define the information structure.
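A minimal illustration of that contrast (my own example, not one of Dave's slides):

```xml
<!-- Representational markup (HTML): the tags describe appearance -->
<p><b>Warning:</b> <i>Unplug the device before opening the case.</i></p>

<!-- Semantic markup (XML): the tag describes meaning; a stylesheet decides the appearance -->
<warning>Unplug the device before opening the case.</warning>
```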

Dave gave us a real-world example: A technical writer writes a feature list, showing the set of features provided by a particular piece of software. This feature list might end up as an introduction to a user guide, as marketing literature, in a press release, in a software review site fact sheet, etc. It looks different in each output format, but it’s written only once.

Now Dave dived into the technical details of XML, the technology most used to implement structured authoring. He explained what a schema is and showed us some schema code. The schema controls what the editor allows you to enter: the editor validates your content against the schema.
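For instance, here is a toy schema fragment (a DTD, used as a stand-in for whatever schema language Dave's examples used) that forces every topic to have a title followed by at least one paragraph:

```xml
<!-- A conforming editor would refuse a topic with no title, or with a title after a paragraph -->
<!ELEMENT topic (title, p+)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT p    (#PCDATA)>
```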

Dave showed us the workflow when using a linear narrative, and contrasted it with a structured authoring workflow. In linear authoring, the writer writes the content into a single tool, maybe interacting with a CMS. Then the writer instructs the tool to publish the content into different formats. In a structured authoring workflow, the writer writes the content into the editing tool, very likely working with a CMS. Then the information architect uses a structure tool and a style tool to structure the content. Then the publisher uses a tool to combine the style, structure and content to produce the different output formats. So there are more clearly defined roles and job responsibilities.

In structured authoring, information chunking becomes vital. Technical writers already chunk information, i.e. divide it into logical pieces. For structured authoring:

  • Think in smaller chunks.
  • Consider where the information may be re-used.

Now Dave touched on DITA, the most successful XML standard for structured authoring. The DITA Open Toolkit is free, and there are a number of free plugins to extend the toolkit. For example, there’s the GUI publishing interface, WinANT, developed by Tony Self.

There are a number of authoring/publishing tools. Dave’s favourites are:

  • JustSystems XMetal
  • PixWare XMLMind Editor
  • OxygenXML <oXygen/>

Other well-known tools are starting to support DITA to a certain extent:

  • FrameMaker
  • RoboHelp
  • Flare
  • Author-IT

The question time at the end of the session prompted a number of interesting comments, including these:

  • Emily observed that the publishing and structuring side of document production may not be a full-time job, especially in smaller shops. So it may be quite possible for one person to fill all three roles — content author, information architect and publisher.
  • How long does it take to move to a structured authoring environment? The answer depends on a lot of things, such as the amount of legacy documentation that needs converting and the level of enthusiasm in the existing team. There are a number of places in the States that have taken a year or more to get the first project out the door, and that’s without converting the existing documentation.
  • With content re-use over multiple output formats, is there a danger of having the different output formats looking too much the same? Answer: Yes, there is a danger. It’s up to the formatting people to decide whether they want the content to look the same or different in the different locations.

Thank you Dave for a great introduction to structured authoring.
