Can technical writers do other types of writing, in particular fiction? Oh yes indeed! I’ve just finished reading Dave Gash’s new science fiction novel, The ELI Event. It’s a lot of fun.
I fell in love with the characters, including the non-human ones. I chewed my nails in the tense moments, cried and laughed in the good moments, gritted my teeth when things went wrong. When it was all over I felt great satisfaction at the way things turned out, coupled with that sweet sorrow you get when you finish a good book.
Dave is a friend of mine. I met him at a technical communication conference two years ago, and we’ve bumped into each other at a couple of conferences since. He’s great. His other big talent is compiling and hosting geek trivia quizzes. 😉
At first I was worried that knowing Dave would spoil my experience of the book. Would I hear his voice speaking through the text, preventing that essential suspension of disbelief that good sci fi demands and facilitates? Even worse, would I feel obliged to enjoy the book? The answer is “No” on all counts. The book grabbed me from page 2 and pushed me through all the way to the end.
Why page 2? Well, it took me most of page 1 to forget my worries about knowing the author. I’m sure the book will grab you from page 1!
Here’s a challenge 😉
Can you find anything in The ELI Event to indicate that a technical writer wrote it?
Details of the book
Dave Gash provides the book in paperback and in Adobe ePub format. You can order it from his website: The ELI Event.
Title: The ELI Event
Author: Dave Gash
Publisher: Dave Gash with Xlibris
A couple of weeks ago I attended AODC 2010, the Australasian Online Documentation and Content conference. We were in Darwin, in Australia’s “Top End”. This post is my summary of one of the sessions at the conference and is derived from my notes taken during the presentation. All the credit goes to Dave Gash, the presenter. Any mistakes or omissions are my own.
This year’s AODC included a number of useful sessions on DITA, the Darwin Information Typing Architecture. I’ve already written about Tony Self’s session, an update on DITA features and tools, and about Suchi Govindarajan’s session, an introduction to DITA.
Next, Dave Gash presented one of the more advanced DITA sessions, titled “Introduction to DITA Conditional Publishing”.
At the beginning of his talk, Dave made an announcement. He has presented in countries all over the world, many times, and he has never ever ever before done a presentation in shorts!
Introducing the session
To kick off, Dave answered the question, “Why do we care about conditional processing?” One of the tenets of DITA is re-use. You may have hundreds or even thousands of topics. In any single documentation set, you probably don’t want to publish every piece of the documentation every time.
Conditional processing is a way to determine which content is published at any one time.
Dave’s talk covered these subjects:
- A review of DITA topics, maps and publishing flow
- The use of metadata
- The mechanics of conditional processing
- Some examples
Metadata and the build process
Dave ran us through a quick review of the DITA build process and the concept of metadata. Metadata has many uses. Dave talked specifically about metadata for the control of content publication.
Metadata via attributes
There are a number of attributes available on most DITA elements. These are some of the attributes Dave discussed:
- audience – a group of intended readers
- product – the product name
- platform – the target platform
- rev – product version number
- otherprops – you can use this for other properties
Using metadata for conditional processing
Basically, you use the metadata to filter the content. For example, let’s assume you are writing the installation guide for a software application. You may store all the instructions for Linux, Windows and Mac OS in one file. When publishing, you can filter the operating systems and produce separate output for each OS.
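To make that concrete, here’s a minimal sketch of what the marked-up source might look like. (This example is mine, not from Dave’s slides, and the file names are invented.)
<task id="install">
  <title>Installing the application</title>
  <taskbody>
    <steps>
      <!-- Each step carries a platform attribute, so the build can filter per OS -->
      <step platform="linux"><cmd>Run the installer script, <filepath>install.sh</filepath>.</cmd></step>
      <step platform="windows"><cmd>Double-click <filepath>setup.exe</filepath>.</cmd></step>
      <step platform="mac"><cmd>Drag the application into your Applications folder.</cmd></step>
    </steps>
  </taskbody>
</task>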
In general, you can put metadata in these 3 locations (layers):
- maps – metadata on the <map> element. You might use metadata at this layer to build a manual from similar topics for specific versions of a product.
- topics – metadata to select an entire topic. You might use metadata at this layer to build a documentation set for review by a specific person.
- elements – metadata on individual XML elements inside a topic. You might use this metadata to select steps that are relevant for beginners, as opposed to intermediate or advanced users.
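As an illustration of the map layer (again, my own invented sketch, with made-up product names): you can put the attributes on the <topicref> elements inside the map, so that different builds pull in different topics.
<map>
  <title>Widget Installation Guide</title>
  <topicref href="intro.dita"/>
  <!-- Only the matching edition's topic survives the filtering -->
  <topicref href="install_lite.dita" product="widget-lite"/>
  <topicref href="install_pro.dita" product="widget-pro"/>
</map>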
Dave gave us some guidelines on how to decide which of the above layers to use under given circumstances.
Defining the build conditions to control the filtering
Use the ditaval file to define the filter conditions. This file contains the conditions that we want to match on, and the actions to take when they’re matched. The build file contains a reference to the ditaval file, which is what makes the conditions drive the build.
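For example, with the DITA Open Toolkit you can pass the ditaval file to the Ant build via the args.filter parameter, something like this (my own example, not from the session, so do check the parameter names against your toolkit version):
ant -Dargs.input=userguide.ditamap -Dtranstype=xhtml -Dargs.filter=conditions.ditaval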
Dave talked us through the <prop> element in the ditaval file, and its attributes:
- att – attribute to be processed
- val – value to be matched
- action – action to take when match is found
A hint: you can use the same attribute in different layers (map, topic and element), and you don’t need to tell the build where to look. The build finds the attributes wherever they occur, based on the <prop> elements in the ditaval file.
Next we looked at the “include” and “exclude” actions. Remember, the action is one of the attributes in the <prop> element, as described above. Here’s an example of an action:
<prop att="audience" val="novice" action="exclude" />
Dave’s recommendation, very strongly put 🙂, is:
Don’t use “include”. Stick to “exclude”.
The basic rule is: Everything not explicitly excluded is included.
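So a small ditaval file built on that rule might look like this (my own sketch): it excludes the novice content and the Windows and Mac variants, and everything else, including the Linux content, is published by default.
<val>
  <prop att="audience" val="novice" action="exclude"/>
  <prop att="platform" val="windows" action="exclude"/>
  <prop att="platform" val="mac" action="exclude"/>
</val>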
Dave’s final recommendation
Go get DITA and play with it!
It was great to have a focus on the conditional publishing side of DITA. It’s something I haven’t had a chance to get into before. Now I know the basics, which rounds off the DITA picture for me. Thank you Dave for an entertaining and information-packed talk.
This week I’m at AODC 2010: The Australasian Online Documentation and Content conference. We’re in Darwin, in the “Top End” of Australia. This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Dave Gash, the presenter. The mistakes and omissions are all my own.
Dave Gash gave an information-rich and focused talk titled “The Power of Controlled Language”. It was about controlled languages, specifically STE (Simplified Technical English). He covered the following aspects of a controlled language:
- What it is.
- Why you may want to use it.
- Some examples.
- The software tools you can use.
A true story
Dave started with a true story. As an experienced traveller, he likes to make sure that he doesn’t get overcharged for things. One of the things he does when he checks in to a hotel is to ask the front desk to turn off the porno channel. “That way I can’t get to it and they can’t accidentally bill me for it. As if I’d watch porno on the hotel TV anyway. That’s what the wireless broadband is for.”
A while ago, when Dave was checking in to a hotel, he asked:
“Is the porno channel in my room disabled?”
The laconic answer was, “No mate, it’s just regular porn.”
This story is relevant to Dave’s topic. What a controlled language aims for is “one word, one meaning”. In conversational English, we use the word “disabled” in two ways. This can result in miscommunication, as illustrated in Dave’s story.
What is controlled language?
A controlled language is a highly structured, limited language that is intended to make technical documentation easier to read and understand. It’s always a natural language, not a contrived or artificially constructed language.
The characteristics of a controlled language:
- Simplified grammar and style.
- Limited set of words and meanings.
- Thesaurus of unapproved terms and their alternatives.
- Strict guidelines for adding new terms, e.g. terms needed for your industry or company.
The basis for most controlled languages today is STE (Simplified Technical English). The official specification is ASD-STE100.
Why should we care about controlled languages?
English is a rich, subtle language. This means that it’s also complicated.
Complexity confuses readers and makes the writers’ work harder. It makes translation more difficult, more expensive and more prone to errors. It also opens up the possibility of legal confusion and liability.
Normalised vocabularies benefit everyone.
As another illustration of ambiguity and the problems it can cause, Dave told a story about a snowplow operator who had been told to “clear the runway”. So he did, and caused a plane to abort its landing because there was a snowplow on the runway!
Disadvantages of controlled language
There are disadvantages too. A big one is resistance to change, from management and from writers. It’s time-consuming to adapt to a controlled language. You have to train your writers and editors. You need to spend money on new software. And writers feel that they’re losing creativity and aesthetics.
Dave emphasised that technical documentation is not the place for creativity and aesthetics.
A comment from me: Hah, quite a different view from mine in my presentation tomorrow! 🙂
All about STE
Next, Dave took an in-depth look at STE specifically. He touched on the advantages of using an existing controlled language and looked at some of the specific rules of STE. He gave some interesting examples of words (such as “follow” and “test”) that we commonly use in technical documentation but that are not accepted in STE with the meanings we usually give them. It was illuminating to see why each word is not right, according to STE.
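To give you a taste (this example is mine, based on what I remember of the standard, so treat it as illustrative rather than authoritative): STE approves “test” as a noun but not as a verb, and approves “follow” only in the sense of “to come after”. So a sentence like “Follow the instructions to test the pump” would become something like “Obey the instructions to do a test of the pump”.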
Dave walked us through the steps we would need to take to adopt STE. You need to buy the STE standard. Note that you can get a free personal copy from Boeing.
You also need to get some software tools, develop a corporate dictionary and train your writers and editors.
Building the corporate dictionary sounds like a long and fairly complex task. It also sounds very interesting, something a linguistics nut like me would love to do. The STE standard gives guidelines. Still, I’d say it’s a big undertaking.
Dave gave us some links to information about training and about good software tools. The tools offer text mining, rule checking and word checking. Some of the tools are plugins for common authoring tools, some are standalone tools. Dave showed us screenshots of some of them:
- Textanz Concordance — analyses your text and shows you how often you use specific words, etc.
- Concordance 3.2 — another tool for the same sort of word-frequency analysis.
- Acrolinx IQ — a broad-spectrum product that you can plug into XMetaL and various other tools. It does grammar, spelling and style checking, and checks your text against your controlled-language standards.
- Tedopres HyperSTE — a fairly popular and comprehensive product that helps you to standardise your vocabulary and style. It also takes your extensions into account.
- MAXit Checker — uses colour-coding and on-hover popups to give you information.
- SMART Text Miner — extracts terminology from the document, keeps the context and builds a dictionary for you. You can then use that dictionary to help plug new terms into STE.
- Boeing Simplified English Checker — a well-established tool, since Boeing is well invested in simplified English. One thing that stands out is that it detects errors in subject-verb agreement.
This was an interesting and informative talk from Dave. I’ve never used a controlled language in my writing. I’d be very interested in helping to set one up, because it gets you into linguistics and language use. I’m not so sure I’d like to use a controlled language. On the other hand, I do see the advantages. As Dave said, those are instantly visible.