Blog Archives

Adapting content for usability expectations at stc17

This week I’m attending STC Summit 2017, the annual conference of the Society for Technical Communication. These are my notes from one of the sessions at the conference. All credit goes to the presenter, and any mistakes are mine.

Kirk St.Amant presented a session titled “Prototypes of Use: Adapting Content to the Usability Expectations of Different Contexts”.

Kirk has recently been working on “the auto manual phenomenon”. Think about opening up an auto manual and using the instructions on how to change a tire. Auto companies get plenty of complaints from customers about these particular instructions. At first the auto companies were puzzled, because user testing had shown the instructions were correct and easy to follow. After further investigation, it turned out that people couldn’t use the instructions, packaged as they were in a 450-page manual. Such a large book isn’t designed to be read on a highway, in the dark, in the pouring rain. It doesn’t lie flat.

In other words, it’s a question of context. Where do readers actually use the doc? We need to focus on the delivery mechanism.

Consider how people use and process information. We associate a particular verbal representation with a physical object. Taking this further: The prototype that we use when we identify something may cause us to deliver the wrong product.

Kirk asked us to play a game, based on Physics or Maths textbooks. He asked us to describe such a book. The audience was unanimous about how such a book would look: Big, hardcover, blue or grey, with atoms drawn on the cover. We concluded that if we saw a thin pamphlet, we wouldn’t identify it as a Physics textbook. Similarly, we described the concept of a classroom.

When someone asks you to create a manual, you’re predisposed to create something that looks like a textbook, and probably something that fits into a classroom.

Instead, we should think: I need to create something that shares information in a particular setting.

We tend to use this formula when thinking of usability:

Content + audience = design

Instead, we should focus on:

Content + context [audience + setting] = design

Kirk then did a deep dive into defining the context of use, using a series of formulae and process maps to define the concepts and flows, illustrated by real-world examples.

Some cool terms:

  • a prototype jam – what you experience when you get stuck because some part of a familiar workflow is suddenly absent.
  • context mapping – a process for describing the objective, setting, and sequence of actions for various contexts (objects, individuals, access points, exit options); see the sketch after this list.
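Kirk described context mapping as a process rather than a data format, but as an illustration, the elements he listed could be captured in a simple structure like the one below. The field names and the roadside example are my own assumptions, not Kirk’s.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class UsageContext:
        """One entry in a context map: where and how readers actually use the content."""
        objective: str                                           # what the reader is trying to achieve
        setting: str                                             # the physical situation of use
        actions: List[str] = field(default_factory=list)         # sequence of actions
        objects: List[str] = field(default_factory=list)         # objects present in the setting
        individuals: List[str] = field(default_factory=list)     # people involved
        access_points: List[str] = field(default_factory=list)   # how readers reach the content
        exit_options: List[str] = field(default_factory=list)    # how the interaction can end

    # Illustrative example: the tire-change scenario from the auto manual phenomenon
    roadside = UsageContext(
        objective="Change a flat tire",
        setting="Highway shoulder, in the dark, in the pouring rain",
        actions=["find the instructions", "locate the tools", "follow the steps one-handed"],
        objects=["jack", "spare tire", "wheel brace"],
        individuals=["driver, usually alone"],
        access_points=["450-page printed manual", "phone"],
        exit_options=["tire changed", "roadside assistance called"],
    )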

In response to a question about applying the results of a usability study, Kirk described how his team adapted the instructions for measuring blood pressure, based on patients’ experience of using a blood pressure cuff while trying to read a heavy manual at the same time in order to report the results. Instead, the team substituted a voice-activated mechanism for reporting the results.

The idea is to get engineers/designers to think about context at the start of the design process, so that content conforms to context.

Thanks Kirk for an educational, authoritative look at modelling usability and context.

What readers want at stc17

This week I’m attending STC Summit 2017, the annual conference of the Society for Technical Communication. These are my notes from one of the sessions at the conference. All credit goes to the presenter, and any mistakes are mine.

Yoel Strimling presented a session titled “So You Think You Know What Your Readers Want?” He presented the results of a study that measured and compared how writers and readers define documentation quality, as well as how writers assume readers define it.

Yoel started by quipping that the most important word in the docs world starts with an F and ends with a ck: … Feedback.

The most important thing for good documentation is to know how well our docs work. For that, we ideally need direct, actionable feedback from readers. We have personas, user stories, use cases, and journey maps to help us make educated guesses – but that’s not the same as direct, actionable feedback.

The definition of doc quality must:

  • Come from a reader’s point of view.
  • Be based on empirical, research-based feedback.
  • Use clear and unequivocal terminology.
  • Cover all possible aspects of quality.

Yoel found a study from 1998, based on hundreds of interviews with information consumers, to find out what they wanted from their information. The researchers came up with a framework for defining information quality based on 4 categories: intrinsic, representational, contextual, and accessibility. Those categories were subdivided into 15 dimensions. Yoel talked us through an amusing illustration of these dimensions, using them to determine the relative quality of two pens. Part of this illustration involved losing his pen, which was tucked behind his ear!

Conclusion: To be of high quality, docs must be intrinsically good, clearly represented, contextually appropriate for the need, and accessible to the reader.

Yoel then created a survey, asking readers to rank the docs based on the 15 dimensions. He sent another survey to writers, asking them how they thought readers would rank the docs based on those same dimensions. The top results were very similar for each group:

  • Top qualities as judged by readers: Accurate, easy to understand, and relevant.
  • As judged by writers: Relevant, accurate, easy to understand.

But Yoel found 5 dimensions where writers significantly underestimate how much the dimension matters to readers. In particular, writers underestimate the dimension “valuable”, which means “beneficial and providing advantages from its use”. Readers want us to give them docs that help them do their job better!

How do we help users do their job better? Yoel put this question to the audience. These were some of the suggestions:

  • Make the tools and the docs easier to use.
  • Provide tips.
  • Give the users the simplest flow, and avoid bombarding them with options.
  • Pare down the information.
  • Base the docs on the readers’ goals.
  • Provide videos and graphics.
  • Give the wider context when relevant.

Client’s language at stc17

This week I’m attending STC Summit 2017, the annual conference of the Society for Technical Communication. These are my notes from one of the sessions at the conference. All credit goes to the presenter, and any mistakes are mine.

Chrystal Mincey’s session was called “Know Your Client’s Language”. Tech writers’ clients have different styles, and may prefer their writers to follow a particular style guide.

Define the client

Take a look at the client’s reporting structure: who your boss reports to, and the chain above them. The requirement that you’re tackling may come from higher up the chain. Look out also for conflicts of interest, and for which division takes priority, especially if different divisions are competing for your time.

Client expectations

Find out how much time your client expects from you, and the times of day you need to work. Is flexitime an option? Find out the client’s end goal, and how your project contributes to it. You’re there to make your client look good as well as yourself. Check the deadlines and milestones, and whether they’re negotiable.

Confirm your responsibilities, and whether there are other tech writers to cover while you’re out.

Client’s guidelines

Determine whether your client has a style guide, or whether they use a particular industry style guide. If there isn’t one, consider developing one. This may be time-consuming, but it gives you more control. Adhere to templates, if they exist.

Recipe for success

The end goal is for you to be successful, as well as for your client to be successful. Know what time your client arrives, learn their routine, and adjust your work practices to it. Is it OK to approach your client at 9am or should you wait until they’ve had their coffee?

Research in all ways possible.

The client is always right, but may be open to change

See things from the client’s viewpoint. Learn their project as a whole, including how to work with the developers. Remember that everyone is working towards the same goal. If there are conflicts, ask the client to see things from your side, and remind them you’re working towards the same goal as they are.

Thanks Chrystal for a spotlight on how to work with a client.

Internet of Things at stc17

This week I’m attending STC Summit 2017, the annual conference of the Society for Technical Communication. These are my notes from one of the sessions at the conference. All credit goes to the presenter, and any mistakes are mine.

The IoT (Internet of Things) is a hot topic, so I was keen to hear about documenting an IoT product. Michael Harvey presented a case study, “Documentation Support for an IoT Product”.

An Internet of Things is any collection of devices that contain electronics and software for collecting data and for sharing data amongst the devices. An IoT can generate a huge volume of data, in the form of continuously flowing events. Michael described the documentation for a product that developers use to process those events and data.

Michael works on SAS’s flagship product for the IoT. Michael’s session was about what he learned while working on this project, and how we can apply those principles to the projects we work on.

What is the IoT

The term IoT was coined by Kevin Ashton. It started when he and his team linked RFID tags through the internet. Other people have built on this. Objects and devices send out signals, which are captured, analysed, and distributed over the internet.

Some use cases for the IoT:

  • Sensors on oil wells showing oil extraction rates, temperature, pressure.
  • Sensors on water meters, used for billing.
  • Sensors on truck engines, for engine diagnostics and driver behaviour. This saves logistics companies a lot of money.

The frequency of events could be 1000s of times per minute, or much lower, depending on the use case.

The product

The product Michael worked on was SAS Event Stream Processing (ESP) – a set of tools for building apps that process and analyse events, and perform real-time analysis of that data.
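Michael didn’t show the product’s API in this session, but the core idea – a continuous flow of events processed as it arrives, rather than static data queried later – can be sketched in a few lines. This is an illustrative example only; the sensor names, fields, and threshold are my own assumptions, not SAS ESP code.

    from collections import deque
    from statistics import mean

    def process_stream(events, window_size=10, threshold=95.0):
        """Illustrative event stream processing: keep a sliding window over a
        continuously flowing series of sensor readings and raise an alert in
        real time, instead of storing everything and analysing it afterwards."""
        window = deque(maxlen=window_size)
        for event in events:                      # events keep arriving
            window.append(event["temperature"])
            if len(window) == window_size and mean(window) > threshold:
                yield {"alert": "temperature trending high", "sensor": event["sensor"]}

    # A finite stand-in for an endless stream of engine sensor events
    sample = ({"sensor": "engine-7", "temperature": 90 + i} for i in range(20))
    for alert in process_stream(sample):
        print(alert)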

Michael discussed some use cases for the ESP system:

  • Monitoring rogue trades on capital markets.
  • Detecting fraud and analysing cyber security.
  • And more.

The documentation

Michael inherited an overview and a user’s guide – just 2 documents.

The overview was very brief – just 1 line of text and 3 lines of code.

The user’s guide was a little unstructured, with every heading at the same level, and page after page of C++ code. That clued him in about who the “user” was. The language was a trifle convoluted.

Michael decided he was in a little over his head. He went looking for a graphic to give him a conceptual overview. It didn’t help much. The text on the diagram wasn’t consistent with the text in the manual. There was no indication of how events flowed into or out of the diagram.

The documentation and diagram didn’t tell a story: what do I do, what do I do first, then what, and for what reason?

How did Michael tackle this problem?

  • He spent a lot of time getting to know the docs – annotating the user’s guide and rationalising the use of terms, so that he could learn while writing.
  • Then he scheduled time with the developers to resolve his questions.

One trick was to figure out who the audience was – it turned out to be both developers and people who use ESP systems to analyse data. Writing for developers is an art. They have little tolerance for documentation. Build on what they already know rather than repeating it – and you need to spend time with them to find out what that is. Developers skim documentation, so use terms consistently to make it clear what you’re talking about. Make sure you provide easily accessible reference material.

A challenge Michael had was that the product was in flight as he was writing the docs. He therefore had to continually revise the docs. He was also handling new material for a new XML modelling layer and user interface. He had to learn complex technical concepts, like connectors and adapters.

Learning how to read code is not easy, and is a bit boring. But it pays dividends. In particular, you can pull out snippets of code to put in the docs. Michael also became familiar with various technologies such as YARN (Hadoop) and Apache Camel. He needed to understand enough about these technologies to help ESP customers use them in the ESP context.

Michael emphasised the importance of differentiating between the roles of the engineers and the technical communicators. Both roles have equal value. Engineers leave gaps in docs, because they assume their readers know certain things. The tech communicator’s role is to overcome this “curse of knowledge”.

It’s important to build trust with the developers, by spending as much time as you can learning and studying, before you ask questions.

After all this work, Michael had the story for his documentation: streaming data and events flowing through the product were the foundation of the rest of the docs. Michael discussed the nature of streaming data, how it differs from static data, the models that handle such data, and how the events flow through ESP. He was now able to improve the diagram giving a conceptual overview of the ESP system. He called it the event stream processing diagram v2.0. His diagram started showing up in educational materials, which was a clue that he’d done a good job. Later, he improved the diagram even more.

A learning point is that tried and true tech writer techniques can be used to tackle any complicated technical material.

What’s next for the IoT

IoT is useful for precision agriculture, transport, airlines, healthcare, and the security industry.

Machine builders need to start thinking of themselves as information vendors. Their machines collect and share feedback from customers.

Thanks Michael for an interesting session about the IoT!

Intelligent content at stc17

This week I’m attending STC Summit 2017, the annual conference of the Society for Technical Communication. These are my notes from one of the sessions at the conference. All credit goes to the presenter, and any mistakes are mine.

Val Swisher presented a session called “The Holy Trifecta of Intelligent Technical Content”. The trifecta comprises structured intelligent technical content, terminology management, and translation memory. With these three, technical writers can efficiently produce content for multiple channels, for an international audience.

Val explained each of the three elements (structured content, source terminology management, and translation memory) and the magic that happens when you use them all together. Using the three together makes content development better, cheaper, and faster.

Structured authoring

Val walked us through the original content development process, where a writer wrote the content, then passed it off for translation and desktop publishing. This process was slow, expensive, and gave the writer little control.

In a structured environment, the author writes smaller chunks of content (sometimes called topics) and checks them into a CMS. The information product (PDF file, web page, book, etc.) is a collection of these chunks in a certain order. In theory, you should be able to combine the chunks in different orders and arrangements for different content products.

Structured authoring should therefore produce more deliverables through content reuse, create consistency, and support multichannel publishing.

The content itself is separated from the eventual publication style and medium. Desktop publishing is a thing of the past.

Each individual chunk is translated independently, and each chunk then sits in the database with its related translations.
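Val described this model conceptually rather than showing a particular tool. As a rough illustration of the reuse idea – chunks written once, stored with their translations, and assembled into deliverables by reference – here is a minimal sketch; the topic names and deliverables are made up for the example, not from Val’s talk.

    # A minimal illustration of chunk-based reuse: topics are written once,
    # stored with their translations, and assembled into deliverables by ID.
    topics = {
        "install":   {"en": "Install the product...",   "de": "Installieren Sie das Produkt..."},
        "configure": {"en": "Configure the product...", "de": "Konfigurieren Sie das Produkt..."},
        "uninstall": {"en": "Uninstall the product...", "de": "Deinstallieren Sie das Produkt..."},
    }

    deliverables = {
        "getting-started-guide": ["install", "configure"],
        "admin-guide":           ["install", "configure", "uninstall"],
    }

    def build(deliverable, lang="en"):
        """Assemble an information product from its ordered chunks."""
        return "\n\n".join(topics[topic_id][lang] for topic_id in deliverables[deliverable])

    print(build("getting-started-guide", lang="de"))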

There are a few problems to solve. In particular, terminology. For example, what do you do to a button: Click, Click on, Tap, Select, Hit… We’re not consistent in our use of terminology in our source.

Source terminology

We need to manage our source terminology. People do it in various ways, such as via a document or style guide, via reviews (tribal knowledge), or via a specific tool.

Val emphasised the importance of picking one term for a particular thing or concept. For example, when talking about a dog, choose a word: dog, pooch, hound – it often doesn’t matter which term you pick, provided you’re consistent.

No-one reads style guides! Everyone wants to, because we all want to do a great job. But no-one has the time. Also, it’s hard to know whether the word you’re about to write is a managed term.

We need a way to manage the words we’re using and how we’re using them – one that we don’t have to go and look for. The information must be pushed to us.

It’s almost better not to have structured authoring if you don’t manage your terminology: structured authoring splits topic development amongst a group of writers, which leads to greater problems with consistency. Val showed us a screenshot from an automated terminology tool, which allows you to define preferred terms, banned terms, and so on, and then prompts authors when they use a deprecated word.
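As a rough illustration of what that kind of check does – not the tool Val demonstrated, and with a term list that is entirely my own assumption – a script can flag deprecated terms and prompt the writer with the preferred one:

    import re

    # Preferred term mapped to the terms we've decided not to use
    terminology = {
        "click": ["click on", "tap", "hit"],   # pick one term for the action and stick with it
        "dog":   ["pooch", "hound"],
    }

    def check_terms(text):
        """Flag deprecated terms and suggest the preferred term instead."""
        findings = []
        for preferred, banned in terminology.items():
            for term in banned:
                for match in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
                    findings.append((match.group(0), preferred))
        return findings

    for found, preferred in check_terms("Click on the button, then hit Save."):
        print(f"'{found}' is a deprecated term; use '{preferred}' instead.")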

Translation memory

Val asked the audience whether we had translation memory (TM), whether our company owned the translation memory, whether we had more than one translation vendor, and whether those vendors shared the same memory. She stressed the importance of owning your own translation memory.

Translation memory (TM) is one of the automated tools that the translation vendor uses. If something in the source content has already been translated, the tool pops up the translation. This is because the translations are stored in a database called the translation memory. The bits of source content are stored as translation units, which are phrases, usually more than a word.

This makes translation cheaper. If you say the same thing in exactly the same way each time you say it, the tool pulls up the same translation as used the first time. This is called a 100% match. Note that a 100% match doesn’t cost zero dollars. To have no charge, you have to have an in-context match.
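Under the hood, a TM is essentially a lookup from source segments (translation units) to stored translations. The toy version below – an illustration of the concept, not a real TM tool, and with made-up example strings – shows why repeating exactly the same wording earns a 100% match:

    # Toy translation memory: source segments mapped to stored translations.
    # Real TM tools also do fuzzy matching and in-context matching.
    tm = {
        "Click Save to store your changes.": "Cliquez sur Enregistrer pour stocker vos modifications.",
        "The installation is complete.": "L'installation est terminée.",
    }

    def lookup(segment):
        """Return (match_type, translation); only exact (100%) matches in this sketch."""
        if segment in tm:
            return "100% match", tm[segment]
        return "no match", None   # a human translator is needed (and billed)

    print(lookup("Click Save to store your changes."))   # same wording -> 100% match
    print(lookup("Click Save to keep your changes."))    # reworded -> no match, costs more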

Val emphasised that what’s in the TM should be pushed to the writers, although she knows of very few companies that are doing this. That way, writers would know what’s already been translated and could make sure they use the same wording when developing new content.

Ideally, there’d be an automated link from the translation memory to the terminology management system. But that’s complicated, and doesn’t happen often.

Tying them together

Val discussed the intersection of three technology areas:

  • Structured authoring – write it once, use it many times.
  • Terminology management – say the same thing the same way, every time you say it. Be as boring as you can and as simple as you can.
  • Translation memory – use already-translated terms in your source content.

This takes a lot of setup and maintenance, but it’s worth it.

Conclusion

Val’s presentation was funny, engaging, and informative. She had the audience nodding and laughing throughout the session. Thanks Val!
