Blog Archives

STC Summit day 3 – Mobile app design

It’s day 3 at STC Summit 2012, the annual conference of the Society for Technical Communication. I’m attending a session called “Mobile App Design: The Language of Tiny and Touch”, by Joe Welinske.

Joe has been working with mobile apps for quite a while. He thinks that we’re at a really interesting point, a time that will change the way we do things. He compared this period with the arrival of the mouse in 1989, and the way that changed what we do.

Mobile means widely different screen dimensions (mostly small) and touch interfaces. People are interacting with the UI in new ways, so we need to look at software differently.

The interactions are changing very quickly. There are very few explicit standards. We need new techniques and processes, to deal with proprietary interactions.

Touch interfaces

We need a different vocabulary. Instead of “click this”, it’s “touch this” or “tap that”. But even more, we need to understand the full environment: people are often in motion when working with their software.

One technique that may be useful in this area is conditional processing. This is something we must look at in our tools.

Joe predicts that Apple will announce something new to do with touch interfaces in their upcoming conference.

Examples of touch devices:

  • Smartphones
  • Tablets: iPad
  • Trackpads
  • Touchpads

Writing help for user interfaces

Joe showed us the Chicago Tribune mobile app, which overlays a layer on the screen showing you how to use the touch interface. A member of the audience mentioned that the USA Today app does the same thing.

Joe thinks that all mobile apps need some form of user assistance. He works with clients that develop mobile apps, for example a mobile app that handles payroll on the move. That mobile app is a “sidecar”, in that it provides some of the functionality of the desktop app but not all. The users already knew how to use the desktop app, so the challenge was to bridge their existing knowledge and transfer it to the mobile app.

Some interesting facts and tidbits:

  • A member of the audience mentioned that you need to be careful when showing pictures of hands, or pointing fingers, as different gestures are considered rude in different countries.
  • Tablet dimensions allow more interactivity, which can mean a more complex UI to document.
  • An easter egg: Go to Speedtest.net on a mobile device and pull the speedometer down to see a cat. Keep pulling down to see it in various positions. This is a hidden gesture, just for fun.

The language of gestures

Finding the best word is very important. Work with your thesaurus and test the candidate words with your users, to see which ones work best for your customers.

It may take several days to design 4 words!

For example, the use of the word “hide” dramatically changed the way users thought of the interaction.

Remember that the choice of words may need to change depending on the mobile device in use.

Joe showed us some examples of gesture language in user assistance:

  • ToonPAINT
  • Weightbot
  • AutoCAD gives short textual instructions in a banner across the top of the screen. This is guided help that instructs users while they do the task. This is effective but takes a lot of resources to do.
  • Convertbot has a tutorial that shows popup bubbles describing the UI elements. Interestingly, they use the words “press the OK button” instead of “tap OK”. This may be deliberate use of generic language so the text works across platforms, or they may simply not have thought of changing it.
  • Calvetica
  • eBay have video tutorials, using live fingers. This works, but you need to consider how much you need to show. Also, the device itself tends to be obscured and in the background.

Vocabulary for touch and gestures

See the Touch Gesture Reference Guide. The authors went through a number of pages of documentation for various platforms, to analyse the terminology used for gestures. This is very useful.

Joe led us through a discussion of whether we would use specific terminology per platform, as chosen by the device vendors, or whether we would use generic language. The audience chose generic language. Joe said that sounds sensible, but consider the user: with generic language, a customer who has chosen a Windows phone may end up reading terminology that is more typical of an iPhone or Android phone, and your documentation may then use a different vocabulary from the rest of the help on that device.

A member of the audience pointed out that this terminology issue means that you must work closely with your translation manager. Joe showed a slide about translation issues for gesture terms. In some languages, you will probably end up with different translations for the same term, depending on the context.

Single sourcing

Conditional text will be a big part of our toolset.
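
As a minimal sketch, this is roughly how conditional text might look in a DITA-based toolchain (the element content and attribute values here are my own invented examples; other tools such as Flare use their own condition-tag syntax):

<!-- Topic source: the platform attribute marks device-specific variants. -->
<p platform="phone">Tap <uicontrol>Menu</uicontrol>, then tap <uicontrol>Settings</uicontrol>.</p>
<p platform="desktop">Click <uicontrol>Menu</uicontrol>, then click <uicontrol>Settings</uicontrol>.</p>

<!-- A ditaval file applied at build time excludes the variants you don't want. -->
<val>
  <prop att="platform" val="desktop" action="exclude"/>
</val>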

Joe works with HTML5 and CSS media queries as a solution: the device announces its screen dimensions and capabilities to the stylesheets, which can then do transformations based on the device’s requirements. He has stylesheets to cater for 4 device types (a rough sketch follows the list):

  • Phone
  • 7-inch tablet (Galaxy and Kindle)
  • 10-inch tablet (iPad)
  • Desktop
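
Something along these lines, though the breakpoint widths here are my own guesses rather than Joe’s actual values:

/* Phone */
@media only screen and (max-width: 480px) {
  .sidebar { display: none; }
  body { font-size: 110%; }
}

/* 7-inch tablet */
@media only screen and (min-width: 481px) and (max-width: 800px) {
  .sidebar { width: 25%; }
}

/* 10-inch tablet */
@media only screen and (min-width: 801px) and (max-width: 1024px) {
  .sidebar { width: 30%; }
}

/* Desktop */
@media only screen and (min-width: 1025px) {
  .sidebar { width: 30%; }
  body { max-width: 960px; }
}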

Device-specific controls

We need to consider device-specific controls in our documentation. For example, different Android devices have different button bars. Or a gaming device will have gaming controls instead of those used on a phone.

Voice

Things are moving so fast that voice will soon start making user assistance easier. Siri works fairly well, but it’s limited to the apps that Apple wants to support. There is talk that Apple will expose an API for Siri, so that other app developers can use Siri to give user assistance, either through a spoken response or by taking the person to a specific topic on the app developer’s web server.

Joe showed us an example of asking Siri how to write a cheque in Quicken, which returned a link to the Quicken documentation. This doesn’t really work today, because there is no API for Siri; Joe hacked the example together to show us what could happen!

Thanks Joe. This was an inspiring session.

AODC day 2: Optimising your Content for Google Search

This week I’m at AODC 2010: The Australasian Online Documentation and Content conference. We’re in Darwin, in the “top end” of Australia. This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Joe Welinske, the presenter. Any mistakes or omissions are my own.

Joe Welinske presented the last session on Thursday, titled “Optimising the Googleability of Your Content”. The subject was search engine optimisation (SEO). Joe is really excited about this topic. He thinks it’s one of the most important skills for a user assistance expert to acquire. Potentially, it also offers a good opportunity for consultancy.

SEO is complex, but we need to learn about it. Joe calls it “embracing the beast”: If people are going to be using Google, then we want our material to come out at the top of the search results.

These are the topics he covered:

  • Get all your content on a public-facing server.
  • Learn about Google Search.
  • Learn about SEO (search engine optimisation).
  • Find out how to use other search engines.
  • Create a custom Google Search for your UA (user assistance) material.

Getting your content onto a public-facing server

This is the single most important thing that has to happen: Your content has to be on a public-facing server.

There are ways to mirror your internal content onto public-facing servers. Joe talked about a few of them. Ultimately, the best way may be to migrate all content to web-based standards.

Security issues when publishing content on the web

Many organisations keep their content behind the firewall, or have privileged content requiring login. Security is a very real concern. Moving to a public-facing server may be a big change for traditional content developers, and a challenge for organisations that have up to now kept their documentation hidden from public view.

Organisations need to take a detailed look at all the help components they provide and see what can and what can’t be published on a public-facing server.

We should be aware that our users are using Google anyway, and we should ask ourselves what these users find when they do a Google search for information on our products.

We could put together examples of Google searches, to show management what people are finding when they can’t reach our own documentation. You can be sure that people are posting information about your systems to answer other people’s questions.

Google search indexing

Note that some content will not index effectively. Joe gave some tips on the type of content that Google can index well and the type that may cause problems.

Good candidates for Google indexing:

  • Web sites.
  • Web-based help, such as output generated from Flare or RoboHelp. Note that frames may make the content less favourable for indexing. Google has a hard time making sense of content in frames.
  • Eclipse Help is a good candidate, because it’s XHTML.
  • PDF is good, because Google has put effort into indexing it. Some PDF formats may be less effective.

What does not work well:

  • Microsoft Help (CHM and HLP) — This format has some HTML but is mostly locally-installed and uses a lot of proprietary code.
  • Apple Help, Oracle Help for Java and JavaHelp.
  • Flash — Google has done some work to be able to index text in Flash files, but not images and video.
  • Almost any proprietary markup — Google has problems interpreting it.

Bottom line: The more open standards you use, the better off you will be.

What about loss of context?

When we put together online help systems and context-sensitive help, we have control over the path the user follows. But when the user comes in via Google search, we don’t have that control.

Some tips from Joe:

  • Provide navigation elements on all pages.
  • Add branding, so that people know that they have found your site and thus the official documentation.
  • Include the date last updated and the version of the application that the help covers. (A simple sketch of such a footer follows this list.)
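
As a hedged illustration (the product name, version and date below are placeholders, not from Joe’s slides), a footer like this on every help page would cover all three tips:

<div class="help-footer">
  <p><a href="index.html">Help home</a> | <a href="search.html">Search the help</a></p>
  <p>Acme Payroll official documentation</p>
  <p>Covers Acme Payroll version 3.2 | Last updated: 12 May 2010</p>
</div>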

Examples of SEO

Joe showed an example of a Google search he entered. He asked how to print a calendar in Excel. The first result returned was a link to Microsoft Office Online, telling you how to print a blank calendar. This was exactly what Joe wanted to do. What’s more, the content was an exact copy of what’s in the HTML Help provided with Excel, but laid out differently. So Microsoft has their SEO sorted out, and they are mirroring local help content in the web-based help.

Similarly, he entered a Google search for printing a Google calendar. Again, the Google web-based help was right at the top of the results. Google provides only web-based content, so they don’t need to support mirroring of content.

In contrast, when you search for “print an ical calendar”, there’s nothing near the top from Apple. So Apple needs to work on their SEO in this particular context.

How Google search works

It’s a combination of brute force and extremely clever technology:

  • Brute force: Google runs a gigantic server farm in Washington that buys electric power directly from a nearby hydroelectric dam.
  • Extreme smarts for indexing information: Google has clever and secret algorithms for ranking content.

How does the ranking work?

Joe mentioned the following factors that influence the ranking of search results:

  • Fresh and real content — The search index recognises a page as offering a solid body of information, not just fake content. This is an area in which we technical writers excel. Link farms put together random text and try to fool Google, but our standard documentation will register as high-quality content.
  • Nomenclature — We use the right terms and the right labels in our documentation. If people use these terms in their searches, our content will quickly match.
  • Links pointing to our content — In Google’s eyes, links indicate what other people think of the information on the page. When many people link to pages, this makes it more likely in Google’s eyes that this is well-respected content.
  • Google Ads — There is some indication that your pages will be better indexed if they contain Google Ads.

Joe gave us several sources as guidelines on optimising your content for search, along with a number of more in-depth hints on things like:

  • Including robots.txt and sitemap.xml files.
  • Registering your site at google.com. (It’s free.)
  • Including metadata such as title, description and keywords. (A minimal sketch follows this list.)
  • And more. I’m sure Joe would be delighted to pass on this information if you contact him.
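
As a minimal sketch (the page title, description and URLs are invented for illustration), the metadata might look like this in the head of a help page:

<head>
  <title>Printing an invoice - Acme Payroll Help</title>
  <meta name="description" content="Step-by-step instructions for printing an invoice in Acme Payroll." />
  <meta name="keywords" content="print, invoice, payroll" />
</head>

And a robots.txt file at the root of the site can tell crawlers where to find the sitemap:

User-agent: *
Allow: /
Sitemap: http://www.example.com/help/sitemap.xml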

A tool to help analyse your pages for SEO

One tool of huge benefit is Go Daddy Search Engine Visibility, which goes through your pages and finds anything that will hinder search engines. There are other tools that do the same thing.

Other search engines

Joe touched on other search engines, such as Yahoo and Bing. These are also important. Each has a separate registration procedure.

Getting other people to link to your site

Send people links to your content, for example via email. Submit links to relevant sites. It’s a tough process and there’s no easy way to do it.

You can also think about community building:

  • Start a LinkedIn group.
  • Add YouTube videos with links in the metadata.
  • Add a Facebook fan page.
  • Tweet links via Twitter.
  • Encourage bloggers.
  • Supply RSS feeds.
  • Consider translated content, even if it’s just a single page about your documentation in different languages. Google indexes other languages too, and people who speak that language will at least learn about your documentation and your product. They can then click through to read the information in English.

Creating a Google custom search

It’s easy and free to register to create a custom Google search. Register your site. Google supplies you with the code you can use to put the search box onto your site. Google will index your site for you. Your search box will bring up results for your site only.

My conclusion

This was intensely interesting and full of information. I could not hope to capture everything that Joe told us. Thank you Joe! This presentation gave me a lot to think about and experiment with.

AODC day 1: UA Design and Implementation for iPhone Apps

This week I’m at AODC 2010: The Australasian Online Documentation and Content conference. We’re in Darwin, in the “top end” of Australia. Crocs and docs, what more can you ask for! (Chocs, I hear you cry? Hear hear!) This post is my summary of one of the sessions at the conference. The post is derived from my notes taken during the presentation. All the credit goes to Joe Welinske, the presenter. All the mistakes and omissions are my own.

Joe Welinske presented the last session on Wednesday, titled “UA Design and Implementation for iPhone Apps”. I was excited to hear what Joe had to say on this highly topical subject.

During the presentation Joe passed around his iPad, much to the delight of the audience. On his iPad were two of the applications (apps) that he has been working on with his clients. His presentation discussed some aspects of the UA (user assistance, or help) and UI (user interface) for those apps.

Joe also pointed out an iPad application that you can use to do presentations where the content is stored in DITA:

  • Nomad

Joe’s introduction to his presentation

Joe set the scene by describing the world of mobile devices.

The number of mobile devices is growing fast, and the way that people consume applications is moving more and more towards mobile devices.  The environment is becoming very complicated for application developers, given so many different platforms and numerous different devices within each platform.

Look at the different types of gestures made available by the iPhone, iPad and other touch devices. Multi-touch features, such as double-tapping on a certain part of the app, make certain things happen. How do you make this obvious to the user?

Mobile devices typically have small screens, yet their apps still need to offer users a rich feature set, so the UIs become more complex.

There are literally hundreds of thousands of apps available.

iPhone

The set of tools Apple put together for the iPhone app developer (Interface Builder) makes it easy for Joe, as UA developer, to see what the developers are doing and how their design will fit in with his user assistance work. In particular, he makes use of the wizards and simulator provided by Interface Builder.

These tools mean that Joe can experiment with different types of user interface without creating an entire application and going through the application approval process. He can learn about the possibilities and keep up with what the developers are doing, so that he can then provide his user assistance content quickly when the developers send him the code base.

Joe showed us some screenshots of the Interface Builder (IB). Pretty interesting. Joe uses it to experiment with different types of help text, then he runs the simulator to see what the user would see when using the app and the help.

The iPhone SDK (software development kit) is free. You can run it on an Apple Mac. You don’t even need an iPhone.

UI text

Joe showed us the places the iPhone provides where you can put UI text. Examples are: labels, placeholders, alerts, tab bar text, segmented controllers, error messages, and more.

On a mobile device with such a small screen, every single word and every single phrase is important.

An interesting point: this changes how you deal with management, who traditionally measure progress by the number of pages produced. Here you can spend days settling on exactly the right word.

Coding language

Almost all iPhone developers use Objective-C, and will be doing so for the foreseeable future. As a UA developer, you should get to know Objective-C, and become familiar with your editing environment so that you know where certain objects appear. That way, you’ll get to know the places where your text will appear.

UI text on the Timewerks app

Timewerks is one of the apps Joe has been working on for a client. It’s an app for contractors (IT consultants, building consultants, etc) to keep track of their tasks. It provides a timer that will track the amount of time they spend on a task. They can then immediately record the time and send an invoice by email.

Joe showed us some examples of supporting UI text he’d created and changes to the wording on buttons etc, that improved the user’s understanding of what was happening. A very small change to UI text can have a big effect in improving how the user understands what’s happening.

UI text on the QuickOffice app

This is another of the apps Joe has been working on. Again, small adjustments can make a big difference. For example, in the Excel part of the application, they changed the words “Font format” to “Text format” and “Cell background” to “Background color”. This helped a number of users who were not Excel users, but were using the iPhone app.

Quick start guide for the Timewerks app

Next Joe walked us through a quick start guide he had created for the Timewerks application.

He pointed out that you need to create a help system that fits people’s expectations of a mobile application. They simply expect a minimal amount of content, yet you want to give them enough information. Joe came up with a quick start guide, with a tab bar at the bottom.

See the human interface guidelines for the iPhone. They’re great! But there’s nothing in there that’s relevant to what we do. Joe remarked that someone needs to write a guide for user assistance.

Use vocabulary that people are familiar with, such as “touch”, “pinch” and “swipe”.

Where possible, keep the content above the fold i.e. it should fit onto one screen. Joe’s content actually extends over two screens. He’s not completely happy with that, but at least it does give people the information they need.

Server-based help pages

The help pages are hosted on a server. If someone is online, the phone picks the content up from the server. If they’re not online, it uses a cached version.

Another point worth knowing is that the help content was viewed as part of the app. So Joe did not need to get it approved by Apple as a separate app.

Joe discussed the complexities of getting your content to look right and fit onto the iPhone screen. The content is basically HTML. The iPhone web browser is based on Safari. You can use the “viewport” tag in the HTML header to format the content for display on the iPhone.

<meta name="viewport" content="width=device-width" />

Other browsers will ignore this tag, but the iPhone will respond to it.

BTW, says Joe, you can use this same “viewport” tag on your own website, to have it appear nicely on the iPhone.

Web-based video tutorials

Joe likes the idea of YouTube-type tutorials for iPhone apps. He thinks that this is the type of medium iPhone users want to use for this type of information.

Syncing

There are almost certainly going to be some synchronisation tasks involved, to move data between a desktop application and the mobile app. Joe pointed out that you should make an effort to ensure some consistency in the UI of the desktop syncing app and the mobile app.

More about iPhone help

Interesting: MadCap Flare’s latest version has an output type to generate iPhone help.

Joe remarked on the way Flare and other authoring tools are going: the help output is not based on the native iPhone UI. Instead, it’s basically HTML and CSS made to look like the native iPhone UI. The advantage is that you can use the content on different platforms. Whether this is the best way to go is still an open question.

Joe also showed us the iPad user guide. The documentation UI exactly mirrors the UI of the iPhone/iPad itself. It’s based on HTML, CSS and sophisticated JavaScript created by the Apple developers to look exactly like the native iPhone/iPad UI.

Developing help for the iPad

The iPad presents four times the screen real estate. iPhone app vendors are already working on apps for the iPad. For example, QuickOffice have already made an iPad version — they’ve scaled everything to make it work well on the iPad.

There’s a new version of the iPhone/iPad SDK. It has a new iPad simulator too. You can develop iPhone and iPad apps using the same SDK. The main difference is that you have more screen real estate on the iPad.

What about Android?

Joe talked about the fundamental differences and similarities between the iPhone SDK and Android:

Android is based on Java.  There is a simulator too. The things you can do for UA are essentially the same as for the iPhone: Web-based help pages, native help text, different types of UI text.

The main difference is that it’s much more complicated to figure out how to do all of this for Android. There’s no unified development environment, and it’s not packaged the way Apple has done it. Most developers use Eclipse.

Joe ended up hiring an Android programmer to sit down with him for 3 hours and go through the whole scenario with him.

Another big difference is that you’re dealing with multiple Android Virtual Devices (AVDs). Essentially with Apple, you only have one version of the iPhone or iPad. With Android, virtually any type of device is possible. It doesn’t even need to be a phone. So you’re dealing with many different shapes.

Another difference is that in Android, all the UI text is in an XML resource file. For example:

<resources>
    <string name="start">START!</string>
    <string name="retry">RETRY</string>
</resources>

My conclusions

This was an awesome window into the world of iPhone and iPad UA development. It sounds like a lot of fun. Thank you Joe.

AODC – web technology and standards

This week I’m attending the Australasian Online Documentation and Content Conference (AODC) on the Gold Coast in Queensland, Australia. One of the sessions today covered web technology and standards, presented by Joe Welinske.

Joe is the president of Seattle-based WritersUA. This is the second of his sessions that I’ve attended. The first is covered in my earlier blog post, on trends, tools and technologies in online documentation. Today’s session was more technical, covering various standards in the following groups:

  • W3C standards such as HTML, XHTML, XML/XSL, Web Accessibility Initiative, CSS.
  • W3C hybrids such as HTML 5 (more below), AJAX, RSS and jQuery.
  • OASIS technologies — DocBook and DITA.
  • Other open source technologies — Oracle Help, IBM WebSphere.
  • Proprietary technologies that are still relevant and useful because they are so widely adopted and stable, such as Adobe PDF, AIR and even Microsoft Silverlight (emerging).

As well as talking about the above standards, Joe discussed each one’s possible application to technical communication.

In this blog post, I’ve extracted the subjects that were new to me and a couple of interesting items. Joe covered a lot more than I’ve mentioned here.

HTML 5

HTML 5 is an emerging standard that Joe feels we need to keep on our radar. It’s also known as “Web Applications 1.0”. It includes capabilities that are relevant to technical writers. Specifically, it supports new modular elements as part of a web page, e.g. <aside>, <article> and <nav>, so a web page can recognise chunks of content in a non-linear way (see the sketch below). Here’s a link Joe gave us to a demonstration: HTML 5 Support by Browser
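
As a rough, hypothetical example of that chunking, a help topic marked up with the new elements might look like this:

<article>
  <h1>Printing an invoice</h1>
  <p>To print an invoice, choose File, then Print.</p>
  <aside>Tip: you can also press Ctrl+P.</aside>
</article>
<nav>
  <a href="invoices.html">All invoice topics</a>
</nav>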

I found this really interesting, after all the discussion in previous sessions about modular documentation and structured authoring.

Joe thinks that, because HTML 5 has a high amount of interest from some big players, it will probably go ahead.

A question from the floor led to a discussion around HTML’s leniency as opposed to the strictness of XML/XHTML. HTML 5 may meld HTML’s leniency with the semantic tagging provided by XML. This is potentially useful for less technically savvy authors, because browsers and other viewers are instructed to render a page if at all possible, even if it contains markup errors.

Other snippets

Some more interesting items from Joe’s talk:

  • Comparison of XML rendering via XSL versus CSS: Printing XML: Why CSS Is Better than XSL.
  • Neat use of Flash for an online tutorial, from Verizon Wireless — demonstrates how to use a mobile phone. Click the ‘Interactive “How To” Simulator’. It has a movie of a hand clicking the buttons, plus a block of text in the right-hand panel. The text is in sync with the movie, and you can influence the movie by clicking the text.

My conclusions

Joe covered a huge area in this short session, and his knowledge is huge too. Thank you Joe! The next two days of the conference include other sessions with more detail on some of the areas which Joe has introduced, including one on AIR (Adobe Integrated Runtime) tomorrow.
