Author Archives: Sarah Maddox
I’m attending tcworld India 2016 in Bangalore. The keynote on day 2 of the conference was “The Future is Intelligent Information”, presented by Michael Fritz. These are my notes from the session. All credit goes to Michael, and any mistakes are my own.
Michael’s presentation was partly a recap of things said at the conference yesterday. It also discussed digitisation and its consequences for tech comm in the future. He also introduced a program that tekom started last year.
Notes from Michael’s presentation
There are many buzzwords around. “Smart” is one of them. Many things are smart these days: smart cars, smart homes, smart watches, smart fridges. These smart things can store and process data. They become parts of smart services. For example, instead of talking about a car, we talk about mobility. A mobility app may send you a car, or tell you to walk to your destination because you need more exercise. Other examples are eHealth, smart shopping services, and so on.
What else is digitised? Production, for one. Smart components can join up and talk to each other via machine-to-machine communication. Products configure themselves automatically, forming intelligent products. Sometimes these are called cyber-physical systems. This type of production system may not focus on mass production, but rather on producing just enough for the people in the neighbourhood.
Another buzzword is “ubiquitous data”. All things are on the Web. The way in which we use data has changed. We used to use relational databases, querying them to produce reports of aggregated data over specific time frames. Today we just search for data when we need it. Data is everywhere and can be used everywhere.
The economy is changing as a result of digitisation. Michael described Uber as an example. Users can rate drivers, and if a driver is consistently down-rated he or she won’t get any more fares. Taxis are suffering as a result of ride-sharing services like Uber.
Michael mentioned a few words of caution, such as cyber security (data is valuable), and the dangers of large companies dominating services by their digital platforms.
What are the consequences for technical communication?
- Smart products: Usage information should be embedded in the product so users can get it easily, or the information should be easily accessible on the Internet. Products (here Michael looked at his bottle of water) should have connectors, such as codes that you can scan to get the information you want.
- Smart services: Usage information won’t be stand-alone. Instead it’ll be part of an information chain. For example, when using Uber, the information is part of the whole process of using the service. When your car has a problem, the in-car screen will tell you what the problem is and where to find the nearest service station. It will even communicate the problem and your arrival to the service station.
- Smart production: Usage information must be standardised so that it can be easily merged and delivered to the user. Remember, this is the scenario where components have automatically assembled themselves. There’s probably no printer around, so the information must be available in some other way.
- Ubiquitous data: Usage information must be accessible from everywhere and from all the different device types that people are using.
Technical communicators should be ready for change. Michael mentioned the enticing prospect of an Uber for technical communicators. And we must be aware of cyber security, and make sure we take care of the security of the usage information itself. Be careful not to send malware to our customers!
Michael discussed some challenges we face. We need to know the reality of today: we’re not yet in the world of smart things and smart data. Paper and PDF still prevail.
tekom’s intelligent information initiative
Michael described a tekom initiative promoting a paradigm change from classical publishing to intelligent information delivery. The focus is on the (electronic) delivery of information. We shouldn’t focus so much on the creation of information, but focus first on delivery. Don’t think of documentation or even of topics any more. Think of usage situations, use cases, the customer journey. This will lead to intelligent information, which is the right information at the right time for the right person.
There’s a lot more to this concept of intelligent information, and content creation based on intelligent information: adding metadata, analysing use cases, the types of information products we produce (paper, mobile, augmented reality, embedded display, online, and so on).
Tekom has kicked off the intelligent information initiative, and is running various working groups to move it forward.
Thanks Michael, this was an informative and entertaining session. Tekom’s intelligent information is an exciting initiative indeed.
This week I’m attending tcworld India 2016 in Bangalore. I was honoured to be invited to present the keynote address on day one. People seem to have enjoyed it. This post is a short summary of the session.
The presentation is called The future *is* technical communication. It’s a look at the fast-moving world of technology, the ways people interact with technology, and in particular how technology affects the way we communicate. I’m proposing that communication via technology is core to our experience of the world. We, as technical communicators, are in a very good position to grab the opportunities offered by this technology-rich world.
The slides are available on SlideShare: The future *is* technical communication.
An overview of the topics covered in the presentation
- Technology is fast-moving and confusing.
- It’s hard to keep up – blink, and you’ve missed something.
- People suffer from cognitive overhead.
- Sometimes they’re even bamboozled by technology.
- Technology is not a lost cause.
- People love technology.
- People have a relationship with their tech.
- People use technology for communication in weird and wonderful ways.
- The way people absorb information has changed. It’s now fluid and asynchronous.
- Immersive technology offers enhanced, full-on experiences.
- In our weird and wonderful world, even inanimate things communicate with each other. We call this the Internet of Things, or IoT.
- Some people are doing things that seem way out there. Until they become the norm.
I proposed a technical communicator’s mission statement, based on the new ways people are communicating and experiencing information.
And we looked at some things we can do right now to grab the opportunities this technology-rich world is making for us as tech communicators.
I’m attending tcworld India 2016 in Bangalore. Mayur Bhandarkar gave a presentation entitled “Predicting User Questions to Build an Information Repository”. These are my notes from the session. All credit goes to Mayur, and any mistakes are my own.
I was especially interested in this session because it advocates the use of FAQs, a document type often criticised in the technical writing community. One of the items in the presentation overview was “What the FAQ”. Funny and clever!
Mayur started by saying that presenting information in a structured way is a failure. Rather, queries are the path to success.
What are the advantages that FAQs offer? FAQs give the reader the impression that the document is going to answer a problem that the reader has. The content makes you feel that you’re having an interaction with the system. And an FAQ provides a complete solution for a particular problem. Another advantage is that FAQs seem more informal.
Look at LinkedIn for example: they provide documentation in the form of FAQs.
How can you predict users’ questions? Mayur gave an example of predicting questions for users of a dish washer, and then for a user interface.
How to go about it:
- List the features in the user interface.
- Decide the types of questions: why, what, and how.
- Build the question repository.
  - Why should I use the x feature? (concept)
  - What actions can be performed using the x feature? (reference)
  - How do I use the x feature? (task)
Mayur related the above questions to the DITA types of concept, reference and task. Using the above examples, you can generate questions for each feature.
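The repository-building steps above can be sketched in a few lines of code. The feature names and question templates here are my own illustration, not from the talk:

```python
# Map each DITA topic type to a question template, following the
# why (concept) / what (reference) / how (task) pattern described above.
TEMPLATES = {
    "concept":   "Why should I use the {feature} feature?",
    "reference": "What actions can be performed using the {feature} feature?",
    "task":      "How do I use the {feature} feature?",
}

def build_question_repository(features):
    """Generate a why/what/how question for every listed UI feature."""
    repository = []
    for feature in features:
        for dita_type, template in TEMPLATES.items():
            repository.append({
                "feature": feature,
                "dita_type": dita_type,
                "question": template.format(feature=feature),
            })
    return repository

# Two hypothetical features yield six questions, one per type each.
repo = build_question_repository(["export", "search"])
for entry in repo:
    print(f'[{entry["dita_type"]}] {entry["question"]}')
```

In a real pipeline, each generated question would become the title of a DITA topic of the matching type.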
The next step is to organise the information so that it’s useful to the user. Mayur showed us a system that prompted the user for the language and the type of information they wanted, and then showed a list of relevant questions. The user can select a question to see the answer. Mayur also discussed online sites that organise FAQs by most popular, most recently added, or modified features.
Users don’t like reading long procedures. When creating the FAQs, if a procedure ends up being very long, decide whether to change the procedure into a video, simulation, or other easily consumable media. You could split the topic into smaller topics, but Mayur says that users prefer media. Installation guides are an example.
A trend that the team is following is the use of long-tail keywords for SEO (search engine optimisation). A “keyword” can actually represent the whole task that the user is trying to perform. Using the FAQ format, an FAQ is specifically related to such a keyword.
Someone from the audience asked about the problem of maintenance of videos. Mayur confirmed that these are maintenance heavy, and that the team produces videos based primarily on user demand, as represented by the support sites.
Another audience question was about basing your documentation on the UI, which is something technical writers don’t normally recommend. Mayur replied that this approach doesn’t work for troubleshooting or command-line applications, but only for simple UI-based documentation. It’s a solution for showing how to use the product.
An interesting point that Mayur mentioned: his team found that their PDF documents were being crawled more thoroughly by search engines than their online HTML docs with the same content.
Thanks Mayur, this was an interesting perspective on FAQs.
I’m attending tcworld India 2016 in Bangalore. Pavithra Garre gave a presentation entitled “Human Auditory Processing and Speech Recognition—Potential Latencies and Benefits for Documentation”. These are my notes from the session. All credit goes to Pavithra, and any mistakes are my own.
Pavithra Garre is an engineer in design technology at Samsung Electronics in South Korea. She started by showing us a video clip about communication as an innate human ability, and about the vision of interacting with computers via speech recognition, and the evolution of speech recognition technology.
Pavithra’s presentation was very interactive. She asked questions and chatted to the audience throughout. The presentation covered the layers of speech recognition architecture, the modes of speech recognition, speech identifiers and tagging, CMS interpretation and custom delivery.
Pavithra described a three-layer architecture:
- Speech recognition: There are different modes of speech recognition: converting digital audio to simpler acoustic forms; matching units of speech; a complex lexical decoding system based on pattern matching; applying grammar, such as in predictive typing; and phoneme identification. There are challenges in speech recognition technology, such as background noise reduction, the volume of data gathered and the compression needed to reduce it, and the problem of energy consumption.
- Tagging the different elements of speech to present to the CMS: The software needs to identify what the person is talking about, and tag each element appropriately. Once the speech is tagged, it becomes data. Examples of tags may be a form of XML, or VTML, or a more complex tagging format like ID3 or ID3V2Easy.
- Documentation in a database: Content is information plus data. The indexed tag and associated content are combined to form or retrieve a document, in what Pavithra calls “CMS interpretation”.
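A toy sketch of the second and third layers might look like the following. The speech-recognition layer is assumed to have already produced a text transcript; the tag names and the content store are invented for illustration and are not from Pavithra’s talk:

```python
# Hypothetical content store standing in for the CMS database.
CONTENT_STORE = {
    "install": "To install the product, run the setup wizard.",
    "update":  "Updates are downloaded and applied automatically.",
}

def tag_transcript(transcript):
    """Tagging layer: mark each element of the recognised speech that
    matches a known topic, turning the speech into data."""
    tags = []
    for word in transcript.lower().split():
        if word in CONTENT_STORE:
            tags.append({"tag": "topic", "value": word})
    return tags

def cms_interpret(tags):
    """'CMS interpretation': combine the indexed tags with the associated
    stored content to retrieve a document."""
    return [CONTENT_STORE[t["value"]] for t in tags if t["tag"] == "topic"]

tags = tag_transcript("How do I install the product")
docs = cms_interpret(tags)
```

Real systems would use a richer tagging format (the XML-like formats Pavithra mentioned) rather than plain keyword matching, but the flow from tagged speech to retrieved content is the same.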
Some well known examples of speech recognition software:
- Siri by Apple
- Genie by Microsoft
- Google Speech
- and more
Where can we use this technology and the voice bank containing the derived content?
- Marketing agility
- Big data and analytics
- Resolving disputes about customer interactions in a help desk (this suggestion came from the audience)
- Better performance
Pavithra also described things you need to take into account, such as data volume and data migration.
There was a lively and interested discussion at the close of the presentation. Thanks Pavithra for an interesting presentation!
I’m attending tcworld India 2016 in Bangalore. Surag Ramachandran presented a session called “Technical Writing for Big Data Applications”. These are my notes from the session. All credit goes to Surag, and any mistakes are my own.
Surag Ramachandran works in a financial services business unit, focusing on data analytics in enterprise big data applications. He presented a couple of use cases from the financial industry, then described how the applications evolved and how that led to a shift in the technical writing processes.
Surag said that the financial industry has many use cases, including risk reduction and financial crime detection. When it comes to big data, most of the use cases are in the marketing area, analysing customer behaviour.
The new enterprise IT model is a convergence of SMAC – social media, mobile, analytics and cloud. These all contribute to big data. Big data consists of the structured and unstructured data coming from these sources.
The financial services industry is highly data driven. Analytics is an evolving field. Predictive modelling tools analyse incoming data and give the bank actionable intelligence. Real-time crime analysis can pick up things like insider trading.
Looking at the core banking systems, you’ll find common platforms: frameworks that contain common modules to support big data. For example, the IT team develops an application for the banking sector, then adapts it for the insurance sector. In other words, the strategy is software reuse.
Thinking about technical documentation, there’s a clear case for content reuse. Write installation guides and user guides that describe the common modules. Create the content as DITA files, store the files in a content management system, and reuse them in the documentation for each individual platform. Within the applications themselves, there’s opportunity for content reuse too.
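The reuse strategy could be sketched like this. The module, platform, and topic names are my own invention; a real pipeline would use DITA maps and conrefs in a CMS rather than Python dictionaries:

```python
# Common-module topics, written once and shared across platforms.
COMMON_TOPICS = {
    "install-core": "Installing the core framework...",
    "configure-db": "Configuring the database connection...",
}

# Platform-specific topics.
PLATFORM_TOPICS = {
    "banking-workflows": "Setting up banking workflows...",
    "insurance-claims":  "Processing insurance claims...",
}

# Each platform's "map" lists the topics its guide is assembled from,
# mixing reused common topics with its own.
PLATFORM_MAPS = {
    "banking":   ["install-core", "configure-db", "banking-workflows"],
    "insurance": ["install-core", "configure-db", "insurance-claims"],
}

def assemble_guide(platform):
    """Merge the reused common topics with the platform-specific topics."""
    guide = []
    for topic_id in PLATFORM_MAPS[platform]:
        guide.append(COMMON_TOPICS.get(topic_id) or PLATFORM_TOPICS[topic_id])
    return guide
```

The point of the sketch: the common topics exist in exactly one place, so a fix to the installation content flows into both the banking and insurance guides automatically.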
A lively question-and-answer session followed Surag’s talk, with many questions from the audience about cross-industry platform reuse, training for people interested in getting into the industry, challenges in documenting features in the big data industry, and other aspects of the talk.