Category Archives: workshops

From the Tech Writing 101 workshop at Write the Docs AU 2019

Sydney hosted the annual conference of Write the Docs Australia this week. As part of the conference, I ran a Tech Writing 101 workshop. It was a very rewarding experience. If you’re ever considering running a conference or workshop for a group of technical writers, do it! Tech writers are an engaged, humour-loving, smart group of people.

The workshop

The workshop teaches the principles and patterns of effective technical writing. Before the event, the participants do some pre-reading. Then, during the two-hour workshop, we do a series of exercises and discussions based on the principles in the pre-reading. This is a good way of cementing the patterns into your brain. The next time you write an overlong sentence, or use the passive voice, you’re likely to recognise the anti-pattern and do something to correct it.

We had around 45 participants at the workshop during the conference. Here’s a shot of the room during the workshop. At this stage, the participants had just finished one of the exercises and were discussing their solutions with their partners:

Three assistants helped with running the workshop. They walked around the room answering questions, assisting with logistics, and generally making sure everyone had a good experience. A big thank you to:

The Tech Writing 101 workshop was developed by tech writers at Google to train engineers and others in the principles of effective technical writing. Google is currently preparing a revised, improved set of pre-reading and presentation content, which will be available for tech writers all over the world who want to run the workshop. Stay tuned for news on this front.

What else happened at the conference?

Write the Docs Australia 2019 was jam-packed with talks, workshops, lightning talks, and unconferences. Take a look at the full program.

Here’s the Twitter hashtag: #wtdau2019.

Thanks so much to all the organisers and attendees. Write the Docs AU 2019 was awesome. See you at Write the Docs AU 2020!

Join the Kubeflow doc fixit at Write the Docs AU conference

Are you coming to the Write the Docs Australia 2019 conference on 14-15 November in Sydney? You’re invited to join us in a two-hour doc fixit on Thursday afternoon, as part of the conference.

Become a contributor to an open source project, learn a bit about how open source works, and help improve the experience of Kubeflow users. All in just two hours!

During the fixit, you’ll add a touch of tech writer shine to the Kubeflow docs. Docs are a super important part of the user experience of any product. Typos and grammatical inconsistencies can spoil that UX. Yet typos and odd syntax creep into any doc set so easily, especially when the doc set is largely written by people who aren’t tech writers. You can help us set things right.

Where and when

The doc fixit is part of the Write the Docs Australia 2019 conference.

Registration

You don’t need to register separately for the doc fixit. Just register for the conference, then come along to the fixit on Thursday.

Your friendly doc fixit helpers

The doc fixit hosts are:

What happens at the fixit

Here’s how the fixit will work:

  • Before the fixit, I’ll create a spreadsheet with a list of doc bugs that need fixing. They’ll mostly be small things: typos, consistency in page structure, capitalisation, and so on.
  • At the start of the fixit, I’ll give a very short talk introducing the product (Kubeflow) and open source.
  • Then the group will look at the list of bugs and each person will choose what they want to do.
  • My assistants and I will help people create GitHub IDs if necessary.
  • Each person will create an issue in the GitHub issue tracker, describing the bug they’re about to fix.
  • Each person will then update the relevant doc on GitHub and send a pull request (PR) for review.
  • My assistants and I will help people sign the contributor licence agreement if necessary. (A bot will prompt them to do this when they send their first PR.)
  • My assistants and I will review the pull requests and approve each one when it’s ready.
  • The continuous merge/publish tools on the GitHub project will merge the change and publish the update in the docs.
  • The contributor will see their update appear in the docs!

I’ll also prepare a guide for fixit participants, with the basics on how to work in GitHub and how to update the Kubeflow docs. The guide, in combination with the three of us helping during the fixit, should make the fixit fun and a useful learning experience for everyone.

Prerequisites

Here’s how you can prepare for the Kubeflow doc fixit:

  • Bring a laptop with WiFi capabilities.
  • If you don’t already have a GitHub account, sign up for one. If you have time to do this before the start of the fixit, that’s great. If not, you can do it during the fixit.
  • Sign the Google Contributor License Agreement (CLA). If you have time to do this before the start of the fixit, that’s great. If not, you can do it during the fixit.
  • It’s not mandatory to do any prework, but it will help if you know some Markdown.


Wikidata, open data, and interoperability

This week I’m attending a conference titled Collaborations Workshop 2019, run by the Software Sustainability Institute of the UK. The conference focuses on interoperability, documentation, training and sustainability. I’m blogging my notes from the talks I attend. All credit goes to the presenter, and all mistakes are my own.

Franziska Heine presented a keynote on Wikidata, a Wikimedia project that provides structured data to Wikipedia and other Wikimedia projects. Franziska is Head of Software & Development at Wikimedia Deutschland.

Franziska’s talk was titled “Wikidata, Interoperability and the Future of Scientific Work”.

The Wikidata project

Franziska said she’s very excited to be here and talk about Wikidata, as it’s such a big part of what her team does. She cares about making Wikipedia, which started almost 20 years ago, into something that remains meaningful in the future.

Wikidata makes interwiki links semantic, so that computers can understand the relationships between the pieces of data. When you ask Siri or Google Assistant a question, the answer often comes from Wikidata. Franziska also showed us a map of the world with a data overlay sourced from Wikidata. (I can’t find a link to that specific map, alas.)

Wikidata has more than 20,000 active editors per month. That’s the highest number in the entire Wikimedia movement, surpassing even the English-language Wikipedia.

How Wikidata works

The core of Wikidata is a database of items. Each item describes a concept in the world. Each item has an ID number (“Q number”). Items also have descriptions and language information. In Wikipedia, the content for each language is completely separate. So, you can have the same topic in various languages, each with entirely different content. By contrast, in Wikidata all the languages are properties of the single data item. So, for example, each item has a description, and the description may be available in various languages.

Each item is also linked to the corresponding articles in the various language editions of Wikipedia.

Each item has a number of statements (pieces of information), such as date of birth, place of birth, date of death, and so on. Each statement lists the sources of the information. It is of course possible that different sources may provide conflicting information about a particular statement. For example, there may be different opinions about the date of birth of a person.
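To make the data model a little more concrete, here’s a small Python sketch (my own illustration, not from the talk) that fetches one item’s JSON from Wikidata and prints a few of its multilingual descriptions and statements. Q42 is the item for Douglas Adams, and P569 is the “date of birth” property; the endpoint and field names follow Wikidata’s standard entity-data format.

```python
import requests

# Fetch the full JSON for one Wikidata item (Q42 = Douglas Adams).
# Special:EntityData returns the item's labels, descriptions, sitelinks,
# and statements ("claims").
url = "https://www.wikidata.org/wiki/Special:EntityData/Q42.json"
entity = requests.get(url, timeout=30).json()["entities"]["Q42"]

# The same single item carries descriptions in many languages.
for lang in ("en", "de", "fr"):
    desc = entity["descriptions"].get(lang, {}).get("value", "(no description)")
    print(f"{lang}: {desc}")

# Statements are grouped by property ID. P569 is "date of birth".
birth_statements = entity["claims"].get("P569", [])
print("Number of 'date of birth' statements:", len(birth_statements))
```

Running this shows a single item carrying descriptions in several languages, with each statement able to hold more than one value from different sources.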

Wikidata can be edited by people, but there are also bots that do the updates. The concepts within Wikidata are not built primarily for humans to navigate, but rather for machines to understand. For example, Wikidata is able to give Siri and Google Assistant information in ways that Wikipedia can’t.

But can humans look at the data?

Yes! You can use the Wikidata Query Service to access the data. To get started, grab an example query and then adapt it. The query language is SPARQL.
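As an illustration (not one shown in the talk), here’s a minimal Python sketch that sends a SPARQL query to the public query service endpoint. The query is adapted from the service’s canonical “house cats” example: P31 is the “instance of” property and Q146 is the item for house cat.

```python
import requests

# Send a SPARQL query to the public Wikidata Query Service endpoint.
# The query asks for items that are an instance of (P31) house cat (Q146),
# along with their English labels.
query = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q146.
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 10
"""

response = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": query, "format": "json"},
    headers={"User-Agent": "wikidata-query-example/0.1"},
    timeout=60,
)
response.raise_for_status()

# Each binding maps the query's variables to their values.
for row in response.json()["results"]["bindings"]:
    print(row["item"]["value"], "-", row["itemLabel"]["value"])
```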

Franziska showed us some interesting query results:

  • The location of trees grown from seeds that have travelled around the moon. 🙂
  • Natural arches around the world
  • Cause of death of members of noble families

The expanding use of Wikidata

Wikidata was created to help the Wikipedia team maintain their data. Over the last few years, Wikidata has become a useful tool for other Wikimedia projects and even other organisations to manage their own data and metadata. Franziska showed a diagram of a future where various wikis can share and interlink data.

Existing projects:

  • The Sum of all Welsh Literature – a project presented by Jason Evans at the WikiCite Conference 2018.
  • Gwiki: Combining Wikidata with other linked databases by Andra Waagmeester and Dragan Espenschied.

Franziska showed us some graphs from the above projects, to demonstrate the research value that comes out of combining data from different large databases and analysing the results. This is what we’re about, she said: opening up data and making it freely accessible.

How interoperability fits in

Interoperability means more than just technical standards. Franziska referred to Mark Zuckerberg’s recent speech about the future of Facebook. Interoperability in his world, she commented, means the ability to communicate with people who are important to you, regardless of which platform they’re on.

Looking at the Gwiki project mentioned above: it will connect very different people with each other – people with different languages, different cultures, and different roles (academia, industry, and so on). To facilitate this meeting of different worlds, we need to build tools and platforms – this is the social aspect of interoperability.

Instead of independent researchers working in their own worlds, they’ll be able to cooperate across disciplines, provided they have shared metadata and infrastructure. This is the data aspect of interoperability.

In closing

Scientific knowledge graphs are key, said Franziska. They enable data analysis and power artificial intelligence. Semantic data and linked data are core to innovation and research.

We need to be able to provide data in a way that makes sense to people. This is where the infrastructure fits in. We must provide APIs and other interfaces that make it appealing to use and integrate the data. This is the essential infrastructure for free knowledge, so that research can transcend disciplinary silos, and we can make data and research available to everyone.

Thank you Franziska for a very interesting deep dive into Wikidata, interoperability, and open data.

Open data reduces friction in sharing and use of data

This week I’m attending a conference titled Collaborations Workshop 2019, run by the Software Sustainability Institute of the UK. The conference focuses on interoperability, documentation, training, and sustainability. I’m planning to post a blog or two about the talks I attend. All credit goes to the presenter, and all mistakes are my own.

I’m very much looking forward to the conference. The audience is slightly different from the developer-focused and tech-writer-focused gatherings that I see more often. At this conference, attendees are a lively mix of researchers, engineers, educators, and others. The goal of the Software Sustainability Institute is to cultivate and improve research software.

Better software, better research

Opening keynote by Catherine Stihler

Catherine Stihler is the Chief Executive Officer of Open Knowledge International. She presented the opening keynote of the conference.

Catherine’s talk was titled “Transporting data more easily with Frictionless Data”.

Frictionless Data

Frictionless Data is one of the primary initiatives of Open Knowledge International. The website offers this description:

Frictionless Data is about removing the friction in working with data through the creation of tools, standards, and best practices for publishing data using the Data Package standard, a containerization format for any kind of data.

These are the challenges the Frictionless Data initiative addresses:

  • Legal barriers
  • Data quality
  • Interoperability
  • Hard-to-find data
  • Tool integration

A goal of Frictionless Data is to provide a common packaging format that can hold many different types of data, so that people can understand and use your data as easily as possible. Catherine used the metaphor of shipping containers to talk about data packages.

  • Publishers can create the data packages.
  • Consumers can plug the data packages into their systems.

There’s more information at the frictionlessdata.io website, including the Frictionless Data specifications and software (apps, integrations, libraries, and platforms).
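To make the shipping-container metaphor concrete, here’s a minimal sketch (my own, based on the published Data Package specification rather than on the talk) of a datapackage.json descriptor generated with the Python standard library. The package name, file path, and column names are all illustrative.

```python
import json

# A minimal Data Package descriptor: a datapackage.json file that describes
# the data files ("resources") shipped alongside it. Field names follow the
# Data Package specification; the CSV file itself is illustrative.
descriptor = {
    "name": "rainfall-observations",
    "title": "Example rainfall observations",
    "licenses": [{"name": "CC0-1.0"}],
    "resources": [
        {
            "name": "rainfall",
            "path": "data/rainfall.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "date", "type": "date"},
                    {"name": "station", "type": "string"},
                    {"name": "rainfall_mm", "type": "number"},
                ]
            },
        }
    ],
}

with open("datapackage.json", "w") as f:
    json.dump(descriptor, f, indent=2)
```

A consumer can then read datapackage.json to discover which data files the package contains and what each column means, without guessing at formats.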

As well as revolutionising how data is shared and used, the Frictionless Data initiative aims to massively improve data quality.

Open data

Open Knowledge International is a strong supporter of open data. They’re currently campaigning against the proposed EU copyright directive, specifically Article 13, which many fear will result in upload filters as the big content aggregation companies try to avoid falling foul of the law.

Catherine spoke passionately about the issues around political advertising on social media, the Responsible Data initiative, and the Open Definition which sets out principles defining openness in relation to data and content.

Catherine said the key challenge right now is that we could go down a closed, proprietary route where only those who have money will have access to knowledge. We need to win the debate about the importance of an open society and open and free knowledge.

Thank you Catherine for a spirited introduction to Open Knowledge International and its work.

Invitation to a Tech Writing 101 workshop, Melbourne, November

Are you a software engineer wanting to learn the patterns of technical writing? Or a technical writer wanting to refresh the ABCs of our craft? Or someone who loves debating and exercising good writing styles? Join us for a Tech Writing 101 workshop in Melbourne, Australia, on Thursday, 15th November.

The workshop is part of the Write the Docs AU conference, and the cost of the workshop is included in the conference registration.

Quick reference

Workshop name: Tech Writing 101

Date & Time: Thursday, 15th November 2018, 2.30pm – 4.30pm

Location: Library at the Dock, 107 Victoria Harbour Promenade, Melbourne, Australia

Signup: Register for the Write the Docs AU conference

Prework and what to bring

Before attending the workshop, you need to do a small amount of pre-reading (about half an hour).

This is where you discover that tech writing patterns are interesting and fun.

On the day of the workshop, bring a laptop with a text editor and an internet (WiFi) connection to do the exercises.

Workshop overview

The workshop leads you through a series of exercises to improve the clarity, readability, and effectiveness of your writing. You’ll work in pairs, learning from an experienced Google technical writer (me) and from each other.

Topics include the importance of knowing your audience; what can go wrong when you use passive voice; how to avoid getting tangled up in long sentences and disorganised paragraphs; how lists have taken over the world.

By the end of the session you’ll also think differently about toothbrushes.

During the workshop, you’ll apply the principles you’ve read in the prework. We’ve found this method of learning is highly effective. And it’s just plain fun. The workshop has been run at SREcon Europe in 2017 and at SREcon US in 2018, where it was very well received. People said they came away with useful skills and cleaner teeth.

Who’s welcome

The intended audience for the workshop is people who’re interested in learning how to write efficiently and effectively. That includes software developers, support engineers, UX specialists, product managers, technical writers, editors – well, really, everyone who needs to write technical content.

I hope to see you there!

Instead of a picture of a toothbrush (as that’d be a spoiler), here are some patterns from a recent walk in the bush:
