This is a quick tip about a useful Git technique. It took me a while to figure this out when I first needed it. I was working on a pull request (PR) on one computer when I was in the office. Then I wanted to continue working on the PR from my laptop at home. I needed to transfer my work from my work computer to my laptop, using GitHub as middleman.
Another scenario for this technique is when you’ve used the GitHub UI to make some changes, but now you want to swap to command-line usage while in the middle of your PR. This could be useful, for example, if you find that your PR needs to include changes to more than one file, which is hard to do in the GitHub UI.
You need Git on your local computer. See the Git installation guide.
I’m assuming the following things:
- You’re comfortable using command-line Git.
- You already have a PR that you’ve been working on, and you want to make a local copy of the PR so that you can update one or more files in that PR. (If you haven’t yet created a PR, you can follow this quick guide to working on GitHub, which I created for the Kubeflow open source doc set that I’m currently working on.)
- You’ve pushed your latest changes up from your other machine to GitHub, so that GitHub contains the latest version of the PR.
All you want to do now is to copy a particular PR down from GitHub so that you can work on it on this computer.
Clone the repository to your computer
If you’ve already cloned the GitHub repository to your local computer, you can skip this section. This would be the case if you’ve previously done some work on this repository and on this computer.
You need a clone of the GitHub repository on the computer you’re currently using, so that Git can track the changes you make in the repository. Usually, you fork the main repository on GitHub before creating a PR. The reason for creating the fork is that you probably don’t have update rights on the main repository. I’m assuming that you have a fork of the repository, and therefore your next step is to clone your fork of the repository to your local computer, as described below.
Note: If you’re working directly on the main repository rather than on your fork of the repository, then you should clone the main repository to your local computer.
To clone your fork of the repository onto your local computer:
- Find your fork of the repository on GitHub. For example, if the repository name is “awesome-repo”, then the fork should be at this URL: `https://github.com/your-github-username/awesome-repo`
- Open a command window on your local computer.
- Run the following commands to clone your forked repository onto your local machine. The commands create a directory called `git-repositories` and then use HTTPS cloning to download the files:

```shell
mkdir git-repositories
cd git-repositories/
git clone https://github.com/your-github-username/awesome-repo.git
cd awesome-repo/
```
If you prefer, you can use SSH cloning instead of HTTPS cloning:
```shell
mkdir git-repositories
cd git-repositories/
git clone git@github.com:your-github-username/awesome-repo.git
cd awesome-repo/
```
You’re now in a directory called `awesome-repo`. If you take a look at the files in the directory, you should see some file and directory names starting with `.git`, indicating that Git is tracking the files in the directory. You should also see the files and directories belonging to the GitHub repository that you cloned.
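If you’d like to see what that tracking looks like without touching your real clone, here’s a minimal sketch. It uses `git init` in a throwaway directory as a stand-in for the clone you just made; the `awesome-repo` name is just the placeholder from this post:

```shell
# A quick way to confirm that Git is tracking a directory.
# (git init here stands in for the clone you just made.)
set -e
cd "$(mktemp -d)"
git init -q awesome-repo
cd awesome-repo
ls -ad .git    # the hidden .git directory is where Git keeps its tracking data
git status     # reports the current branch and any uncommitted changes
```

In a real clone you’d see your project’s files alongside `.git`, and `git status` would tell you which branch you’re on.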
Download the PR to your computer
Follow these steps to copy the PR from GitHub to your local computer:
- Find your PR on GitHub and check the name of the branch that contains the PR. In the screenshot below, the branch name is gcpsdk:
- Go to the directory containing the repository on your local computer. The commands below assume that you’ve cloned the repository into a directory named `awesome-repo`, as described in the previous section.
- Run these commands to copy the branch containing your PR to your computer. In the commands, change `your-branch-name` to the actual branch name:

```shell
git status
git checkout master
git fetch origin your-branch-name:your-branch-name
git checkout your-branch-name
```
That’s it. You’re now in the branch that contains all the updates from your PR. You can continue working on the files or adding new files. Remember to `git commit` and `git push` as usual, to copy your updates back up to GitHub.
Here’s an explanation of each of the above commands:
- `git status`: Run this command to see where you are and what the current status of your files is. You may have been busy with something that still needs tidying up before you can create a new branch.
- `git checkout master`: Go to the master branch, to make sure you have a tidy place from which to create a new branch.
- `git fetch origin your-branch-name:your-branch-name`: This is the key command. It tells Git to copy the branch from GitHub (“origin”) and to create a new branch on your local computer with the same updates and the same name.
- `git checkout your-branch-name`: This puts you in the new branch on your local computer. This branch now has the same updates as the equivalent branch on GitHub.
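If you’d like to try the fetch pattern safely before running it against GitHub, here’s a self-contained sketch that simulates the whole flow using a local directory as a stand-in for GitHub. The `gcpsdk` branch name comes from the example above; the repo names and file are placeholders:

```shell
# Simulate the fetch workflow end to end, with a local "remote".
set -e
workdir=$(mktemp -d)
cd "$workdir"

# 1. Create a stand-in "GitHub" repository containing a PR branch, gcpsdk.
git init -q remote-repo
cd remote-repo
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git checkout -q -b gcpsdk
echo "PR work" > doc.md
git add doc.md
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q -m "work on the PR"
git checkout -q -        # back to the default branch
cd ..

# 2. Clone it, as you would clone your fork from GitHub.
git clone -q remote-repo local-clone
cd local-clone

# 3. The key step: copy the PR branch down and switch to it.
git fetch -q origin gcpsdk:gcpsdk
git checkout -q gcpsdk
git log --oneline -1     # shows the PR commit, now available locally
```

The only difference from the real thing is that `origin` points at a local directory instead of GitHub; the `fetch` and `checkout` commands are identical.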
Some notes for those who are interested
The above set of commands assumes that you want the branch name on your local computer to be the same as the branch name on GitHub. That’s most likely to be the case, but you can use a different local branch name if you need to. The command pattern is this:
```shell
git fetch origin your-origin-branch-name:your-local-branch-name
git checkout your-local-branch-name
```
`origin` refers to the remote repository on GitHub from which you cloned your local repository when you first started working on it. You can use the following command to see which remote repositories Git knows about:

```shell
git remote -v
```
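Here’s a small, self-contained demo of inspecting and adding remotes. In a fork workflow it’s also common to add the main repository as a second remote, conventionally named `upstream`; the `main-org` owner name below is a placeholder:

```shell
# Demo: a fresh repo with two remotes, origin (your fork) and upstream.
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo

# "origin" is normally set up for you by git clone; added by hand here.
git remote add origin https://github.com/your-github-username/awesome-repo.git

# Common extra step in a fork workflow: track the main repository too.
git remote add upstream https://github.com/main-org/awesome-repo.git

git remote -v
# origin    https://github.com/your-github-username/awesome-repo.git (fetch)
# origin    https://github.com/your-github-username/awesome-repo.git (push)
# upstream  https://github.com/main-org/awesome-repo.git (fetch)
# upstream  https://github.com/main-org/awesome-repo.git (push)
```

With an `upstream` remote in place, you can fetch branches from the main repository as well as from your fork.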
For the past year I’ve been working with colleagues to create and run Google’s Season of Docs program. It’s super exciting that the first results are now out. There are more results to come in the new year, when the long-running projects finish.
Congratulations to the technical writers and open source mentors who’ve successfully completed their standard-length projects and good luck to those who’re working on the long-running projects. Also a big thank you from me, to everyone who’s taken part in this pilot of the program, including those who had to withdraw from the program for various reasons. It’s been a privilege to receive all the feedback from everyone involved and to learn from the experiences of so many people.
Results for Season of Docs 2019
This year’s Season of Docs included a limited number of technical writing projects, as a pilot to measure how well the program would be received. There are 36 successfully completed projects out of the 41 standard-length projects that finished in December 2019. Eight long-running projects are still in progress, scheduled to finish in February 2020.
The goals of Season of Docs are:
- Bring technical writers and open source projects together to improve open source documentation and thus to contribute to the excellence of open source products.
- Give technical writers around the world the opportunity to work in open source and to learn about open source processes, tools, products, and code.
- Help open source projects understand how technical writers work and what technical writers can contribute to the open source projects.
- Improve the overall experience of contributing to open source, by providing excellent documentation for new contributors.
Season of Docs 2019 participants come from round the world, including all continents except Antarctica.
What are the participants saying about their experiences so far? Here are a few quotations from the blog posts and reports that people have published.
- From technical writer Kartik in a blog post, Experience in Google Season of Docs 2019 with Apache Airflow:
I also started getting invited in the PR reviews of other developers. I am looking forward to more contributions to the project in the coming year.
- From technical writer Elena in the project report, Apache Airflow: Documenting using local development environments:
I’m deeply grateful to my mentor and Airflow developers for their feedback, patience and help while I was breaking through new challenges (I’ve never worked on an open source project before), and for their support of all my ideas! I think a key success factor for any contributor is a responsive, supportive and motivated team, and I was lucky to join such a team for 3 months.
- From technical writer Aaron, quoted in a blog post by open source mentor Olivier Hallot, The LibreOffice Documentation Team Announces the LibreOffice Online Guide:
My experience working on this guide was fantastic, and I would urge anyone interested in getting involved with open source to consider documentation as a first step. The Document Foundation’s documentation team in particular has a very well-established process and infrastructure for producing their products, and one of the only things I can think of that would help them is more volunteers.
- From technical writer Shaloo in a blog post about the GenPipes project, Giving it your all to Documentation!
Unfortunately, most of us who write software, think that documentation means simply translating what the software does in plain English. That is so not true. Documentation is more to do with how to use your cool software and solve real life pain points. Staying in touch with your user needs is of paramount importance, whether it is code that you are writing or the documentation for using the same.
- From technical writer Laurel in a post on the Ensembl blog, Ensembl is part of the first Google Season of Docs!
It has been both fun and interesting to learn about the Ensembl REST API implementation as well as to learn about what people who use the REST API need to get started.
A brand new tutorial is now available for GDevelop, and we would like to thank @end3r for writing it as part of Google #SeasonOfDocs!
Thanks a lot to @felicity_brand for supporting the #OSGeoLive project in #SeasonsofDocs 2019. She did a great job and improved our @osgeolive documentation a lot.
- From technical writer Edidiong in the project report, Modernize (rewrite) the VLC user documentation:
Overall, It was one of the best things that happened to me this year. I have been using VLC for as long as I can remember and the fact that I was able to contribute to the organization is an honor.
- From technical writer Pavithra in the blog post, My GSoD Journey – Part 3:
I like working with the Wikimedia community. I got to interact with some really amazing folks and the overall experience has been wonderful! Even though Season of Docs has officially come to an end, I intend to continue contributing and would welcome interested folks to join us.
- From technical writer Mister Gold in the project writeup, Report for the Bot Docs project:
At the end of the GSoD program, my Rocket.Chat mentors asked me to review a technical writer job posting they’d created. They said that the contribution I made to Rocket.Chat is invaluable and cannot be underestimated. They were so excited and impressed by this project that they have decided to hire a person who will be in charge of all the qualitative documentation updates. Rocket.Chat team confirmed that this is one of the greatest achievements of all the times in Rocket.Chat thanks to my work.
- From technical writer Muhammad in the project report, OpenMRS: Write Code, Save lives!
GSoD not only provided me with an opportunity to learn about open source but also to interact with some of the most wonderful people around the globe working enthusiastically for betterment of society and help create and most importantly maintain a Medical Record System completely free of cost for under-privileged as well as other users.
GSoD not only increased my knowledge about open-source and led me to meet exciting people, it also helped me to further enhance my technical writing and managerial skills…
I would love to continue working for OpenMRS and be an active member of the community.
- From technical writer Areesha in a writeup of the project, OpenELIS documentation for end users:
Google Season of Docs introduced me to open-source community. I realized how open-source projects offer multiple benefits to its users. Open-source systems provide more flexibility, security and transparency to the users as the system is continuously being reviewed and updated…
Writing technical documentation for a system that is implemented at such a wide scale and is a part of a global community made me learn so much about how the technical documentation is done in the real-world at the corporate and professional level.
- From technical writer Anna in a writeup of the project, How did Open Collective’s docs change in three months?
This community has welcomed me with open arms and I couldn’t be more grateful for that. Open Collective has found in me a contributor for life, and I hope to keep contributing for as long as I can.
- From technical writer Anne in the project writeup, The Ultimate Beginner’s Guide to NumPy:
If you’re interested in getting involved with NumPy or Google Season of Docs, I highly recommend it. If you’re new to open source projects, you might have a bit of a learning curve setting up your workstation, becoming familiar with NumPy’s contribution guidelines, and getting to know the ins and outs of .rst and Sphinx, but it’s so worth it. It’s definitely challenging if you haven’t done it before, but keep going. Once you get those details nailed down, you can start having real fun!
- From technical writer Alex in a writeup of the project with International Neuroinformatics Coordinating Facility (INCF), Technical Writer, Google Season of Docs 2019:
I’M STILL SMILING!
It’s well worth taking a look at all the project reports listed on the Season of Docs website, to see the scope of the projects tackled, the challenges that the technical writers faced, and how they overcame them. Every story is different!
Please do let me know of any posts I’ve missed, or if you’d like to add any experiences of your own. Add a comment on this post so that others can read it too.
Opportunities to contribute to open source
If you’re looking for ways to contribute to open source, opportunities abound.
A note to avoid potential confusion: Contributing to these projects would be outside of the Season of Docs program, unless you’re already officially signed up to take part in the relevant long-running project in Season of Docs 2019, or you’re accepted into a future Season of Docs program in 2020 or beyond.
Some suggestions for finding a project to contribute to:
- In their project reports listed on the Season of Docs website, many of the technical writers describe followup work to be done.
- The open source organisations published detailed lists of ideas in their original proposals for Season of Docs. Many of these ideas are not yet addressed, because Season of Docs is only three months’ worth of writing!
There are more Season of Docs 2019 results to come
To the technical writers and mentors working on the long-running projects for Season of Docs 2019: Have fun, best of luck with your projects, and I’m very much looking forward to seeing the results in February!
To add to the celebratory nature of this post, here’s a picture of a tea tree flower spray which I came across while walking in the Australian bush:
The first Kubeflow doc sprint took place in July this year. It was so good that we’re going to do it again in February 2020!
This time round, our focus is on use cases. A few weeks ago I wrote about our doc analytics, and in particular how the “use cases” section had jumped into the top ten most viewed areas of the docs. When arriving at the use-case docs, though, people are disappointed in what they find. We need to improve that.
There’ll also be some juicy bugs to dig into. We’re preparing our list of docs to write/fix in the doc sprint Kanban board.
Dates: Monday, February 10, to Wednesday, February 12, 2020.
Location: Sunnyvale, CA, or remotely from anywhere in the world.
To find out more about the upcoming doc sprint, read the announcement on the Kubeflow blog.
Over the past few months, I’ve been delving into analytics and feedback on the doc site that I currently manage. I’m crafting strategies as I go, and creating reports for product stakeholders to get their input too. I hope some of the strategies described in this post may be useful or at least interesting to other people who’re looking into how to use analytics.
Note: Although I work at Google, this post does not constitute any recommendations on the use of any Google product. I’m a technical writer, and I’m using analytics and feedback in the same way other tech writers do, to gain insights into the doc set that I manage. I am by no means an expert on analytics.
Let’s get some technical details out of the way first. The doc site under discussion is kubeflow.org, which hosts the documentation for an open source machine learning platform called Kubeflow. The documentation is also open source. The source for the docs lives on GitHub.
I’m using Google Analytics to see the doc usage stats. The Kubeflow doc site is fairly new. I enabled Google Analytics and the feedback widget on February 27, 2019, which means that the stats start from that date.
To gather user ratings on the doc pages, I’m using the feedback widget that’s available with the Docsy theme. The Kubeflow website uses Docsy and Hugo. If you’re interested in the details of the website tooling, take a look at the website README.
Goals for the analytics reports
The Kubeflow community and I are interested to see how people are using the docs. A high percentage of page views in a particular area can indicate a high level of interest in the related product features, or can point to an area of the product where people need more help than in other areas.
From a docs point of view, my goal is to identify the top priority docs for improvement, and to get some direction on the types of improvements we need to make. For example, if people are particularly interested in an area of the docs, and at the same time are not satisfied with the information they find there, then that area of the docs is high priority for improvement.
Overall site views
I started by looking at the number of website views from March (when Google Analytics became available on the site) to November (now). The number of views per month has more than doubled in that time, from 104,000+ to 220,000+. It’s good to know our reader base is increasing.
I looked at the pages with highest number of views across the site as a whole, and also within a few high-priority sections of the docs.
The period for these stats is two months, from September 1 to November 1. Our previous report was in July. I didn’t include August in the stats, because we did some information architecture refactoring in August and moved many pages around. Moving pages affects the Google Analytics stats, which makes August a bad month to use for assessments in this case.
The top entry, “/”, denotes the Kubeflow website home page: https://www.kubeflow.org/. This page consistently receives the highest number of views.
As in previous reports, the second-most viewed page is the main Getting Started guide. It’s linked from the website home page. Other getting-started guides rank highly too.
Also as in previous reports, the third-most viewed page is About Kubeflow. It’s linked from the top-level menu bar with text “What is Kubeflow”.
In a change from previous reports, the Use Cases section has replaced components and notebooks in the list of 10 most-viewed sections. I should start paying attention to this section.
Other pages in the top 10 are the same as in previous reports: the docs index page and the pipelines section.
Strategies for most-viewed sections and pages
My overall strategy for the top-viewed pages is to spend time perfecting the user experience on those pages, addressing any issues, and making sure people find the information they need:
- Improve the textual and visual content of the most-viewed pages. For example, we recently ran a doc sprint in which we spent considerable time restructuring and rewriting the website home page, which is the most highly viewed page in the doc set. Feedback on the new design and content is good.
- Link from the most-viewed pages to content deeper in the site, to ensure people find all the information they need. For example, we recently rewrote the “About Kubeflow” page and added links down into relevant content on the site.
- Examine the bounce rate and time on page, to see how people are using the page.
- Examine feedback, to see whether people are finding the content useful.
Getting feedback from readers
Every page on the Kubeflow website has a feedback option. The option asks “Was this page helpful? Yes / No”.
- About Kubeflow received the most feedback, and 24 of 28 responses (85.7%) were positive. That’s an improvement from the July analysis, which showed 70% positive.
- Getting Started received the second-most feedback, and 11 of 15 responses (73%) were positive. That’s exactly the same as in July.
It’s worth noting that the number of feedback responses is very low in comparison with the number of page views. Also, people are more likely to respond with negative feedback than with positive. Even so, the feedback is useful, particularly when it’s strongly positive or negative, and if the ratios of positive to negative change after we’ve updated the content.
Deep dive into specific sections and pages
Based on the above statistics and feedback results, I examined some specific pages in greater detail.
The next screenshot shows the 10 most-viewed pages within the getting-started section. We reorganized this section significantly in August. It’s useful to see which getting-started experiences are the most often viewed, in the period since that significant refactoring.
The guide to deploying Kubeflow on an existing Kubernetes cluster (roughly equivalent to on-premises installation) has most views. The workstation installation guides come next, followed by deployment to a cloud.
The following stats are for the Getting Started page, which introduces the getting-started section:
Looking at the information for this getting-started overview page in detail:
- The page has the second-highest number of page views in the entire doc set (the top-level kubeflow.org page has highest).
- Bounce rate* has continued dropping, from 56% in April to 44.15% in July, to 39.6% now. That’s a great improvement. Our goal was to see it drop below 40% – goal achieved!
- Time on page is 1 minute 7 seconds. That’s fine. There’s no need for people to spend longer on the page, because this is an overview page and the meaty content is in sub-pages.
- The getting-started overview page has received the second-highest amount of feedback of all pages on the site, and 11 of 15 responses (73.3%) were positive. That’s exactly the same as in July.
- Overall, the getting-started pages continue to receive low ratings.
* The bounce rate for a page is the percentage of user sessions that started and ended with that page. So, people entered the site on that page, and left without viewing any other pages. I’ve seen guidelines indicating that, as a general rule, we should avoid a bounce rate higher than 70%. If many people visit a page but leave immediately, this may indicate that the page isn’t giving them what they need, and so they leave the site. (It does depend on the type of page. The purpose of some pages is exactly to send people elsewhere.)
We need to improve the content of the getting-started section so that it better meets the readers’ expectations. One tactic I hope to follow, if I can get time from a UX research team, is to test the pages with some specific groups of users. In addition, I’ve already seen feedback from customer issues that people are looking for a single, recommended flow for getting started quickly. Currently the docs offer all the options, but don’t give much guidance on where to start.
Next up is the About Kubeflow page:
Looking at the About Kubeflow page in detail:
- It’s the third-most highly viewed page on the website.
- In previous reports, bounce rate came down from 63% in April to 60.4% in July. Bounce rate has now gone up again to 62%. We need to lower the bounce rate, as this page is a highly-viewed page and we want to draw people deeper into the site. I’m working on a new Kubeflow overview (pull request #1339). When that new page is available, I’ll link to it from the About Kubeflow page, and then re-assess the bounce rate.
- Average time on page is two minutes. That’s good for an overview page. People are engaged in the content.
- The page has received the most feedback of all pages on the site, and 24 of 28 responses (85.7%) were positive. That’s an improvement on July (70% positive).
We refactored the page in June to provide more information and links. I hope to improve the positivity still further by linking to the new Kubeflow overview mentioned above.
What about the Use Cases section, which has recently made it into the top 10 most highly viewed sections?
- It’s interesting to see a set of guides arrive in the top 10 most highly-viewed pages for the first time. This change potentially indicates that our audience is maturing and looking for more in-depth, use-case-focused docs. The product (Kubeflow) is relatively new, and is currently working towards a v1.0 launch in 2020. Up to now, perhaps most people have been focused on getting the product up and running and trying the simple use cases provided in the getting-started section. Now maybe they need more in-depth use cases.
- The feedback ratings on this section are low. We need to make sure people get what they’re looking for.
- One action I’m considering is to adjust the information architecture to reflect what people are probably looking for. At least in the short term, I could rename the section, as it describes highly specific ways of using the product, rather than the more generic use case information that people may be looking for. Alternatively, I could move the content into another section, such as the “further setup and troubleshooting” section.
- Then, when we have more bandwidth and have had time to do more research, we should flesh out the section with more use cases. We do already have some good examples and tutorials, which we can include in this section.
Open source contributors to the docs
Moving from Google Analytics to GitHub stats for the doc repository, it’s interesting to see the fluctuation in the number of contributors to the docs. It’s not just me writing the docs!
The following events influenced the contributor numbers:
- We ran a community-wide Kubeflow doc sprint in July. Contributions increased significantly during that period, and stayed high for a while afterwards.
- Contributions picked up towards the Kubeflow v0.7 release, which happened in early November.
- In mid November, we ran a doc fixit for external tech writers at the Write the Docs conference in Australia. That fixit caused the large spike at the right-hand edge of the graph.
We need to run more doc sprints and fixits!
A product stakeholder requested information about the sources of website traffic. I haven’t yet figured out any related strategies.
In the period August 1 – November 1, 2019, close to 60% of the website traffic came from organic search. Referrals accounted for 22%.
I looked at the referrals, and found that the largest percentage (29%) of referrals come from GitHub. This is not surprising, given that the source code for the product is also on GitHub. The next-largest percentage is 8.5%, coming from a related doc site.
The top 10 search terms are primarily variations of the product name, “Kubeflow”, with one outlier: “minikf” at number 4. MiniKF is a deployment tool for Kubeflow.
More analytics tips?
If you have any analytics tips or experiences to share, I’d love to hear them. Links are welcome!
Sydney hosted the annual conference of Write the Docs Australia this week. As part of the conference, I ran a Tech Writing 101 workshop. It was a very rewarding experience. If you ever consider running a conference or workshop for a group of technical writers, do it! Tech writers are an engaged, humour-loving, smart group of people.
The workshop teaches the principles and patterns of effective technical writing. Before the event, the participants do some pre-reading. Then, during the two-hour workshop, we do a series of exercises and discussions based on the principles in the pre-reading. This is a good way of cementing the patterns into your brain. The next time you write an overlong sentence, or use the passive voice, you’re likely to recognise the anti-pattern and do something to correct it.
We had around 45 participants at the workshop during the conference. Here’s a shot of the room during the workshop. At this stage, the participants had just finished one of the exercises and were discussing their solutions with their partners:
Three assistants helped with running the workshop. They walked around the room answering questions, assisting with logistics, and generally making sure everyone had a good experience. A big thank you to:
The Tech Writing 101 workshop was developed by tech writers at Google to train engineers and others in the principles of effective technical writing. Google is currently preparing a revised, improved set of pre-reading and presentation content, which will be available for tech writers all over the world who want to run the workshop. Stay tuned for news on this front.
What else happened at the conference?
Write the Docs Australia 2019 was jam-packed with talks, workshops, lightning talks, and unconferences. Take a look at the full program.
Here’s the Twitter hashtag: #wtdau2019.
Thanks so much to all the organisers and attendees. Write the Docs AU 2019 was awesome. See you at Write the Docs AU 2020!