Have you ever written a security advisory? Be ready to write the most fraught and demanding document of your life! I’ve been writing them off and on for three years now. The first was a huge challenge. Since then, each one has been interesting in its own way and I’ve learned an enormous amount.
If you’re as lucky as I was, you’ll have an expert to help you with the first few security advisories you have to write. Dave O’Flynn knows a lot about application security. He spent a fair bit of time explaining severity levels, designing our security advisories and walking me through the accepted and expected ways of communicating vulnerabilities and fixes to customers. Thank you Dave!
So I decided to share my experiences by writing this blog post. It’s been brewing for months. My inbox and my dressing-table have been collecting notes of all shapes and sizes. It’s going to be a long one, and a goodly mix of opinion and fact. Grab a cuppa. I hope you enjoy it. 🙂
Please note that the opinions here are my own, and not necessarily shared by the company I work for. Comments welcome!
What is a security advisory?
It’s a document letting customers know that you have found a security vulnerability in your product or that someone else has found it for you. It also tells customers how to fix the problem. A security vulnerability, also called a security flaw or security hole, is a bug in the application that malicious users (hackers) could exploit to gain access to your customers’ data or network. The best known and most-frequently occurring security vulnerabilities in web applications are cross-site scripting (XSS) flaws.
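To make the XSS idea concrete, here is a minimal sketch in Python of a reflected XSS flaw and the standard escaping fix. The page and function names are hypothetical illustrations, not taken from any real advisory:

```python
import html

def render_results_unsafe(query: str) -> str:
    # Vulnerable: the user-supplied query goes into the page verbatim,
    # so a query containing a <script> tag executes in the victim's browser.
    return f"<p>Results for {query}</p>"

def render_results_safe(query: str) -> str:
    # Fixed: HTML-escape user input before it reaches the page, so the
    # payload is displayed as harmless text instead of being executed.
    return f"<p>Results for {html.escape(query)}</p>"

payload = "<script>steal(document.cookie)</script>"
print(render_results_unsafe(payload))  # the script tag survives intact
print(render_results_safe(payload))    # the script tag is rendered inert
```

The fix is the same principle whatever the framework: treat everything a user sends you as data, never as markup.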
In most cases, you would issue the security advisory at the same time as you provide the patch or upgrade that fixes the flaw. If necessary, you may issue the security advisory even before the patch is ready, and tell people how to minimise or prevent their exposure to attack until the fix is ready.
Here’s an example of the most recent security advisory I’ve written. It announces a number of flaws found and fixed, all XSS flaws. Here’s another security advisory, with a number of different types of vulnerability in one single advisory. This third security advisory is much simpler, describing just a single flaw.
Why do I say “fraught”?
People are going to lose some of that warm fuzzy feeling about your application when you issue a security advisory. That’s especially true of customers who haven’t been aware of security holes in other applications up to now, and who are not used to the advisory and patch procedures. As technical writers, it’s our job to help customers feel safe by letting them know the company takes security seriously. In a nutshell, we want to tell them:
Someone finds a hole, we fix it, and we let you know so that you can upgrade or patch your systems before any hacker can take advantage of the flaw.
What other aims do we have? Ah, well, this is where things get complicated.
On the one hand we want to tell our customers as much as possible, so that they can make their decisions about whether and when to upgrade or patch their systems. What type of security flaw are we announcing and how can they fix it? Are they vulnerable to attack if their site is open to staff only, for example? What about if it’s behind a firewall? What is the worst case scenario if they are attacked?
On the other hand, we want to protect people from hackers. If we publish too much information about a vulnerability, that may be an invitation to a hacker to jump in before customers have time to upgrade.
People have, ahem, strong feelings on this topic. You’ll probably find polarised opinions even within your own company, let alone amongst your customers. It’s fraught, uh, fun… well, let’s say it’s a very interesting path you’ll take to determine just how much detail to reveal in your security advisory and when to reveal it.
Personally, my thoughts are:
- Tell everyone.
- Tell them everything except the exact details of the attack vector. In other words, tell them the severity level of the flaw and which areas of the application are vulnerable, but not exactly how to script an attack.
- Tell them as soon as you have a fix available. Moreover, if the vulnerability has already been exploited, meaning that someone has already hacked the software in anger, then tell everyone immediately.
- If possible, give people advance warning a week or so before the actual advisory and patch are released, letting them know that a security advisory is in the offing. They will be able to set aside time to upgrade or patch their systems when the details are released.
People can then make decisions about limiting access to their sites, blocking all but known IP addresses, or even taking the site down until they can apply the fix.
Because guess what, hackers share information behind the scenes. They can be malicious or just ignorantly mischievous, and they will not wait politely for us or our customers to get our ducks in a row.
Why do I say “demanding”?
A security advisory is also one of the most work-intensive and time-consuming documents you will ever write. There are few words in a security advisory, but each word carries a lot of weight and consequences for your customers. What’s more, everyone in the company has an opinion on each word. And on top of that, as the technical writer you’ll probably find that you have to wring each detail out of the development team and cajole them into doing extra work.
Your internal procedures may say that it’s the development team’s responsibility to decide the severity level of a flaw, and the product manager’s responsibility to determine the patch policy. But when push comes to shove, it’s the technical writer who ensures that every detail of the security advisory is complete and correct.
You need to know exactly which versions of the product are vulnerable to the security flaw. This could mean delving into the code going years back.
You need to know the exact details of the patches available, which versions of the product they will fix, where people can get them and how they can install them. For the development team, it’s very time-consuming to create the patches to fix earlier releases for those customers who can’t upgrade. Be sure to start asking for the details early. You’ll find that your questions drive the development team to start looking at patches early enough to meet the release deadline.
You need to know the exact details of the flaw, so that you can be comfortable with the severity level assigned to each one. Often the bugs are highly technical. Be sure to get at least two opinions on the severity level. In many cases, the severity level determines your communication policy. For example, you may issue advance warnings for “critical” severity levels only, or you may build patches for “critical” and “high” severity levels only.
The content of a security advisory
Dave and I based the design of the security advisories on those by other well-known and respected web development companies and security-focused sites. We came up with the following sections for each vulnerability:
- Vulnerability Type
- Severity Level
- Risk Assessment
- Vulnerability
- Risk Mitigation
- Fix
- Acknowledgement of the reporter
The vulnerability type is a well-recognised classification of the vulnerability. Examples are XSS, cross-site request forgery (XSRF) and privilege escalation.
The severity level is a rating of the potential impact and ease of exploitation of the vulnerability, based on our published criteria, allowing us to rank the severity as critical, high, moderate or low.
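I won’t reproduce the published criteria here, but as a purely hypothetical sketch, a severity rating of this kind boils down to a lookup of potential impact against ease of exploitation. The category names below are invented for illustration:

```python
# Hypothetical severity matrix: the key is (potential impact, ease of
# exploitation). Real published criteria will differ; this only illustrates
# the idea of ranking severity as critical, high, moderate or low.
SEVERITY = {
    ("severe",  "easy"):      "critical",
    ("severe",  "difficult"): "high",
    ("limited", "easy"):      "moderate",
    ("limited", "difficult"): "low",
}

def rate(impact: str, ease: str) -> str:
    """Look up the severity level for a given impact/exploitability pair."""
    return SEVERITY[(impact, ease)]

print(rate("severe", "easy"))  # critical
```

The point of publishing the criteria is that customers can see the rating wasn’t plucked out of the air.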
The risk assessment includes information about the type of data a hacker might be able to access, the worst case scenario, and the kind of deployment scenario or security setup which is at risk. The purpose of this section is to give our customers enough information to make their own decisions about when to upgrade, whether to apply a patch, and whether to shut down public access to their data until the fix is implemented.
The vulnerability section describes the location or area of the application that is affected by the vulnerability, such as the affected macro, URL or screen. It also says which versions of the application are vulnerable, and links to the JIRA issue where the bug is tracked.
The risk mitigation section tells customers what they can do to minimise their exposure to malicious attack, especially if they are not in a position to upgrade immediately.
The fix section tells customers what they can do to fix the problem permanently. Usually, this will be a recommendation to upgrade to the latest version of the application. In addition, we describe how to apply patches if supplied.
We also acknowledge the reporter, that is, the person who reported the vulnerability, if it was someone outside the company and only if the reporter has indicated that they would like to be acknowledged.
For convenience, here are the links to some of our recent security advisories again. Here’s the most recent security advisory I’ve written. It announces a number of flaws found and fixed, all XSS flaws. Here’s another security advisory, with a number of different types of vulnerability in one single advisory. This third security advisory is much simpler, describing just a single flaw.
Technical and product management review
Give the developers and product managers time to stew. 🙂
Heh, as technical writers we’re very aware of the importance of the review phase for a technical document. For a security advisory, I’ve found that this phase is even more crucial. It’s a good plan to set aside a week or so. Technically, security bugs can be complex. From a product management point of view, they can be even more complex. I’ve found that if I start badgering people for a review early, it gives them the time they need to let their ideas and concerns stew, to consider and re-consider all aspects of the security fix and of the security advisory.
As is so often true in technical writing, you’ll find that your questions prompt the development team to look at the bug (the security flaw, in this case) and the fix in a new light. Your concerns about crafting a good advisory will also prompt product management to think in different ways about the impact on the customers. Getting started early will help the development team meet their release deadline, as well as being essential for you to meet yours.
So, if you get started early, that will reduce the “fraughtness”? Heh again. No, probably not, because your questions will lead to changes, and you’ll need to keep adjusting the advisory and other communications until everything is just right. But at least you’ll feel that you’re on top of and driving the fraughtness rather than swamped and driven by it.
Advance warnings
An “advance warning” is a message sent to customers warning them that a security advisory is in the offing. Not all companies issue advance warnings. The policy may be to issue an advance warning for any upcoming security advisory, or for critical vulnerabilities only, or not at all.
If you do issue an advance warning, you would publish it a while before the actual security advisory itself is released. We’ve decided that a week is a good period. It gives system administrators time to decide whether they need to do an upgrade or patch and to schedule the downtime if necessary.
The wording and content of the advance warning need careful consideration, just like the security advisory itself. Our aim is to inform and protect our customers. We need to give them enough information to make their decisions, but we must not give hackers enough information to track down the vulnerability itself before we issue the fix.
What you might put into an advance warning:
- The approximate date on which you plan to release the security advisory and fix, such as: “currently scheduled for release in the middle of next week”.
- The type of security flaw you will be announcing, its severity level and a generic risk assessment.
- Advice on how to prepare for the security advisory and fix.
- Advice on mitigation strategies.
Here’s a recent example of an advance warning.
Known communication channels
We need to tell people what our policy is, both the policy on security advisories and the policy on advance warnings. That way, system administrators know what they’ll be told and when they’ll be told it, and they can base their planning on that knowledge. People also need to know where the security advisories will appear, such as in the discussion forums, on an email list or on a wiki page that people can subscribe to.
We publish our security policies as part of the technical documentation on the wiki. For example, the security advisory publishing policy, the security patch policy and the way we determine severity levels.
Zero-day attacks
“Zero-day attacks” are a special case, and they affect both your disclosure policy and your patch policy. They’re also sometimes called “zero-hour” or “day zero” attacks.
A zero-day attack exploits a previously unknown and unpublicised vulnerability. It’s called “zero-day” because the attack happens before the application developer even knows the vulnerability exists, so the developer has had zero days to fix it.
How does this affect your communication policy? If you suffer a zero-day attack, the common understanding is that you will announce the vulnerability as soon as you hear it has been exploited, even if you do not yet have a fix for it. If you do not announce it, you run the risk that your customers will be attacked by hackers, who are sharing the information about the vulnerability, and the customers will not have a chance to protect themselves.
Here are a couple of examples:
- The Adobe announcement of a zero-day attack on 4 June 2010: Security Advisory for Flash Player, Adobe Reader and Acrobat
- The Atlassian announcement of a security breach on 13 April 2010: Oh man, what a day! An update on our security breach and the subsequent security advisory.
Some interesting sites
- Web Application Exploits and Defenses, a codelab from Google Code University. The codelab is based on an example application called Gruyere, “a small, cheesy web application that allows its users to publish snippets of text and store assorted files”. Gruyere is rife with security holes, including XSS and XSRF vulnerabilities. You get to be the hacker. The codelab gives you your own instance of Gruyere, and a set of tutorials to help you find the security vulnerabilities. It demonstrates the principles behind the various types of vulnerability. I’m told it should take approximately 4 hours to complete the set of exercises. I haven’t done it myself.
- ZDNet‘s Zero Day blog with the latest news in security-related research, vulnerabilities, threats and attacks.
- The Cross-Site Scripting (XSS) FAQ at cgisecurity.com.
- CERT’s advisory on Malicious HTML Tags Embedded in Client Web Requests.
- The Alphabet Soup of Software Security Guidelines, an informative and entertaining post by Todd Landry on klocTalk.
Fraught and demanding? Yes, but…
I hope this post may be useful to someone who is in the enviable position of having to write a security advisory. Yes, I really do mean “enviable”. Though they may be fraught and demanding, they’re also one of the most interesting documents you’ll ever write. 🙂 I’d love to hear your experiences of writing such documents, and your ideas and opinions on what they should reveal. How much influence does the technical writer have over the procedure and the content? In my experience, a lot, because we quickly become the ones who know most about it!
One of the best things about technical writing is the variety it offers: who you work with; the style of writing required; the type of products you document; your input into and impact upon the products themselves; the medium you use, and so on. One of the products I work on is highly technical, and the documentation is funky that way too. The product is Crowd. If you find authentication, authorisation, single sign-on and user management sexy, then Crowd is for you. And the documentation would be your choice of bed-time reading.
This week we released Crowd 1.2, a major release with lots of new features. So we published many new and revised documents too.
Documenting this type of product is interesting. On the one hand, you get a kind of glow from belonging to the elite group of people who understand words like ‘Acegi’ and ‘OpenId’. You have the chance to indulge your natural inclination for long words and other esoterica. You may notice one or two creeping into this blog post 😉 On the other hand, in the documents themselves the trick is to know when to explain something and when not. After all, we have a savvy readership. I try to keep explanations short. Often all you need is a link to an authoritative website and an expansion of an acronym on first use.
Gourmet’s guide to a technical document: Mix the dry ingredients. Sprinkle in the acronyms. Pour the open source over the top.
Another interesting thing about this type of documentation is that the developers write much of the content themselves. As technical writer, I guide and tweak the content. When I can, I’m keen to jump in and test-drive the integrations myself before documenting them. But sometimes that’s not practical or efficient.
Wikis are great for collaborating like this. I might kick off a document by creating the skeleton structure. Then the developer writes the first draft. I jump in and tweak some things. The development lead and another technical writer do an in-depth review. And there you have it, a document to suit the epicurean taste.
Sometimes, you can even make the document look pretty:
Ain’t that just copacetic?
Have you noticed that your brain starts shooting off at a tangent when you’re writing dry technical kind of stuff? I had to explain ‘Acegi’ in the Crowd release notes:
Here’s what my grey matter kept insisting was relevant:
Spring has sprung, da grass is riz
I wonder where da boidies iz
Da boidies is on da wing
Don’t be absoid
Da wings is on da boid
That’s a jingle that my dad recited to me when I was just a kid. It’s spoken with a cockney or Bronx accent. Evidently the verse’s origins are obscure, though it’s often quoted, and sometimes cited as an anonymous work called ‘The Budding Bronx’. Isn’t it weird how such things stick in your head? A blog is a great place to get rid of such insistent promptings from the subconscious. The jingle would probably, though not indubitably, be considered out of place in the Crowd docs.