Team content audits: Creating a structured plan

A 5-minute read, written by Tracey on October 22, 2015


Content audits. On large sites, they can be long, painful, and require a lot (a lot) of coffee — but they’re also invaluable when it comes to our content-first approach to responsive web design.

Recently, we kicked off an audit for an enterprise-level site and wanted to divide the work between multiple team members to cover more ground with the time we had. My only concern with this approach was that without some sort of structure, we’d end up with disparate observations and findings that were all over the map.

So I came up with a more structured plan to do the following:

  • Allow multiple team members to divide the work, with a shared understanding of our project and evaluation goals.
  • Determine structured evaluation criteria that we could share and use for a more targeted, consistent assessment of the content.
  • Define these criteria based on the business needs of the project.

Here’s a quick walkthrough of what we did.

Defining research questions

To start, we created a list of specific research questions that we could use as heuristics to assess the content. We defined these questions based on business needs we’d learned during discovery research, as well as established best practices. (For heuristics focused on content and information architecture, Abby Covert’s Information Architecture Heuristics is a good starting point.)

The list of questions we came up with is definitely not comprehensive. You could define research questions based on whatever project priorities you might have: Are voice and tone consistent? Does your content follow your messaging architecture? Create research questions based on whatever you’ve defined as key areas for evaluation and improvement.

Next, we grouped our questions by theme and developed the following baseline:

Reading experience

These questions looked at the reading experience on a page-by-page basis.

  • Is content concise and well written?
  • Is page content structured using headings, keywords, bulleted lists, and other elements that will help users scan for information they need?
  • Is content formatted consistently, using clear patterns across the site?

Navigation

Here, we wanted to evaluate elements that allow users to navigate between pages and look for specific information.

  • Does the navigation structure support users with different information-seeking behaviours? We wanted to examine whether content organization supported different ways of looking for information, including known-item searching, exploratory browsing, unknown information finding, and refinding.
  • Is user-centered language used for page titles and labelling?
  • Is the navigation structure task-oriented? That is, is it structured to support users in completing key tasks?
  • Do page titles, links, and navigation elements display good information scent? By “information scent,” we mean the extent to which users can predict what they will find if they follow a link or specific navigation path.

ROT: Redundant, outdated, and trivial content

  • What’s redundant? Duplicated content leads to a bloated site, and makes it harder to keep content consistent and current. (A quick programmatic first pass for spotting exact duplicates is sketched after this list.)
  • What’s outdated?
  • What’s trivial? This can include content that doesn’t relate to a user’s task, or other content that might be considered superfluous.
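A script can’t judge what’s trivial or out of date, but it can give you a quick first pass at exact duplicates before the human review starts. Here’s a minimal sketch, assuming a small list of placeholder URLs; it only flags pages whose text is identical after stripping markup, so treat it as a prompt for review, not a verdict.

```python
import hashlib
import re
import urllib.request

# Hypothetical URLs; in practice, feed in the pages from your audit spreadsheet.
pages = [
    "https://example.com/about/history",
    "https://example.com/news/our-history",
]

seen = {}
for url in pages:
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="ignore")
    # Crude normalization: strip tags and collapse whitespace before hashing.
    text = re.sub(r"<[^>]+>", " ", html)
    text = re.sub(r"\s+", " ", text).strip().lower()
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest in seen:
        print(f"Possible duplicate: {url} matches {seen[digest]}")
    else:
        seen[digest] = url
```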

After we developed these research questions, we turned to every content strategist’s best friend: the spreadsheet. We created a tab for each top-level navigation item, added standard columns for page identifier number, title, and URL, and then a column for each of the above themes to accommodate targeted notes on each.
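If setting up those tabs by hand feels tedious, a few lines of script can stamp out the same template. This is just a sketch: the section names below are made up, and it writes one CSV file per section as a stand-in for one spreadsheet tab each.

```python
import csv

# Hypothetical top-level navigation items; swap in your own site's sections.
sections = ["About", "Programs", "Services", "News"]

# Standard columns, plus one column per audit theme for targeted notes.
columns = ["Page ID", "Title", "URL", "Reading experience", "Navigation", "ROT"]

# Write one CSV per section as a stand-in for one spreadsheet tab per section.
for section in sections:
    with open(f"audit-{section.lower()}.csv", "w", newline="") as f:
        csv.writer(f).writerow(columns)
```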

And finally, we divided up the sections and dove in.

Analyzing the results

Even with multiple people auditing, we didn’t have the staff resources to look at every page within our time frame. Instead, we reviewed a sample of pages, going only to a certain depth in each section.
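How you pick that sample is up to you. If your site publishes a sitemap, one option is to group URLs by how deep they sit in the path and draw a random handful from each level. Here’s a rough sketch, assuming a standard sitemap.xml at a placeholder address:

```python
import random
import urllib.request
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SITEMAP_URL = "https://example.com/sitemap.xml"  # placeholder; point at your own sitemap
MAX_DEPTH = 3          # e.g. /section/sub-section/page
SAMPLE_PER_DEPTH = 10  # pages to audit at each level

with urllib.request.urlopen(SITEMAP_URL) as response:
    tree = ET.parse(response)

ns = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
urls = [loc.text for loc in tree.findall(".//sm:loc", ns)]

# Group URLs by path depth, then draw a random sample from each level.
by_depth = {}
for url in urls:
    depth = len([part for part in urlparse(url).path.split("/") if part])
    if depth <= MAX_DEPTH:
        by_depth.setdefault(depth, []).append(url)

for depth, pages in sorted(by_depth.items()):
    for url in random.sample(pages, min(SAMPLE_PER_DEPTH, len(pages))):
        print(depth, url)
```

However you sample, note which pages you skipped so the team knows how much of the site the audit actually covered.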

Once we’d each reviewed our sections, we held a brainstorming session to analyze our findings:

Sticky notes ♥ ♥

(But of course you knew this was going to involve sticky notes, didn’t you?)

  • Using our trusty chalkboard wall, we started by writing headings for each of our main audit themes: reading experience, navigation, and ROT.
  • Then, we went through our spreadsheets and pulled out issues, writing down one per sticky note. If it was a really good example of a particular issue, we’d add the page identifier number, making it easier to come back later and gather examples.
  • As we stuck each note to the wall under the appropriate heading, we took turns explaining each issue and discussing it as a group. Before long we started seeing larger problems emerge, and started collecting stickies into like-groupings.

Problem statements

  • With our groups in place, we used chalk to circle each group and start jotting down problem statements and descriptions. The more sticky notes a group had, the bigger the problem.

Putting it all together

Armed with our chalkboard covered in sticky notes and scribbles, we had a great jumping-off point. From here, we could easily create:

  • Targeted recommendations for improving content: Rather than a long-winded summary report, we were able to provide a series of specific content issues based on our problem groupings, an explanation of why each was a problem, and targeted recommendations for how to fix that issue.
  • Writing for the web training and a style guide for authors: We now have a clear idea of some of the biggest problems with content, so we know where to focus to help authors improve in content creation and maintenance. We’re also stocked with a ton of examples of content that could be improved — and ready to put it in front of content creators to collaboratively figure out how to solve those problems.
  • Recommendations on ROT content to edit or remove: A detailed look at site content really highlights redundant content, the old forgotten stuff that’s lurking in dusty corners, and trivial content that doesn’t map to user tasks or isn’t getting any visits. From here, we can flag that content to be reviewed, edited, or removed.

End scene

And that about sums it up!

Using this more structured approach, we came up with more issues and better examples than we would have if we’d taken the more organic, unstructured path that a lot of content audits follow.

So next time you’re starting a big unwieldy content audit, try it out:

  • Define research questions that will guide your assessment of the content. These research questions gave us a baseline against which we could examine the content — the process ended up being somewhere between a content audit and a heuristic evaluation of both the content and the information architecture. Again, this approach helped us surface issues that we might otherwise have missed.
  • Do the sticky notes exercise! Get out of that spreadsheet (I know it’s hard. I love them too.), and start writing these things down on stickies, getting them up on a wall, moving them around, and talking them out. You’ll have animated discussions filled with great ideas — I promise.

And if you have any questions, feel free to give me a shout.