Marriages – when two become one

We’ve published 2 new pieces on Visual.ONS today that push our beta site into new storytelling and data visualisation territory. These 2 pieces are 2 sides of the same coin – 2 approaches to the same data set.

But why 2 pieces? Herein lies the tale…

Visual was set up last year to help cast new light on existing data, to act as an editorial and data lab to learn lessons about how we can get you engaged in statistics, and make better decisions based on them.

Today we’re experimenting back in the lab with 2 very different approaches to turning a fascinating set of data into (what we hope is) engaging content.

We’ve been looking at trends in England and Wales marriage data over time; essentially, when people choose to get married and how this has changed over the last 60 years.

We noticed a pattern in the data, and felt there was an opportunity to let people explore that data and understand some interesting societal shifts.

Whenever we start a project, we are looking to tell the story in the format that best suits the messages in the data and the intended audience. We want to create an engaging experience, likely to encourage exploration. We like to develop and use formats that can be reused for different sets of data as it’s not possible for us to continuously create bespoke experiences.

Our data visualisation specialist and designer Paul presented an approach that used wedding rings as his design inspiration; he used radar charts and heat maps to give users an elegant overview of all the data, while highlighting certain patterns within it. We wanted to offer people not just an overview of the data at a distance, but also a personal experience, letting them explore the data for themselves and see themselves, their parents or friends in the data. How do they compare?

Save the data – Scrolling through data

There’s been a surge in the use of vertical scrolling as a simple but effective user interaction model in the last few years. Driven by mobile usage, and by time-poor users, so-called “scrollytelling” can be a great way to move people through an experience with the minimum amount of friction.

The approach was to let users simply scroll through the data, and the various stages of the data-driven storytelling. We felt this was a format that had great re-use potential.

The challenge we faced was how to incorporate a narrative that could tell the story from more of a social history perspective and help pull the users through the interactive.

We appreciate that people are different: some prefer a visual representation of a story and would relish scrolling through the data while others respond best to written text and might prefer the opportunity to have an alternative take on the data, with visual prompts and more emotional storytelling.

What do you see when you think about marriages through the decades? You might think of black and white photographs with curled edges in shoeboxes, the faded primary colour emulsions of Polaroid shots, and family members with gaudy clothes and questionable haircuts.

How could we capture that emotional history in a data visualisation? And should we even try?


The Office for National Statistics isn’t a natural home for storytelling, but we felt there were lessons for us to learn in how we could make our data more accessible, more relevant to people.

But when we discussed adding more editorial narrative, and even archive photos, to the existing interactive there was a tension between the simplicity and focus of the data visualisation and the potential distraction of adding such detail, background and texture to the story. There was also the not insignificant challenge of responsive design – how would you fit this all on the same screen?

We didn’t want to undermine the purity of the interactive, but knew there was an additional opportunity too good to miss. In some ways it felt like a tension between showing and telling: if we were showing, did we need to do more telling? If we were telling people, were we also able to show?

There was also a long and involved discussion about the levels of visual literacy in our audience. Our audience is typically very data competent, and used to consuming the types of charts you wouldn’t ordinarily see in some sectors of the media.

But we also have a mission to reach an audience member we call the inquiring citizen. Would they be familiar with our radar charts?

We’d been looking at a tool called Shorthand that helps media organisations create mobile-friendly “scrollytelling” content that combines text, photos, video and interactives. So we decided to create a second piece of content, more focused on narrative and social history, using emotionally driven storytelling and simpler charts.

Let’s get married. OK – when? Shorthand experimentation

As we often do at ONS Digital, we turned to user testing to help us hone our approaches.

Some of our users felt the very data-pure approach we had tried in the interactive was not to their taste. It was something they were unfamiliar with. So we’ve incorporated more annotation on the charts, which highlights the key messages, and added navigation aids to move people through the experience.

There was a definite appreciation of the boldness of the design and the simplicity of the user interaction that we felt was worth testing out in the real world.

For our Shorthand experience, the use of images to create an emotional connection and accentuate the social changes over time was clearly appealing, but people wanted more control over the charts, which weren’t interactive, and wanted the narrative to be shorter, punchier and more digestible.

We never set up the user testing as pure A/B testing because we weren’t looking to validate or invalidate one approach over the other. What we found was a distinction between audiences who preferred a more narrative and emotional story versus those who appreciated a visualisation that accentuated the story in the data itself.

So that’s why we have 2 stories to present to you. One is an elegant, sophisticated piece of data visualisation that uses simple scrolling interactions to walk you through the data, allow personal exploration and help you find the story. The other is a rich, visual experience that combines archive photographs, simpler line graphs and more of a traditional narrative to engage you in social history through statistics.

We hope there is a huge amount to enjoy, explore, and learn from both pieces. And we want your feedback on them. This isn’t an A/B test, or Pepsi Challenge. We don’t want to know which you prefer, but rather what works and doesn’t work for you in each of the pieces.

What were the key messages you took away?
How engaged were you in the experiences?
How did scrolling work as a mechanism for exploring the data?
What kind of storytelling do you want from data?

Email us at with any considered views, or take our really quick survey: interactive data visualisation survey, storytelling survey.

Darren Waters & Rob Fry

Decoding Discovery

The ‘discovery’ phase of any digital undertaking is vital, as it lays the foundation of any successful project. Unfortunately, like many lessons from agile and the service design world, it seems a desire for a uniform, measurable approach in some organisations is creating obstacles to the technique being as impactful as it could be.

The GDS Service Manual does seem to contradict itself somewhat →

A short phase, in which you start researching the needs of
your service’s users, find out what you should be
measuring, and explore technological or
policy-related constraints.

It seems to me people get hung up on ‘short phase’.

Turn the page though (metaphorically) and you get this →

What to find out in discovery

In the discovery phase you need to understand and map out the user journey.

You should find out:
• who your users are
• your users’ needs and how you’re meeting them, or any needs you’re not meeting
• which services currently meet your users’ needs and whether they’re government services or private sector
• how you’d start developing a new service if your discovery finds there’s a user need for one
• the people you need on your team for the alpha phase
• what the user journey for someone using your proposed service might look like
• how you might build a technical solution given the constraints of your organisation’s legacy systems
• the policy that relates to your service and how it might prevent you from delivering a good service to your users

Which, depending on the problem in front of you, might be anything but short.

I prefer this from the Australian Digital Transformation Office on the topic →

There’s no one-size-fits-all duration for a Discovery,
however, many teams spend between four and six weeks.

I think that is more realistic — especially once an organisation becomes more digitally mature and starts to have a better understanding of users and their needs (a real understanding backed by research rather than anecdote) — but initially there is a lot to learn and often years of misconceptions to shake off. This is not a short phase.

Sometimes the ‘discovery’ has a wide reach. One of my favourite pieces of work over the last few years was this massive service mapping exercise from the Ministry of Justice that spun out of their ‘discovery’. To understand any one part of the journey they first had to understand the big picture.


Here is another interesting example, with a great team spending eight weeks looking at how the UK border affects trade. The findings from this ‘discovery’ were that they needed to do additional — more specific — discovery work.

On the other hand Sarah Prag suggests an experienced, mature team can “rattle through (discovery) in a week or two” in the right circumstances.

As our antipodean cousins say, “there’s no one-size-fits-all duration for a Discovery”.

I think rather than get hung up on timeframes we should think about the ‘minimal viable discovery’ to get started (because that is all this is — enough information to get started — you should never stop learning about your users and should be prepared to course correct as you learn more — otherwise we are just playing at being agile).

Personally I like to be able to confidently answer the questions in Sarah’s diamond (below) but your mileage may vary. Maybe you want answers to all the GDS questions above, or the DTO version or maybe even the Design Council ‘double diamond’ technique. As long as you learn what you need to learn to move to the next phase with confidence then it takes as long as it takes — whether that be two weeks or twelve. Don’t rush understanding your users — getting that right will underpin everything you do from that point on.


Learning first-hand why accessibility is important

This week, several members of the Digital Publishing team, including myself, went on the first of 2 scheduled visits to the Digital Accessibility Centre (DAC) in Neath.

The visit was incredibly educational. It drove home just how many things I take for granted that are a real challenge for users with a disability. I always thought I had a good understanding of accessibility principles, but seeing accessibility in action taught me there’s a lot still to learn.

Visual impairment

We spent most of the morning talking to Tara, who is registered blind. She browsed our website using NonVisual Desktop Access (NVDA), a free, open source screenreader program, similar to the commercially available and popular Job Access With Speech (JAWS).

The first thing that struck me was how involved a process browsing was for Tara. She would “tab” her way through a page using keyboard shortcuts that searched by header. If that wasn’t working for her, she would search by link. The screenreader would read out every instance it found, including the kind of page furniture (headers, footers and so on) that I would normally ignore.

I was particularly keen to get Tara’s perspective on our content, as the use of visual elements such as charts is an integral part of our digital product. Having never seen how a screenreader works before, I didn’t have a strong idea how data displayed in this way could be made accessible.

The most significant thing I learned from Tara was that, for her to be able to access the same information in a chart as a sighted user, she needs a fully accessible alternative. Alt text with a generic overview of the chart but no actual data simply isn’t good enough. As a consumer, she should be able to access the exact same information as anyone else. If she can’t, then she will leave the website and look elsewhere.

For her, the best option was a correctly formatted table. Spreadsheets were a bad fit in her experience, as she’d never been able to find a way of automatically reading columns. She also demonstrated the importance of correct formatting; if a table wasn’t tagged properly then each figure would be preceded by “Column 4, Row 6” rather than the correct names.
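As a concrete sketch of the difference correct tagging makes (our own illustration, not ONS production code), the helper below generates an HTML table whose header cells carry `scope` attributes – the markup that lets a screenreader associate each figure with its column and row names rather than announcing grid coordinates:

```python
# Illustrative sketch only: generate an HTML data table whose header cells
# are tagged with scope attributes, so a screenreader can read out header
# names ("Marriages, 2014") rather than positions ("Column 4, Row 6").

def accessible_table(column_headers, rows):
    """Build an HTML table with screenreader-friendly header tagging.

    rows is a list of (row_header, values) pairs.
    """
    header_cells = "".join(f'<th scope="col">{h}</th>' for h in column_headers)
    body_rows = []
    for row_header, values in rows:
        cells = "".join(f"<td>{v}</td>" for v in values)
        body_rows.append(f'<tr><th scope="row">{row_header}</th>{cells}</tr>')
    return (
        f"<table><thead><tr><td></td>{header_cells}</tr></thead>"
        f"<tbody>{''.join(body_rows)}</tbody></table>"
    )

print(accessible_table(["2014", "2015"], [("Marriages", [240, 245])]))
```

The numbers are invented; the point is the `scope="col"` and `scope="row"` tagging, without which assistive technology falls back to reading out positions.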

It is attention to detail that makes all the difference to visually impaired users. Another example is punctuation. I hadn’t fully considered how that would work for screenreaders. Tara told us that while most screenreading technology has default punctuation settings, many users turn it off. This means that a mathematical symbol like a plus or minus sign would be missed out entirely, so it is better to write out the word rather than the symbol in your content. Now, it just so happens that this is what our style guide advises anyway, but I now have a deeper understanding of why that particular style decision has been made.
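As a toy illustration of that style rule (a hypothetical helper, not our actual publishing pipeline), spelling out standalone symbols before content goes live might look like this:

```python
# Hypothetical sketch: replace standalone mathematical symbols with words,
# since many screenreader users turn punctuation announcement off and would
# otherwise never hear them.

SYMBOL_WORDS = {
    "+": "plus",
    "-": "minus",
    "%": "per cent",
}

def spell_out_symbols(text: str) -> str:
    """Swap standalone symbol tokens for their word equivalents."""
    return " ".join(SYMBOL_WORDS.get(token, token) for token in text.split())

print(spell_out_symbols("GDP grew by + 0.6 % this quarter"))
# → GDP grew by plus 0.6 per cent this quarter
```

A real pipeline would need to handle symbols attached to numbers (“+0.6%”) too; this only catches symbols that stand alone as words.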

Ultimately, NVDA is just one brand of screenreader. Tara stressed that there are several packages available and each works slightly differently, which highlights the need to test content on multiple programs.

Learning difficulties

We sat down with Jonathan, a user who had a learning difficulty. Clarity and consistency were really important to prevent him getting confused, particularly for navigation. A seemingly small thing like different pages on the same website having different colours (like our home page and our methodology landing page) was really off-putting for him.

Jonathan didn’t mind the length of some of our bulletins, but what was important was that the paragraphs weren’t too long, and there was plenty of white space around the text. He strongly felt that acronyms and abbreviations should always be explained, with even something really common like GDP being written out in full (“gross domestic product”) for the user.

In terms of charts, Jonathan had no problem with them as long as they were clearly presented. He found annotations useful in this regard. However, he did not get on well with stacked bar charts.

At the beginning of our conversation, Jonathan said he disliked moving images, like Flash or a carousel. However, he quite liked Visual.ONS’ recent Basket of Goods timeline, as the functionality allowed him to be in control of the motion and images.

Hard of hearing

The final tester we spent time with was William, who was hard of hearing. He found the website easy to navigate, as we don’t have anything he finds problematic like videos or audio content.

William placed a great deal of emphasis on the need to provide either an email address or mobile number as a “Contact us” option, as a landline number would be no use to him.

Prior to this conversation, I was completely unaware that not all deaf people use British Sign Language (BSL) or another equivalent. Some use body language and Makaton. William himself relied a lot on lip reading.

What we will do next

Our team came away with reams of notes, and our User Researcher John Lewis in particular was able to draw up an extensive list of areas which require further investigation.

For our next visit

We only had so much time and didn’t get to sit with all the testers on the day. There will be a second Digital Publishing visit to DAC at the beginning of September, where we will be observing the following accessibility testing:

  • voice activation
  • colour contrast for dyslexic users
  • keyboard only for users with mobility issues
  • screenreading for mobile devices

I’m always keen to hear different perspectives on accessibility, so if you have any insights you wish to share please drop me an email at

An Unexpectedly Interesting Friday

I had a very interesting Friday last week; and I honestly wasn’t expecting it.

Some of the eQ team visited the Digital Accessibility Centre (DAC) on Wednesday to observe testing of our live eQ product along with some prototypes we are currently working on. During their visit they each made notes and observations about how things were performing then pulled them together in a shared Google Sheet on Thursday. As an aside, this approach worked really well and we’ll be using it again next time.

At Friday morning’s stand-up the team were running through the board as usual and hit the card related to the DAC testing sitting in the Research Outcomes column; a brief nod towards the associated Google Sheet and we agreed to quickly review it immediately after the stand-up was complete (thinking it wouldn’t take long and not wanting the outcomes to linger).

What followed was a completely unplanned and unexpected full-day session discussing and going through the findings (we finished just before 4 o’clock). We generated a wealth of small actionable changes we could make for either an immediate improvement to the system or areas that needed new prototypes and additional research.

Although this was a full day session there was a real sense that this was time well spent. We left the room late in the afternoon and actually all felt slightly exhausted from the marathon stint but still buzzing. It felt good, it was a feeling that we’re doing something right and it’s going to make a difference.

I caught up on the inevitable inbox queue that accompanies being away from the desk all day and went to get a coffee. Upon returning I found the team huddled around one of the developers’ desks, listening to iPhone VoiceOver while trying out different approaches to one of the areas that hadn’t tested well. I looked around and realised that there was hardly anyone left in the office (it now being what I would consider late for a Friday!). Suffice to say, we were inspired.

To me this speaks volumes about the power of testing and research with real users, the way it energised and mobilised the team in a way that a list of requirements never could. I wasn’t present during the actual testing but the ‘effect’ is infectious, I truly found myself drawn in.

I will admit that we should have been doing the kind of accessibility testing (with real users) much earlier in the product lifecycle and this did mean we had accumulated a lot of potential debt. Going forwards we’re committed to testing little and often rather than large and infrequently. To support this we’re discussing with DAC how we can undertake quick remote testing sessions.

I expect that as the team becomes more familiar with what does and doesn’t work for accessibility in our service we will become more adept at putting together a better initial solution and spotting likely problem areas sooner. However, it is clear to me and the team, from this round of testing, there is no substitute for testing with real users or the power that comes with it.

Release note – sprint 11

I must write more blog posts. I must write more blog posts. I must write more blog posts

As our product manager is on paternity leave at the moment (congratulations Rob!), I am stepping in and putting together our sprint notes for what is behind the 11-shaped window.

Sprint 11

We’ve achieved a lot in the last two weeks, but we have more user testing to do so we haven’t released it to our live website quite yet.

Delete content functionality

In this sprint we focused on building delete functionality for our CMS. Until now the ability to remove content from the site has been limited to the development team, and required manual tasks to remove the content, make it available to be previewed in our publishing tool, and to actually publish the changes to our live website.

We’d hoped to also look at functionality to move content, which requires similar manual tasks to the deletion process, but summer holidays and annual leave mean we’ve had to push this back slightly.

At the start of the sprint we discussed the requirements and some of the issues we might face implementing them. We uncovered a lot we hadn’t thought about, so we decided to treat the work as a spike to make sure we fully understood the deletion process.

By the end of the sprint the delete functionality was working mostly end to end, so we’re now treating it as a ‘beta’ which we’ll put in front of our internal users to get feedback before we attempt to make it production ready.

PDF downloads for compendium

We’ve also been working on support for downloading a PDF of a full compendium, which is now going through final testing before it’s released. This builds on the ability to print the full compendium, and makes PDF download support more consistent across our site.

Personas get personal

Over in User Research corner we have been doing a range of work to check that our personas still hold. We did some work a little while back to produce some really helpful personas. They are never used to make direct choices, but are used to help us identify which type of user we want to test different functionality with. We have been reviewing the ones we have ahead of a big set of changes we will be making around the use of geography and a more complex set of tools for querying data. I will be nagging our User Research team to put together a post about this work soon, as it is a really interesting area for us.

Open is good

In addition to this we were part of an ‘open day’ here in the Newport Stats Palace to allow people to drop in and chat to the team about a whole range of vacancies we have (including the amazing product owner role Matt posted about the other day). It was really good fun talking to a wide range of people and I found myself talking very positively about pretty much everything at the ONS. Have I become an ONS’er…?


Dear Reader – I blogged it


After 3 months with very little blogging, I thought I would put together a collection of three posts in the coming days to talk about what we have done, what we have learnt and what we are going to do.

To recap, I am Andy, the Digital Service Manager for the ONS website. Being a digital service manager in the civil service means taking total ownership of delivering something. In this case, the way the ONS presents and delivers its information online.

I was lucky enough to join the team just after a major re-launch of the underlying technology and a complete rework of the user experience. In true standing-on-the-shoulders-of-giants style, this was an ideal time to join, as so much great work had been undertaken to give an essentially blank canvas for me to work with. However, as with all of these things, it has not been 100% plain sailing. Some of the inevitable technical debt incurred from launching a website needed to be repaid, and so the team have been faced with continuing to deliver functionality to the audience whilst ensuring that we kept the platform as stable as possible.

I am broadly pleased with what we have achieved, whilst always wanting more. For me, some of the key things have been what functionality has been delivered, as well as how. The technical development of the ONS website was undertaken by a mix of internal staff and contractors. As we launched the site, the contract roles finished up and the internal staff took full responsibility for the site. Working out how to transfer the institutional knowledge of a 2-year web project in a few weeks is a really fundamental challenge. It may well be the topic of another blog another time, but it doesn’t matter how many diagrams you have or how well your code is structured and commented; it is hard to transfer the knowledge of why every choice was made and the context it was made in. Retrospectively, this transfer of knowledge took more time than I had envisaged, but it is something I am much more confident has happened now (though with Matt leaving at the end of the summer, we really will find out soon).

Alongside this, we chose to restructure the teams around my arrival as well. This means the technical, editorial, design and delivery functions of the service have all come together for the first time. A new org structure, new staff and iterating the way we work is a lot of change in a short period of time, but I feel that it has benefited from us adopting an approach of just getting on with it, rather than protracted change over a longer period.

I have found a personal challenge in trying to define how much of my role should be focused on helping the team, facing the wider business and being the external ‘face’ of the project. I am not sure it is a percentage that can ever be defined, but I am pleased to have spoken at a number of external conferences and meet-ups, presented on the themes of digital change and agile delivery at internal events, and hopefully given the team the cover they need to deliver. They certainly have kept on doing that. The recent updates to the way we process time series data, engaging with the issues of the EU referendum, iterating the way we deliver the statistical bulletin and continually improving the techniques we use to gain feedback from our users certainly showcase this.

In my next blog post, I will be talking about the way we tackled the discovery and alpha phases of the next Big Thing we are working on. Spoiler: we learnt a lot.

In a relationship – it’s complicated

A strategy for choosing the right chart.

So far we’ve covered a lot of charting tips about using your axes properly [axes on bars, axes on lines, and scaling appropriately]… but let’s take a step back – are you using the right chart? If you looked at the latest versions of Excel, Tableau, SAS, R or any other charting package you’d be excused for feeling a little overwhelmed at the variety of charting options on offer. In reality you’ll probably only want to use a small selection of these, but how do you choose?


First of all it helps to introduce a bit of language. If I asked you to name all the statistical relationships you might have in your data you may well look at me blankly, or give me a Facebook style answer – “it’s complicated”. But have a think about the charts you use or create on a regular basis – if you had to put it into words, what are they designed to convey?




[Go on, have a think before I give you the answers. For example, why would you use a scatterplot?]




OK – well here’s a list of all the common statistical relationships we can think of…
Magnitude – How big is a value, and how does it compare to another?
Change over time – How do values change over time? What are the trends?
Part-to-whole – How is an overall or total value broken down?
Distribution – How is a set of values spread?
Difference – How do values differ from, say, an average or from each other?
Correlation – How are values related?
Rank – Is there a way to order the data?
Spatial – Are there geographical patterns in your data?

So step 1 is to identify the relationships that are in your data, and decide, usually through analysis, which relationships you need to portray to the reader. You now have a language to justify the chart choices you make.
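For what it’s worth, that step 1 thinking can be sketched as a simple lookup from relationship to candidate chart types. The pairings below are our illustrative suggestions rather than a definitive rulebook:

```python
# Rough aide-memoire mapping each statistical relationship to common chart
# choices. Illustrative suggestions only; context always trumps the lookup.
CHART_SUGGESTIONS = {
    "magnitude": ["bar chart"],
    "change over time": ["line chart"],
    "part-to-whole": ["stacked bar chart"],
    "distribution": ["histogram", "box plot"],
    "difference": ["diverging bar chart", "lollipop chart"],
    "correlation": ["scatterplot"],
    "rank": ["ordered bar chart", "slope chart"],
    "spatial": ["choropleth map"],
}

def suggest_charts(relationships):
    """Return candidate chart types for the relationships you want to show."""
    suggestions = []
    for relationship in relationships:
        suggestions.extend(CHART_SUGGESTIONS.get(relationship, []))
    return suggestions

print(suggest_charts(["magnitude", "rank"]))
# → ['bar chart', 'ordered bar chart', 'slope chart']
```

Name the relationships first, then reach for the chart – not the other way round.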

Sometimes it’s really simple to think about what you’ll need to illustrate your data and your message – it might be a simple line chart to illustrate how a series has changed over time or a bar chart to show how a range of values compare. However, there are other times when we need to think about the variety of relationships that might be present, and consider a different approach.

Let’s look at an example. We were working on a project before the EU referendum to give a picture of our trade with the world and Europe. One section of this analysis was looking at our trade relationship with countries, in particular our top 10 trading partners. This was the first chart we started with. Here you can see that magnitude is the primary relationship we are showing – how do the values compare to each other? You might also have noticed that we’ve ranked the data according to the value of exports to help the reader make that comparison more easily. Do you notice anything you would improve?

Top 10 trading partners in goods and services, current prices, 2014

Even though we ranked by the value of exports to help comparison, it’s not so easy to compare the value of imports. And if I asked you to tell me which trade partnerships we benefited from the most or the least that would be even harder. The balance of trade would be 1 example of a difference relationship.

So one option we tried was to emphasise this difference, using a lollipop chart.

Top 10 trading partners in goods and services, current prices, 2014

What do you think? The aim was to prioritise the difference relationship, that is, the difference between exports and imports. We ranked from positive to negative. We still wanted to maintain the ability to compare magnitude, which you can, but it’s not easy. It feels obscured, and making this comparison was important in the context of the article. Also, comparing the balance was somewhat difficult because of the colour switch. If it was only the balance we wanted to compare we could have used a simple bar chart.
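The data preparation behind that positive-to-negative ranking is simple enough to sketch. The figures below are invented for illustration (not real ONS trade data):

```python
# Sketch of the 'difference' view: compute each partner's trade balance and
# rank from largest surplus to largest deficit, as described above.
# All figures are made up for illustration.

trade = {  # partner: (exports, imports), arbitrary units
    "Partner A": (88, 60),
    "Partner B": (46, 70),
    "Partner C": (38, 46),
    "Partner D": (42, 37),
}

balances = {partner: exports - imports
            for partner, (exports, imports) in trade.items()}

# Order from positive (surplus) to negative (deficit).
ranked = sorted(balances.items(), key=lambda item: item[1], reverse=True)

for partner, balance in ranked:
    print(f"{partner}: {balance:+d}")
```

Deriving the balance as its own column is what lets you rank by it – but as the lollipop chart showed, emphasising one relationship can obscure another.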

So this didn’t work, but we shouldn’t be afraid to experiment. Often we look at a number of different options before coming to an agreement.

If you read the article you’ll see from the final option that we opted for the slope chart.

Top 10 trading partners in goods and services, current prices, 2014

So here you can see there are a few statistical relationships present. On either side we can compare the magnitude of our exports and imports. You can also see that we’ve ranked either side, making it easy to see who we import from and export to the most. The final relationship is the balance of trade for each country – the angle and steepness of the line illustrating how different exports and imports are.

While this isn’t a chart option everyone will be familiar with, it should only take a few moments to interpret. It’s worth introducing our users to new chart forms if, like in this case, it illustrates the relationships in our data better.

So to recap … choosing the right chart type is a 2-step process.

Step 1 – ask yourself what statistical relationships are in your data and, more importantly, what statistical relationships do you want to portray?

Step 2 – make a chart choice which emphasises this relationship. Experiment with different options if necessary.

One note of caution, though: sometimes you might have lots of statistical relationships to convey – it might often be better to split them out into several charts to share the messages more clearly.

Have a look at what relationships you have in the reports you’ve written or read recently and ask yourself, does it match the message you want to tell? Could you make it better? Are you expecting your reader to see something that is hidden or obscured in your chart? If you’re really enthusiastic, have a go at a Makeover Monday.