The Service Standard and eQ Alpha – Outcomes – Part 4

This is the fourth and final part of the eQ Alpha outcomes. In Part 1, I wrote about our workflow, and some of the issues we faced getting design and development balanced. In Part 2, Ben reflected on what we’ve learnt about our users and techniques for user research. In Part 3, I provided a deep dive into some of the tech and architectural aspects of the Alpha.

In this concluding part I want to share some details about the GDS Service Standard and our Alpha, including our Service Assessment.

A Standard or an Assessment?

Before I dive in, there is something I want to clarify: our goal wasn’t to pass a Service Assessment. What do I mean by that? Passing the assessment should be a by-product of working to the Service Standard, not the other way around; if we’re working to the standard, then the assessment should be straightforward. I’m not suggesting that the assessment is a walk in the park or that meeting the standard is trivial, but it is easy to focus on the assessment itself and lose sight of the point behind it.

A Service or a Product?

Ok, now that’s cleared up, let’s wind back the clock to the start of eQ. During the kick-off for the Discovery back in Sept ’15, the team signed up to some shared values and principles; one of those principles was to be ‘digital by default’. For us, this meant working to the principles and spirit of the standard from the start.

This was fine except that, as the name suggests, the Service Standard is designed for Services, and as I shared in our Discovery Outcomes we are producing a Product. To expand on that: our product will be used by teams in the ONS as one part of the wider end-to-end Service for online data collection. There are other products involved in the Service (such as the customer portal) which we aren’t responsible for but do need to integrate with. In addition, our product will be used for many different surveys within the office; we’re giving teams the tools to design their surveys and test them with that survey’s specific users. To be clear, though, the eQ team aren’t responsible for the whole Service, and at the risk of jumping ahead, GDS agreed with us on this too.

We didn’t use this as an excuse to ignore the standard; quite the opposite. The team wanted to follow the standard and apply it as much as possible, and for the large part this was the case. There were some points that were difficult to satisfy directly as a product, such as point 14 on assisted digital and point 16 on KPIs, which really do relate to a Service. However, we didn’t want to excuse ourselves from the spirit of these points, so on KPIs, for example, we proposed some alternatives we could capture as a product.

Approach

Some of the points in the standard require clear decisions that can be stated simply and easily, such as point 8 on making source code open and reusable (we agreed to publish to GitHub and use an MIT licence). Others are much more involved, such as point 1 on understanding user needs, or point 4 on agile, user-centred, iterative methods; these needed to be part of the fundamental principles and ways of working for the team.

Having some of the team already familiar with the standard, or with the principles behind particular points (e.g. from experience of agile/lean/devops and research-based design), made this much easier: they were already thinking in that space and could share their experience and help embed the approach into the team.

So aside from drawing on experience within the team and having big A2 print-outs of the Service Standard stuck on the walls, what else did we do to align with the standard?

We attempted to shape our Story workflow to reflect the behaviours we wanted (such as having each Story move through research, design, usability testing and so on). In this area we were less successful, but we’re addressing it in Beta.

Early in the Alpha we also created a page for each point in our team documentation system (Confluence) and judged ourselves with a simple red, yellow, green rating, reflecting how we felt we were doing against each point at various times throughout the Alpha and where we needed to focus more effort. This later became useful for the assessment itself as well.

[Image: eQ team Confluence Service Standard pages – a screenshot of the GDS Service Standard summary in our Confluence documentation system.]

We reviewed the whole set of 18 points a number of times throughout the Alpha, and as related documents or findings came up, such as user research outcomes, we linked them into the relevant point’s page and added some brief notes. This proved to be a useful approach, collating information incrementally as and when it happened. That said, we never set out to ‘collect evidence’ to pass the assessment; these were artefacts that existed to support our product. I never got the impression from GDS that they wanted to see lots of documented evidence either; quite the opposite, they preferred to see the product and discuss it and the rationale behind decisions.

Having some notes and pointers really is a big help when discussing how the team are approaching the Service Standard and highlighting what has been going on with respect to each point. This helps the team check it is staying aligned with the standard, supports discussions with any interested parties and, naturally, supports the assessment as well.

Mock Assessment

Around halfway through the Alpha we undertook a mock assessment with GDS-trained assessors (a mix of ONS internal and IPO staff). This was carried out as if it were a real assessment and took around 3.5 hours to complete. We demonstrated the mid-Alpha product that had been developed so far and then each of the points was assessed. The lead assessor collated the outcomes and fed them back to us, which was great: it revealed some areas we needed to improve on and also highlighted some of the challenges of being a product assessed as a Service.

[Image: eQ Alpha mock assessment – coffee needed after 3 hours.]

Discussing this mock with the team, we agreed it was a great way to, firstly, understand what an assessment actually involved (it was new to some of us) and, secondly, identify areas we needed to improve on in the Alpha, with enough time left to actually make the changes. This is definitely something I’d recommend other teams do.

Assessment Warm-up

Prior to the assessment we provided our GDS assessors with access to the latest running eQ release and a brief overview of the project. We also held a couple of video calls with some of them, which was a big help when discussing the product/service difference and clarifying what this meant for the assessment and outcomes.

A few days before the actual assessment we performed a dry run (2.5 hours) with just our team. We ran through the items we wanted to cover in the product demo and stepped through each of the Service Standard points, asking ourselves the questions in the assessment documents and discussing our responses. This was a great way to check that the notes we’d be taking with us captured the elements we wanted to convey. The team agreed this really was worthwhile: it refreshed everyone’s memory and weeded out some parts we needed to clarify and be more specific about. I recommend doing this prior to an assessment and we’ll be doing the same before our next one.

The Assessment

Onto the assessment itself, which was held at GDS’ office (Aviation House) in London. The assessment room was pretty full: as well as the four eQ team members (Product Owner, Delivery Manager, User Researcher, Technical Lead) and the GDS assessors themselves, we had quite a lot of observers taking part, one from ONS and three or four from GDS. It was cosy. The high number of observers was partly because we were undertaking an assessment as a product and there isn’t a defined path for this yet within GDS. I’m sure this is something that is being worked on, and hopefully it was valuable for the observers to see our assessment first hand.

The assessment itself lasted around 4.5 hours with a few short breaks. We started with a live demo of the eQ product, which prompted some discussion, and then moved into the assessment of each point in the standard. A significant amount of time was quite rightly focused on user needs and research, as this is a key factor in designing and creating a high-quality service/product. The assessment was fairly conversational, with discussions naturally crossing over a number of the points in parallel. This meant that whilst the first handful of points seemed to take a long time, much less time was needed on some of the later ones as we’d essentially already covered them.

I won’t go into the details of the individual points or discussions, other than to say that each of the team had the Confluence pages on screen so we could quickly jump to the relevant information the assessors were looking for, and this proved to be really useful. We weren’t at any point asked to produce detailed evidence or documents, but we were expected to be able to talk in detail about our research, processes, tools, architecture, designs and so on. The level of detail meant that without some notes and links to relevant information it would have been difficult to provide useful answers easily. To be clear, though, those notes were purely for our team’s benefit; GDS didn’t ask for or get a copy of them, and the assessment was all about the discussion.

The assessment of each point didn’t come across as a binary pass/fail, and we didn’t try and pretend we met everything in the standard perfectly. There were some areas where we knew we needed to make changes going into Beta (and had experienced problems in Alpha) and others where we wanted advice and support from GDS, especially around the differences between a product and service and how we could best meet the standard even if this meant doing things in a different way. We felt it was important to either explain how we would address any shortcomings going forwards (if we had a genuine plan) or be open and transparent and admit we needed help and direction. In this respect I felt the assessment team were very supportive and understanding.

Whilst the assessment itself felt quite relaxed, it was nonetheless exhausting to discuss the project intensely for over 4 hours; leg stretching and fresh air were definitely needed by the end! If we hadn’t prepared our notes beforehand and undertaken the dry run, I feel it would have been more stressful, potentially taken longer and resulted in a less useful assessment outcome.

Feedback

We agreed with GDS prior to our assessment that we wouldn’t be given the traditional pass/not pass outcome, due to our position as a product and it not being possible to directly apply every point; instead they would provide us with feedback and recommendations.

We received the feedback a few days after our assessment. It contained a summary of our discussions during the assessment and 11 recommendations that we’re taking forward into Beta (some of the resulting changes are discussed in Parts 1 to 3 of these outcome posts). The headline feedback comes right at the start and states:

“At this early stage the assessment panel are assured the team is on track to deliver the tool that will meet the Digital Service Standard.”

This is very reassuring, as it means GDS agree that we’re working to the principle of being digital by default that we set when we kicked the project off. There are things we know we need to address, but fundamentally we’re on the right track. This doesn’t mean we can forget the standard now; we’ll be re-visiting it for Beta and making sure we stick to our principles!

Beta

This is the last post on our Alpha; going forwards it will be all about our Beta, which is now underway! I hope you’ve enjoyed following our Alpha here as much as I’ve enjoyed being involved in it.

See you in the Beta…
