How do we know if a project is likely to be good or bad? I’ve just been reading descriptions of work from organisations looking for digital teams. Many of them still define some change they want to make in terms of a predetermined technology solution.
I recently spent time with a team working on such a project. Their stakeholders described it as ‘we need a portal for applicants to submit their bank statements digitally’. It was a solution to the significant time, cost and effort of teams still processing bundles of paper. Success measures were to be around task completion rates, uploads and errors. But nothing about these metrics described whether this particular solution would be good or effective, whether it related to what users needed or wanted to do, or whether it linked to what the organisation was ultimately trying to achieve.
Predetermined solutions can be genuinely well thought through, and simply split into constituent parts so that a team can get on and deliver something. More often though, I’ve seen them come from a desire to be certain about ‘the right solution’ so that something gets done, in a trackable way. That’s when delivering “A Thing” becomes the main definition of success, regardless of whether it’s needed, useful, likely to be used or related to any underlying goal.
The best teams I’ve seen help their organisations be really clear about the actual change they’re trying to make. And the best organisations come to understand there’s more than one way to achieve an outcome – and that it takes ongoing effort to do this.
Untangling the purpose
When descriptions of work are predetermined, we can break them down to get greater clarity. In our ‘we need a portal’ example, to get to what would make any solution good or effective, we asked ourselves:
- What are users actually trying to do here and why?
- Who is the ‘we’ in the organisation that wants or uses this?
- Why do this now? What happens if we don’t do anything?
- Why do we think this solution? What is meant by it? How else could we tackle or support this?
- Why do users need to do the thing we’re asking them to do? What’s the simplest, most direct way of this happening?
- What does the data from statements actually tell us? Why do we need to know? How else could we know?
- What does this thing being ‘digital’ enable, that isn’t the case now?
Once the whole team were clear that the work was in fact to reduce the time and cost of verifying whether people were financially eligible for a service, they were able to communicate their purpose clearly, explore the feasibility of solutions beyond portals, and talk more accurately about success.
Establishing the level of work
Not every team is in a position to design an entire service end to end. On very large services I’ve often found tens or hundreds of teams, each working on a different part.
At this scale there are different levels of work. Sometimes our focus is rightly on component parts. But it’s when the relationship between these levels has been lost, or was unclear from the start, that we see the more obscure briefs that can’t be easily untangled.
To shift perspective from the inwardly focused ‘is the thing done, is it on time?’ to what actually constitutes good performance in the real world for people, you can set out the levels of work in a wider service. I’ve found it useful to make a distinction between:
- a whole end-to-end service (eg. insure a house, visit the UK, get a passport)
- supporting transactional services (eg. applying for something, deciding if someone qualifies, checking entitlements)
- capabilities (eg. appointment setting or capacity in fulfilling orders)
- activities (eg. checking eligibility, finding out what to do, getting help)
- technology (eg. payment systems, identity verification, records storage)
- data (eg. what’s used, derived or created by a service, its storage and availability)
When you set work in this wider context, relating it to an external service or to the lifecycle of a product’s use, it gives you a way to understand what a project needs to make happen. This has helped me have the difficult conversations when work has been set at the wrong level.
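If it helps to see these relationships written down, here’s a minimal sketch in Python. The level names come from the list above; everything else, including the class names and the example entries, is invented purely for illustration, not taken from the project.

```python
from dataclasses import dataclass
from enum import Enum

# The levels come straight from the list above.
class Level(Enum):
    END_TO_END_SERVICE = "whole end-to-end service"
    TRANSACTIONAL_SERVICE = "supporting transactional service"
    CAPABILITY = "capability"
    ACTIVITY = "activity"
    TECHNOLOGY = "technology"
    DATA = "data"

@dataclass
class PieceOfWork:
    name: str
    level: Level
    part_of: "PieceOfWork | None" = None  # the wider context this work sits in

    def context(self) -> list[str]:
        """Walk up the levels to show where this work sits in the whole service."""
        chain, node = [], self
        while node is not None:
            chain.append(f"{node.level.value}: {node.name}")
            node = node.part_of
        return chain

# A hypothetical chain based on the portal brief:
service = PieceOfWork("help people get financial support", Level.END_TO_END_SERVICE)
deciding = PieceOfWork("decide if someone qualifies", Level.TRANSACTIONAL_SERVICE, service)
checking = PieceOfWork("checking eligibility", Level.ACTIVITY, deciding)
portal = PieceOfWork("bank statement upload portal", Level.TECHNOLOGY, checking)

print("\n".join(portal.context()))
# The portal shows up as one possible piece of technology inside an activity,
# inside a transactional service, inside a whole service: not the goal itself.
```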
In the build-me-a-portal example, this helped the delivery team come at the problem from the outside in. They worked through what would be the simplest, most direct way to know if someone is eligible. The options they considered included verifying information directly with banks and other sources, better content design, clarity on what constitutes evidence, changes to standards of evidence, using other providers to validate data, re-using data, or simply identifying better points in a service to ask for things.
Identifying real world performance
How can a team know whether their chosen approach is actually good, bad or even neutral? Having identified the purpose and level of the work, you can then do these things:
- Find the interested parties in your context – users, service providers
- Establish what you would see happen if it worked or didn’t work for them
- Set indicators for this, keeping the connection to overall performance
Rather than use the suggested metrics around task completion and uploads, the portal team found other indicators of time, cost, effort and confidence, for both users and service providers. These helped them evaluate different ways of improving how eligibility was verified. And as it turned out, it took a set of different improvements to get a good outcome – and no portal was built.
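As a purely illustrative sketch, here’s one way to record indicators so they keep the link between each interested party, what you would expect to see, and the signal you actually track. The parties, descriptions and measures below are invented; they aren’t the team’s real indicators.

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    party: str         # who this matters to: a user or a service provider
    if_it_works: str   # what you would see happen if the approach works for them
    if_it_doesnt: str  # what you would see if it doesn't
    measure: str       # the signal you will actually track

# Invented examples in the spirit of indicators of time, cost, effort
# and confidence; not the project's real measures.
indicators = [
    Indicator(
        party="applicant",
        if_it_works="eligibility confirmed in days, with little effort",
        if_it_doesnt="repeated requests for the same evidence",
        measure="elapsed time from application to eligibility decision",
    ),
    Indicator(
        party="caseworker",
        if_it_works="fewer cases needing manual checks",
        if_it_doesnt="a growing backlog of statements to verify",
        measure="proportion of cases verified without manual handling",
    ),
]

for indicator in indicators:
    print(f"{indicator.party}: track '{indicator.measure}'")
```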
Of course, making genuinely good products and services isn’t guaranteed just because you write a good brief and identify sensible metrics. But helping everyone around you be clear about the intention behind a project, the context, and performance in the real world gives a much more solid basis to discuss, prototype and deliver approaches that have a better chance of being good, rather than just being.