Agreeing prioritisation of digital programmes and projects

In 2018, I pivoted from agency life into a corporate role. It was a huge decision for me at the time, and one I found difficult having spent the whole of my career to that point on the agency side. Tough as it was, it proved a solid move to develop myself and gain exposure to the internal workings of larger organisations. Over the years that followed, leading analysis, user experience, development, and experimentation teams, we innovated and optimised some fantastic products and experiences, which of course married with my experience of collaborating with, and providing similar services to, the hundreds of clients I have worked with throughout my career.

There was one significant difference, however. The day job was the same: carrying out analysis, developing strategies and concepts from the insight, and fundamentally delivering digital products. What drastically changed was the intake of those products.

This difference is based on my own observations, and your experience may be more utopian. I should add, though, that I have spoken to many digital product teams over the last six months, from varying sectors, and they have all asked me the same question:

How do you prioritise multiple stakeholders who all believe their product or feature is the most important thing to the business?

When I spoke to the first few product teams, my response was one of delight at having found common ground and specific areas of product management to talk through. But as I was lucky enough to meet and talk with more and more teams, it turned out this is a common problem for most organisations, across the private and third sectors and even government.

The model I adopt comes from a common scoring framework used in the world of Conversion Rate Optimisation (CRO). When you work in CRO, hypotheses and ideas tend to overflow, especially when you have the luxury of other teams and subject matter experts feeding into the pool, and because of this it becomes difficult to prioritise or order the delivery of experiments.

To answer this problem, many teams adopt the PIE framework, scoring each idea from 1 to 10 against its Potential to succeed, the Impact of the result, and its Ease of implementation. This is a great method of putting a list of things in order, provided they are all scored with roughly the same level of judgement. It functions well for CRO teams because they are usually either led by a single manager or, in larger organisations, explicitly focused on one particular area of a product or journey, which keeps the scoring fair: additional factors don't tend to come into play.

Example PIE framework formula

\[ (\text{Potential} \times \text{Impact} \times \text{Ease}) = \text{PIE Score} \]
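As a minimal sketch, the PIE scoring above can be expressed as a small function and used to order a backlog of experiment ideas. The experiment names and ratings below are purely illustrative, not from any real backlog:

```python
def pie_score(potential: int, impact: int, ease: int) -> int:
    """Multiply the three 1-10 ratings to produce the PIE score."""
    for rating in (potential, impact, ease):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be between 1 and 10")
    return potential * impact * ease

# Hypothetical backlog: (idea, potential, impact, ease)
backlog = [
    ("Simplify checkout form", 8, 7, 9),
    ("Rebuild search", 9, 9, 2),
    ("New hero banner copy", 4, 3, 10),
]

# Highest PIE score first: the most promising, impactful, easiest work
ranked = sorted(backlog, key=lambda item: pie_score(*item[1:]), reverse=True)
for name, p, i, e in ranked:
    print(f"{name}: {pie_score(p, i, e)}")
```

Note how "Rebuild search" scores highly on Potential and Impact but its low Ease drops it below the simpler checkout change, which is exactly the behaviour a CRO team wants.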

Because of its simplicity, I began to adopt this framework for managing product backlogs, taking on its principles and scoring. At first it functioned well: I could give the development team clear direction and a strong focus, ensuring we delivered the most important and easiest-to-implement projects first. This method of scoring also sent a great message to the wider business, where the business strategy and, in turn, the project objectives ultimately originated. Now I had common ground for our intake: a reason why a specific innovation or feature improvement had not yet reached development, or, on the other hand, why it had been fast-tracked and delivered in record time.

However, even though this was logically sound in principle, where the PIE framework failed was alignment with the wider business strategy. Potential was scored by me or a member of my team: great for consistency, but subjective when the question of why was asked. Impact was consistently scored, but again subjective when a strategic leader asked why. Ease was firmly within our remit to score; as the subject matter experts in digital delivery, we rarely received challenges there.

This got me thinking about adapting the priority framework to align more closely with the business, its strategy, and its values: what impact does the project being scored have on the customer and on the commercials, would it mitigate risk, and how well does it fit our strategies?

A strategically aligned product prioritisation framework 

I evolved the scoring to reflect alignment with four influencing factors: customer, commercial, strategic fit, and risk mitigation. This exercise was extremely fulfilling, as it reassured the team that our assumptions under the earlier PIE framework had been very close to the results we got when scoring on the revised, more granular scale. Perhaps more fundamentally, the scoring and prioritisation model now used language similar to that found in business cases and strategic plans.

Example product priority framework formula

\[ (\text{Customer} \times \text{Commercial} \times \text{Strategic Fit} \times \text{Risk Mitigator}) = \text{Priority Score} \]
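Following the formula above, the strategically aligned score is a straight product of the four factors. The sketch below assumes each factor is rated 1 to 10, as with PIE; the example ratings are hypothetical:

```python
def priority_score(customer: int, commercial: int,
                   strategic_fit: int, risk_mitigator: int) -> int:
    """Multiply the four 1-10 business-alignment ratings together."""
    return customer * commercial * strategic_fit * risk_mitigator

# Illustrative project: strong customer and strategic alignment,
# modest commercial upside, small risk-mitigation benefit.
score = priority_score(customer=9, commercial=5, strategic_fit=8, risk_mitigator=3)
print(score)  # 1080
```

Because every factor maps to language the business already uses, each individual rating can be defended in a stakeholder conversation rather than sitting behind a single opaque number.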

The revised formula, combined with the common ground on language, felt revolutionary when holding conversations with stakeholders in the business. I reached a point where my scoring could be drafted before sessions with business leads, improving the efficiency of meetings and opening the floor to challenges and questions on specific points. This helped reduce conflict and the passionate discussions that focused on specific areas of the business, driving a consensus on what we were delivering as a digital team. But it was missing something fundamental that the PIE framework gave us in spades.

My focus on capability and resources had disappeared! I had lost a fundamental ingredient of PIE: Ease, or how complicated the project would be in terms of technical capability and time. At this stage, I had been sharing the prioritisation framework throughout the business, asking key individuals for their professional opinion on the scoring mechanism and ordering. This resulted in similar models being adopted by those teams. One was the technology team, whose delivery leads took it to the next level by applying eight child vertical splits between customer and commercial, allowing them to focus on specific strategies and home in on which projects were delivering value against them. Hats off to the team: their additional thinking added complexity, but it aided their reporting back to the business. They also introduced two additional columns covering resource and complexity, giving them insight into how much impact or disruption a project would cause the technology team, and it was these two that I would use to recreate the Ease from our earlier PIE framework.

Introducing resource and complexity into the mix, we quickly found that this change to the formula produced more favourable results for larger, more complex, and more time-consuming projects, because Resource and Complexity are scored in the opposite direction to Ease. PIE's Ease score rises the simpler a project is to deliver, whereas our new scores rose with difficulty: the more complex the project, the higher its score, and the more resource-hungry it was on top of that, the larger the combined score grew, favouring technical complexity.

As a product team, your focus should be on delivering the benefits with the most value using the time and resources available, whilst delivering against the strategic objectives you have been set. With the revised framework we were no longer achieving this simple ask, and it needed to evolve once more.

After a little thinking, it became apparent to me that the scores for business fit couldn't simply be combined with those for complexity, because they were trying to present different pieces of data. The priority, or impact for the business, was delivered through the earlier formula; a separate subscore of Complexity multiplied by Resource gave us a countermeasure and an indicator of time and cost. But it was just that: separate from the priority scoring, and another layer of complexity for us to manage and to explain to our wider stakeholders.

Example priority scoring to deliver benefit and value in the shortest time   

\[ \frac{(\text{Customer} \times \text{Commercial} \times \text{Strategic Fit} \times \text{Risk Mitigator})}{(\text{Complexity} \times \text{Resource})} = \text{Priority Score} \]
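Dividing benefit by delivery cost, as the formula above does, corrects the inverted incentive: higher Complexity and Resource now push a project down the order rather than up. A minimal sketch, again assuming 1-to-10 ratings and hypothetical projects:

```python
def quick_win_score(customer: int, commercial: int, strategic_fit: int,
                    risk_mitigator: int, complexity: int, resource: int) -> float:
    """Business-fit factors multiplied together, divided by delivery cost."""
    benefit = customer * commercial * strategic_fit * risk_mitigator
    cost = complexity * resource
    return benefit / cost

# A small, well-aligned project now outranks a large, complex one.
small = quick_win_score(7, 6, 8, 4, complexity=2, resource=2)   # 1344 / 4 = 336.0
large = quick_win_score(9, 9, 9, 9, complexity=9, resource=8)   # 6561 / 72 = 91.125
print(small > large)  # True
```

The large project still carries near-maximum business-fit ratings, yet its delivery cost drags it below the modest project that can ship quickly, which is precisely the "quick win" behaviour being sought.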

Thinking back to the core objective of a product team, to deliver benefits and value, I evolved the framework once more into my current prioritisation framework, which I call the "Quick Win Score".

Driving the delivery of benefit and value

The "Quick Win Score" gave the team exactly what they needed. It provided visibility of the projects that would drive the most benefit and value while gauging the impact on the team, gave product owners a logical support structure for ordering product backlogs, helped reframe how experiments align to the wider business strategy, and provided the foundations for taking stakeholders through the priority and delivery of digital projects.

Photo by George Pagan III on Unsplash
