Building prioritisation frameworks (RICE and WSJF) using Expressions

Effective prioritisation is at the heart of successful portfolio management. This article walks through how to construct two widely used prioritisation frameworks — RICE (Reach, Impact, Confidence, Effort) and WSJF (Weighted Shortest Job First) — using Expressions within Fluid. It outlines the core elements common to any scoring model, recommended property types, safe formula patterns (including how to handle blanks and prevent division-by-zero errors), and practical examples of how to calculate both raw scores and categorical bands for RICE and WSJF. Alongside implementation guidance, it also addresses governance considerations that help ensure consistency and meaningful comparisons across teams.


Setting up Scoring Models

Before looking at specific frameworks such as RICE or WSJF, it’s useful to understand the common building blocks and expression patterns that apply to any prioritisation model built in Fluid. The sections below cover the core concepts you’ll reuse regardless of the framework you choose.


1. What are the core building blocks of any prioritisation framework?

A prioritisation framework in Fluid typically includes:

  • Input properties
    The scoring factors your team enters (for example Reach, Impact, Effort, Confidence).

  • A calculated score
    A numeric output used for sorting and ranking items.

  • A calculated band or level
    A derived label such as Low / Medium / High, used for quick scanning, reporting, and governance rules.

These elements work together to provide both a precise ranking mechanism and an easily interpretable view for stakeholders.

2. Which property types should I use?

Use these patterns consistently when defining properties for your scoring model:

  • Number
    Best for free numeric entry (for example Reach, Effort, Job Size, Duration).

  • Valued Option
    Best for controlled scoring scales where each option has a numeric weight
    (for example Impact, Confidence, Business Value, Time Criticality).

  • Option
    Best for labels or bands (for example “Low”, “Medium”, “High”).

Recommendation: Make the main score a Number so you can sort, filter, and report on it reliably.

3. How do I handle blank values safely?

Blank values are common, especially during early intake or when assessments are completed incrementally. To avoid errors or unexpected results, use COALESCE() to default blanks to a safe number.

Patterns you can reuse:

COALESCE([_AnyNumberField], 0)
COALESCE([_AnyValuedOption-Value], 0)

This ensures your expressions continue to evaluate cleanly even when some inputs have not yet been provided.
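
For example, using illustrative property names in a product expression:

COALESCE([Reach], 0) * COALESCE([_Impact-Value], 0)

If Reach is blank, the first term defaults to 0 and the whole product evaluates to 0 rather than failing. Defaulting blanks to 0 also means incomplete records naturally sit at the bottom of any ranking.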

4. How do I avoid division-by-zero in score calculations?

If your scoring model divides one value by another (for example value ÷ effort), you should always guard the denominator with an IF() check.

Reusable pattern:

IF(
  COALESCE([_Denominator], 0) = 0,
  0,
  [_Numerator] / [_Denominator]
)

This prevents calculation errors and ensures scores remain predictable and comparable across items.


RICE Framework

This section shows how to implement the RICE (Reach, Impact, Confidence, Effort) prioritisation framework using custom properties and Expressions. It focuses on the practical setup — the inputs to capture, how to calculate the RICE score safely, and how to derive a RICE Level for quick comparison and governance.

The examples assume a standard RICE model where Reach and Effort are numeric inputs, and Impact and Confidence are valued options. You can adapt the same patterns if your organisation uses different scales or variations of the RICE framework.

1. What inputs do I need for RICE?

A typical RICE setup in Fluid consists of a small set of input properties and two calculated outputs. The inputs capture the four RICE factors, while the calculated fields derive the overall score and an optional band for reporting and governance.

Typical RICE setup:

  • Reach (Number)
    A numeric measure of how many users, customers, or items will be affected.

  • Impact (Valued Option)
    A controlled scale representing the magnitude of the impact (for example 1–5 or 1–10).

  • Confidence (Valued Option)
    A measure of how confident you are in the Reach and Impact estimates, expressed as a weighted scale.

  • Effort (Number)
    A numeric estimate of the work required (for example person-days, weeks, or relative effort).

  • RICE (Number, calculated)
    The calculated RICE score used for sorting and ranking.

  • RICELevel (Option, calculated)
    A derived band (for example Low / Medium / High) used for quick scanning, reporting, and governance rules.
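
As an illustration only (agree the actual weights within your organisation), the valued options might be configured as:

  • Impact: Minimal = 1, Low = 2, Medium = 3, High = 4, Massive = 5

  • Confidence: Low = 0.5, Medium = 0.8, High = 1.0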

2. What is the standard RICE score formula?

The standard RICE score is calculated by multiplying Reach, Impact, and Confidence, then dividing the result by Effort. In Fluid, this is implemented using an Expression that safely handles blank values and avoids division-by-zero errors.

Use this pattern when Impact and Confidence are Valued Options, and Reach and Effort are Number properties.

RICE score (copy/paste):

IF(
  COALESCE([Effort], 0) = 0,
  0,
  COALESCE([Reach], 0)
  * COALESCE([_Impact-Value], 0)
  * COALESCE([_Confidence-Value], 0)
  / [Effort]
)

This approach ensures that:

  • the score evaluates cleanly when some inputs are blank

  • Effort values of zero do not cause calculation errors

  • incomplete records do not inflate priority unintentionally

If you prefer a “default denominator” style, the alternative below is acceptable. Note, however, that COALESCE() only substitutes blank values: an Effort explicitly set to 0 will still cause a division-by-zero error.

COALESCE([Reach], 0)
* COALESCE([_Impact-Value], 0)
* COALESCE([_Confidence-Value], 0)
/ COALESCE([Effort], 1)

In most cases, the guarded IF() pattern is recommended for clarity and predictable behaviour.
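
As a worked example with illustrative values: Reach = 2000, Impact = 3, Confidence = 0.8 and Effort = 4 give (2000 × 3 × 0.8) / 4 = 1200.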

3. How do I band RICE into Low / Medium / High?

Once you have a numeric RICE score, you can derive a RICE Level to make prioritisation easier to scan and apply in reporting or governance rules. This is typically implemented as a calculated Option property using a CASE() expression.

You can approach this in two ways.

Option A — explicit thresholds (copy/paste):

CASE(
  [RICE] >= 0 AND [RICE] <= 1000, "Low",
  [RICE] >= 1001 AND [RICE] <= 5000, "Medium",
  [RICE] >= 5001, "High",
  "Low"
)

This style makes the boundaries very explicit, which can be useful when thresholds are tightly defined or reviewed as part of governance. Note, however, that RICE scores can be fractional, so consecutive integer ranges leave small gaps: a score of 1000.5 matches neither range and falls through to the default.

Option B — ordered thresholds (recommended for maintainability):

CASE(
  [RICE] >= 5001, "High",
  [RICE] >= 1001, "Medium",
  "Low"
)

This approach is generally easier to maintain, as thresholds can be adjusted without needing to restate full ranges. It also reduces the risk of gaps or overlaps when values change.

4. How should I set the thresholds for RICE Level?

There is no single “correct” set of thresholds for RICE Level. Thresholds should be chosen to reflect how your organisation scores work and how you intend to use the bands in decision-making.

When defining thresholds, consider:

  • Your scoring scales
    For example, whether Impact is scored 1–5 or 1–10 will materially affect the range of resulting RICE scores.

  • The distribution of real scores in your backlog
    Aim for thresholds that create meaningful separation, rather than clustering most items into a single band.

  • Governance intent
    For example, whether items marked “High” require additional review, or whether “Low” items are intentionally deprioritised.

A practical approach is to start with simple, intuitive thresholds, then review and adjust them after two to four prioritisation cycles once you can see how scores are clustering in practice.


WSJF Framework

This section shows how to implement the WSJF (Weighted Shortest Job First) prioritisation framework using custom properties and Expressions. It focuses on the practical setup — the inputs to capture, how to calculate the WSJF score safely, and how to derive a WSJF Level for reporting and governance.

The examples assume a standard WSJF model where Cost of Delay is derived from weighted inputs and Job Size is a numeric effort measure. The same patterns can be adapted if your organisation uses different weighting or terminology.

1. What inputs do I need for WSJF?

A typical WSJF setup consists of a small set of input properties and two calculated outputs. The inputs capture the drivers of Cost of Delay and Job Size, while the calculated fields derive the overall score and an optional band.

Typical WSJF setup:

  • User / Business Value (Valued Option)
    A weighted measure of the value delivered by completing the work.

  • Time Criticality (Valued Option)
    A measure of urgency or sensitivity to delay.

  • Risk Reduction / Opportunity Enablement (Valued Option)
    A weighted measure of how much risk is reduced or opportunity unlocked.

  • Cost of Delay (Number, calculated)
    The combined value of the WSJF drivers.

  • Job Size (Number or Valued Option)
    A numeric estimate of the size or effort required to complete the work.

  • WSJF (Number, calculated)
    The calculated WSJF score used for sorting and ranking.

  • WSJFLevel (Option, calculated)
    A derived band (for example Low / Medium / High) used for scanning, reporting, and governance rules.

2. How do I calculate Cost of Delay (CoD)?

In WSJF, Cost of Delay represents the total value lost by delaying a piece of work. It is typically calculated as the sum of several weighted factors that capture value, urgency, and risk.

In Fluid, Cost of Delay is usually derived by combining valued option inputs such as Business Value, Time Criticality, and Risk Reduction / Opportunity Enablement into a single numeric score.

Cost of Delay (copy/paste):

COALESCE([_BusinessValue-Value], 0)
+ COALESCE([_TimeCriticality-Value], 0)
+ COALESCE([_RiskReduction-Value], 0)

Each input contributes its numeric weight to the total Cost of Delay. Using COALESCE() ensures the calculation remains safe if one or more values are blank.
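
As a worked example with illustrative weights: Business Value = 8, Time Criticality = 5 and Risk Reduction = 3 give a Cost of Delay of 8 + 5 + 3 = 16.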

3. How do I calculate WSJF?

WSJF is calculated by dividing Cost of Delay by Job Size. This ensures that work delivering higher value sooner, relative to its size, is prioritised ahead of larger or lower-value items.

In Fluid, this is implemented using an Expression that safely handles blank values and avoids division-by-zero errors.

WSJF (copy/paste):

IF(
  COALESCE([JobSize], 0) = 0,
  0,
  COALESCE([CostOfDelay], 0) / [JobSize]
)

If JobSize is a Valued Option, use the valued numeric instead:

IF(
  COALESCE([_JobSize-Value], 0) = 0,
  0,
  COALESCE([CostOfDelay], 0) / [_JobSize-Value]
)

This pattern ensures the WSJF score remains predictable and suitable for sorting, filtering, and reporting.
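
Continuing the worked example above: a Cost of Delay of 16 with a Job Size of 5 gives WSJF = 16 / 5 = 3.2, while the same Cost of Delay with a Job Size of 2 scores 8 and therefore ranks higher.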

4. How do I band WSJF into Low / Medium / High?

Once you have a numeric WSJF score, you can derive a WSJF Level to make prioritisation easier to scan and apply in reporting or governance rules. This is typically implemented as a calculated Option property using a CASE() expression.

Thresholds should be chosen to match your scoring scale and the distribution of WSJF values in your backlog.

Example WSJF Level (copy/paste):

CASE(
  [WSJF] >= 10, "High",
  [WSJF] >= 5, "Medium",
  "Low"
)

As with RICE, start with simple thresholds and adjust them over time once you can see how WSJF scores cluster in practice.


Design and governance FAQs

1. How do I keep scoring consistent across teams?

Consistency matters more than precision when using scoring models for prioritisation. To keep scores comparable across teams and portfolios, apply the following controls:

  • Use Valued Options
    Define centrally agreed weights for subjective inputs such as Impact, Confidence, and Business Value.

  • Stabilise scoring scales
    Avoid frequent changes to scales, particularly mid-quarter or mid-planning cycle, as this breaks comparability.

  • Provide clear guidance
    Add short descriptions to each scale option so teams understand what a “3” versus a “5” represents in practice.

These controls help ensure that scores reflect genuine differences in priority, rather than differences in interpretation.

2. Should I use Reach as “users”, “accounts”, or “£ value”?

Choose a single unit of measure for Reach within a workspace or portfolio and make it explicit. Common approaches include:

  • Users or customers impacted (per month or per quarter).

  • Accounts impacted.

  • Transactions affected.

  • Revenue potential (only if this can be estimated consistently).

For prioritisation, consistency matters more than absolute precision. A simple, consistently applied measure will produce more reliable rankings than a highly detailed but inconsistently used one.

3. What are the most common implementation mistakes?

The following issues commonly undermine scoring models and lead to misleading results:

  • Returning a numeric score into a Text property instead of a Number.

  • Not handling blank values, allowing missing inputs to produce unexpected results.

  • Dividing by zero (for example when Effort or Job Size is set to 0).

  • Using the display label rather than the valued numeric when working with Valued Options.

Avoiding these mistakes helps keep prioritisation outputs stable, predictable, and trustworthy.
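
On the last point, use the valued reference rather than the option itself whenever a Valued Option feeds a calculation (property names here are illustrative):

Correct:   COALESCE([_Impact-Value], 0)
Incorrect: COALESCE([Impact], 0)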


Copy/paste templates

The expressions below bring together the recommended patterns from this article into a single reference section. You can copy and paste these directly into calculated properties, then adjust property names or thresholds to suit your configuration.

These templates assume:

  • numeric scores are stored in Number properties

  • weighted inputs use Valued Options

  • blank values are handled safely using COALESCE()

  • division-by-zero is explicitly guarded

RICE Score

IF(
  COALESCE([Effort], 0) = 0,
  0,
  COALESCE([Reach], 0)
  * COALESCE([_Impact-Value], 0)
  * COALESCE([_Confidence-Value], 0)
  / [Effort]
)

RICE Level

CASE(
  [RICE] >= 5001, "High",
  [RICE] >= 1001, "Medium",
  "Low"
)

Cost of Delay

COALESCE([_BusinessValue-Value], 0)
+ COALESCE([_TimeCriticality-Value], 0)
+ COALESCE([_RiskReduction-Value], 0)

WSJF

IF(
  COALESCE([JobSize], 0) = 0,
  0,
  COALESCE([CostOfDelay], 0) / [JobSize]
)

WSJF Level

CASE(
  [WSJF] >= 10, "High",
  [WSJF] >= 5, "Medium",
  "Low"
)
