Why Portfolio Benchmarking Breaks Down in Mining

Benchmarking is widely used across mining portfolios to assess performance, guide intervention and support capital and improvement decisions. In most organisations, it is well-established, data-rich and embedded within routine governance and reporting processes.
Yet at group level, where benchmarking informs capital allocation and intervention decisions across multiple assets, confidence in benchmarking outcomes is often uneven.
This is not primarily a question of data quality or analytical capability. It reflects a more fundamental issue: the way benchmarking is commonly understood and applied in mining does not align with the role it is increasingly expected to play at portfolio and governance level.

Benchmarking is asked to support decisions it was not designed for

At site level, benchmarking is often treated as comparison: how one operation performs relative to another mine, a peer group or a global dataset. That framing is familiar and intuitive, and it can be useful for local insight.
At portfolio and board level, however, benchmarking is increasingly relied upon to support decisions with far greater consequence: capital allocation, performance intervention, improvement expectations and, increasingly, audit and assurance scrutiny of governance, controls and decision processes.
Under these conditions, benchmarking is no longer just informational. It becomes part of the decision environment. When the underlying assumptions do not hold, confidence erodes quickly.
The common misunderstanding of operational benchmarking

In mining, operational benchmarking is frequently assumed to mean comparison with other mines or global peers. Performance is interpreted through relative position: quartiles, rankings or deviations from external averages.
That assumption is widespread. It is also fragile.
Operational performance is shaped by the specific conditions under which an asset operates: mining method, physical constraints, equipment configuration, workforce model, maturity and local operating realities. Where those conditions differ materially, external comparison provides limited insight into what is reasonable, achievable or sustainable at a given site.
As a result, benchmarking intended to inform operational performance can become disconnected from operating reality. Discussion shifts toward why an operation is “different”, rather than whether assumptions, expectations and decisions are grounded in the conditions that actually exist.
Why portfolio application exposes the weakness

These limitations are often manageable at site level. At portfolio level, they compound.
When benchmarking is applied across diverse assets, differences in context dominate interpretation. Apparent performance gaps become contested. Intervention thresholds vary implicitly. Expectations drift. Decisions are explained rather than governed.
Under these conditions, benchmarking struggles to provide a stable reference for ownership decisions. What appears objective becomes negotiable. What was intended to support consistency instead amplifies debate.
Scrutiny has changed the tolerance for ambiguity

As capital discipline tightens and ESG, regulatory and assurance expectations increase, owners are expected to demonstrate not only what performance looks like, but why the assumptions and reference points applied were reasonable and supportable at the time decisions were made. That expectation extends to cases where sustainability factors may reasonably affect cash flows, access to finance or cost of capital.
Benchmarking that relies on assumed comparability is difficult to defend under this scrutiny. When performance narratives are tested, the question is no longer whether an operation is above or below a peer set, but whether the reference used to judge performance reflects the conditions under which the asset actually operates.
Where that connection cannot be made clearly, benchmarking loses credibility at precisely the point it is most relied upon.
When benchmarking cannot hold, governance becomes exposed

Portfolio benchmarking is often expected to provide clarity, alignment and confidence. When it cannot do so, governance absorbs the strain.
Debate shifts from decisions to interpretation. Accountability becomes diffuse. Improvement remains dependent on effort and momentum rather than on method. Over time, reliance on benchmarking increases even as confidence in it diminishes, making it difficult to maintain an assurance-grade basis for portfolio decisions.
This is not a failure of intent, capability or commitment. It is a failure of alignment between what benchmarking is assumed to represent and what it is actually capable of supporting under modern ownership conditions.
A question for owners

The question for owners is not whether benchmarking should be used. It is whether the benchmarking relied upon can provide a defensible reference for decisions across assets with materially different operating conditions.
Where it can, benchmarking supports governance.
Where it cannot, it becomes a source of contention rather than control.