In Australian grocery, shelf space is the scarcest resource a CPG brand competes for. Coles and Woolworths together control roughly two-thirds of national grocery sales, and both retailers run tightly managed category reviews that determine which products get space, how much of it, and where. For brands that don't earn their position on shelf — or can't justify it with data — the review process is existential.

Shelf space optimization is the discipline of ensuring that every facing, every position, and every linear centimetre works as hard as possible for the category. Done well, it lifts category performance, which is why retailers engage in it. Done badly — or not at all — it produces planograms that look logical on paper but underperform in the aisle. The gap between a planogram that was designed and a planogram that was validated is where most shelf performance is won or lost.

How Coles and Woolworths allocate shelf space

Both major retailers approach shelf allocation through a structured category management process. Space is not allocated product by product — it's allocated at the category level first, then distributed across the range based on a combination of sales velocity, strategic importance, supplier negotiation, and format requirements.

Sales-weighted allocation as the baseline

The most common starting point is space-to-sales alignment: a product that represents 15% of category sales should ideally hold roughly 15% of category shelf space. This principle drives the allocation logic in planogram software such as Blue Yonder (formerly JDA) and RELEX, which both major retailers use to build and maintain category planograms.

In practice, pure sales-weighted allocation is a floor, not a ceiling. High-growth products are often given more space than their current sales justify — a forward investment in velocity. New product introductions get a minimum viable facing count to give them a chance to establish before the next review. And premium or high-margin SKUs may hold disproportionate space relative to volume because of their contribution to basket value.
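
To make the arithmetic concrete, here is a minimal sketch of that allocation logic in Python. All SKU names, shares, and multipliers are invented for illustration, and the growth weighting and NPI floor are simplified stand-ins for the richer rules real planogram tools apply:

```python
# Minimal sketch of sales-weighted facing allocation with the common
# overrides described above. All SKU names and figures are invented;
# real planogram tools layer on fixture, pack-size and merchandising
# rules that this sketch ignores.

TOTAL_FACINGS = 40      # total facings the retailer has given the category
MIN_NPI_FACINGS = 2     # minimum viable facings for a new product

skus = [
    # (name, share of category sales, growth weighting, new product?)
    ("Brand A 500g",      0.28, 1.00, False),
    ("Brand B 500g",      0.15, 1.00, False),
    ("Challenger 400g",   0.08, 1.25, False),  # high-growth SKU, weighted up
    ("NPI 350g",          0.01, 1.00, True),   # new introduction, floored below
    ("Private label 1kg", 0.48, 1.00, False),
]

def allocate(skus, total_facings):
    """Distribute facings in proportion to growth-weighted sales share."""
    weights = {name: share * growth for name, share, growth, _ in skus}
    total_weight = sum(weights.values())
    allocation = {}
    for name, _, _, is_npi in skus:
        facings = round(total_facings * weights[name] / total_weight)
        if is_npi:
            facings = max(facings, MIN_NPI_FACINGS)  # NPI floor
        allocation[name] = max(facings, 1)  # every ranged SKU keeps a facing
    return allocation

for name, facings in allocate(skus, TOTAL_FACINGS).items():
    print(f"{name:18} {facings:2d} facings")
```

Note that the rounding and the NPI floor push this illustrative total to 42 facings against a 40-facing category. Reconciling that overrun, by taking facings from somewhere else, is precisely the zero-sum negotiation a range review exists to resolve.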

Format and fixture constraints

Not all space decisions are data-driven. Physical fixture dimensions, bay widths, shelf heights, and chiller configurations all constrain what's possible. A planogram for a compact Woolworths Metro store faces materially different space constraints from one for a full-format regional Coles store. Category managers working across multiple formats must develop separate space strategies for each, not a single national planogram applied uniformly.
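
The constraint is simple arithmetic, as a quick sketch shows. The bay width, product widths, and facing counts below are all hypothetical:

```python
# Hypothetical fixture-fit check. All widths are illustrative; real
# fixtures lose usable width to dividers, air gaps and clearances.

BAY_WIDTH_MM = 900  # e.g. a single bay in a compact metro format

proposed = [
    # (sku, facings, unit width in mm)
    ("Brand A 500g",    6, 85),
    ("Challenger 400g", 3, 72),
    ("NPI 350g",        2, 65),
]

linear_used = sum(facings * width for _, facings, width in proposed)
print(f"Linear used: {linear_used} mm of {BAY_WIDTH_MM} mm")
if linear_used > BAY_WIDTH_MM:
    print("Over-allocated: this plan will not fit the fixture as drawn.")
```

Here the plan uses 856 mm of a 900 mm bay and fits. Run the same facings against a 600 mm bay and it fails by more than 250 mm, which is why a single national planogram cannot serve both formats.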

Metcash-supplied independents add another layer of complexity. IGA stores negotiate range decisions through a different structure than the centralised Coles and Woolworths model, giving independent retailers more local flexibility — but also producing more variability in how space is actually allocated store by store.

Space is allocated at the category level first. Before any individual brand argues for more facings, the retailer has already determined how much total linear space the category receives. The supplier's job is to show that their allocation within that fixed total drives better category outcomes — not just better brand outcomes.

The role of category captains

In many categories, retailers formally designate a category captain — typically the largest or most analytically capable supplier — to lead category strategy, range recommendations, and planogram development. Category captains don't control shelf allocation decisions; retailers make those calls. But they have privileged access to the process: the ability to recommend range structure, present shelf configurations, and shape the retailer's view of category opportunity.

The category captain relationship creates a structural advantage for dominant brands. A category captain presenting a planogram recommendation can design scenarios that favour their own product — more prominent positions, increased facings, adjacency with high-velocity complements. Retailers are aware of this dynamic and apply scrutiny, but the analytical depth a category captain brings is often genuinely useful, which means the relationship persists even under that scrutiny.

What this means for challenger brands

For mid-market and emerging CPG brands that don't hold category captain status, the shelf allocation process is less transparent and more competitive. The path to more space requires demonstrating, credibly, that the proposed allocation improves the category — not just the brand. A challenger brand asking for three facings instead of two needs to show the retailer's category buyer why that change benefits the shopper and the category, not just why the brand deserves it.

That's a higher evidentiary bar than many brands are prepared to meet. Most range review submissions are built from scan data and supplier-produced category analyses — useful, but not differentiated. Every other brand in the review has access to similar data. The suppliers who win shelf space are the ones who arrive with evidence the retailer doesn't already have.

The manual planogram review process

Between formal range reviews, planograms are updated through a continuous review process managed by the retailer's category team. A typical cycle runs from supplier submission, through category team assessment and planogram revision, to in-store implementation.

The manual review process is slow by design. Retailers make thousands of planogram decisions across hundreds of categories, and the review cadence is built around manageable workload, not speed. For brands waiting on a decision, a typical category review cycle can run six to twelve weeks from submission to implementation.

The review cadence is fixed. The quality of your submission isn't. You can't accelerate the retailer's timeline. You can control how compelling your evidence is when your category comes up for review. Brands that treat every review as a routine process get routine outcomes.

How virtual shelf testing validates space allocation decisions

The fundamental problem with the standard shelf allocation process is that space decisions are made on retrospective data — what sold, not what will sell under a different configuration. By the time a planogram change is implemented and its performance is measured, months have passed and the next review cycle is approaching.

Virtual shelf testing inserts a validation step before implementation. By replicating the proposed planogram in a digital shelf environment and running structured shopper tests against it, brands can generate prospective evidence — behavioural data showing how the proposed configuration affects navigation, dwell time, and purchase intent — before the review, not after.
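
As an illustration of what "structured" means here, the sketch below lays out a minimal A/B test design in Python. Every field name and planogram identifier is hypothetical, not a real platform API; the point is the shape of the decision being validated:

```python
# Illustrative A/B structure for a virtual shelf test. All field names
# and identifiers here are hypothetical, not a real integration.

from dataclasses import dataclass

@dataclass
class ShelfTestCell:
    name: str           # which variant a shopper is randomly assigned to
    planogram_id: str   # the shelf the cell renders: current vs proposed
    facings: int        # facings for the product under test
    position: int       # horizontal shelf position of the product

cells = [
    ShelfTestCell("control",  "metro-current",  facings=2, position=4),
    ShelfTestCell("proposed", "metro-reset-v2", facings=3, position=2),
]

# Each cell is shown to a matched shopper sample; the comparison covers
# navigation time, category dwell, and stated purchase intent per cell.
```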

What virtual testing produces that scan data doesn't

Scan data tells you what sold at a given price in a given period. It doesn't tell you why. It doesn't tell you whether a shopper who didn't buy your product noticed it but rejected it, or never saw it at all. It doesn't tell you how much of your product's underperformance is a shelf position problem versus a packaging problem versus a price problem.

Virtual shelf testing separates these variables. In a structured shopper test using real Australian planogram data, you can see exactly where shoppers look, how long they spend in the category, which positions generate the most attention, and what happens to purchase intent when your product moves from position 4 to position 2. That's causal evidence, not correlation — and it's the kind of data that changes a category buyer's view of a shelf allocation proposal.
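
As a sketch of what that evidence can look like once the test is run, the snippet below compares purchase intent between the two positions with a pooled two-proportion z-test. The shopper counts are invented for illustration; a real analysis would use shopper-level exports from whatever platform ran the test:

```python
# Hypothetical read-out of a position test: did moving the product from
# shelf position 4 to position 2 lift purchase intent? Counts invented.

import math

def two_proportion_ztest(buys_a, n_a, buys_b, n_b):
    """Pooled two-proportion z-test for a difference in conversion."""
    p_a, p_b = buys_a / n_a, buys_b / n_b
    pooled = (buys_a + buys_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return p_a, p_b, (p_b - p_a) / se

# Control cell: position 4.  Test cell: position 2.  400 shoppers each.
p4, p2, z = two_proportion_ztest(buys_a=54, n_a=400, buys_b=82, n_b=400)
print(f"Purchase intent {p4:.1%} -> {p2:.1%}, z = {z:.2f}")
# |z| > 1.96 is significant at the 5% level (two-sided); here z is ~2.64.
```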

Fitting testing into the review cycle

The practical value of virtual shelf testing is its speed. A formal shopper study using traditional eye-tracking or in-store observation can take six to eight weeks. A virtual shelf test using an existing planogram dataset can produce results in days. That turnaround fits inside a category review preparation window — meaning you can test, refine, and arrive at the review with validated evidence rather than a theoretical argument.

For brands managing submissions across multiple retailers and formats, this matters even more. A space allocation that works in a Woolworths metropolitan format may not work in a Coles regional store with different fixture dimensions. Virtual testing against the specific planogram you're pitching for — not a generic template — produces evidence that's directly relevant to the decision being made.

Where ShelfLab fits in the allocation process

ShelfLab is a virtual store testing platform built specifically for Australian grocery. It uses real Coles, Woolworths, and Metcash planogram data to replicate the shelf environments where your product competes — and runs structured shopper tests against proposed configurations to produce the behavioural evidence your category review needs.

It doesn't replace the planogram tools or the scan data that category managers already use. It adds the pre-validation layer those tools can't produce: prospective shopper behaviour data tied to a specific allocation scenario. For CPG brands looking to strengthen their range review submissions, or challenger brands trying to break through against entrenched category captains, that evidence is the difference between an argument built on the same scan data every competitor holds and one validated with prospective shopper behaviour before the review.

Explore a live Australian grocery planogram to see how the platform works — or request a demo tailored to your category and the specific retailer you're presenting to.