RetailHub

A retail analytics SaaS for multi-store operators. Designed and built end-to-end with AI tools.

Role: Product designer + builder
Type: B2B SaaS
Period: 2026
Status: Live (Private)
Stack: Next.js · TypeScript · Mantine · Supabase
Fig 1 · RetailHub dashboard.

Context

Multi-store retail operators in the UK make decisions on stale data. Channel-level revenue lives in POS exports. Store managers report numbers through WhatsApp groups. Weekly summaries arrive Mondays describing last week. Nobody has a single view of what happened last night across every store by 9am the next morning.

The cost of that gap is real. An operator running ten franchise units in London needs to know if delivery sales dropped at one store on Tuesday so they can investigate Wednesday morning, not next Monday afternoon. The data exists. The visibility doesn't.

RetailHub is the dashboard for those operators. The first release focused on three primitives operators already think in: channels (in store, delivery, aggregator), stores, and time-bounded events.

Approach

The first decision was about what not to build. Most retail-analytics tools start by asking the operator to map their data model first (channels, stores, products, SKUs) before showing them a single number. That's the fastest way to lose attention from someone running a business rather than configuring a tool.

We inverted that. The default view assumes the most common shape of a franchise network (3 to 15 stores, 2 to 4 sales channels, GBP) and renders something useful immediately on first login. Configuration earns its way in only when the operator outgrows the defaults.
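The defaults idea can be sketched as a small config object. This is a hypothetical shape, not the shipped code; the bounds come from the case study, the names are assumptions:

```typescript
// Hypothetical zero-config defaults (bounds from the case study, names assumed).
// The goal: render a useful view before any mapping is done.
const DEFAULT_NETWORK = {
  currency: "GBP",
  storeRange: { min: 3, max: 15 },
  channelRange: { min: 2, max: 4 },
  channels: ["in-store", "delivery", "aggregator"],
} as const;

// Configuration "earns its way in" only once a tenant outgrows the defaults.
function needsConfiguration(storeCount: number, channelCount: number): boolean {
  return (
    storeCount > DEFAULT_NETWORK.storeRange.max ||
    channelCount > DEFAULT_NETWORK.channelRange.max
  );
}
```

The design choice this encodes: the configuration screen is a fallback, gated behind a predicate, rather than a mandatory first step.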

The second decision was about grain. Daily roll-ups are too coarse for cross-store comparison; per-transaction data is too noisy. The dashboard settled on hourly buckets, surfaced through a stacked-bar channel breakdown: a single component the rest of the page revolves around.
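The hourly grain amounts to a simple roll-up. A minimal sketch, with field names that are assumptions rather than the shipped schema:

```typescript
// Per-transaction rows collapse into per-store, per-channel, per-hour revenue
// totals: the grain the stacked-bar breakdown renders. Field names assumed.
interface Txn {
  storeId: string;
  channel: string;
  at: string;          // ISO 8601 timestamp, e.g. "2026-03-01T09:42:10Z"
  amountPence: number; // GBP minor units avoid floating-point drift
}

function hourlyBuckets(txns: Txn[]): Map<string, number> {
  const buckets = new Map<string, number>();
  for (const t of txns) {
    const hour = t.at.slice(0, 13); // truncate to the hour: "2026-03-01T09"
    const key = `${t.storeId}|${t.channel}|${hour}`;
    buckets.set(key, (buckets.get(key) ?? 0) + t.amountPence);
  }
  return buckets;
}
```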

Process

Designing the channel breakdown

The channel breakdown is the most-looked-at component on the dashboard, so most of the design effort sat there. Two constraints turned out to matter more than visual polish.

First, the smallest channel has to stay visible at any scale. An operator needs to see that a channel exists even when it represents 1.4% of revenue, and a naive proportional bar makes segments that small disappear at typical retail scales, where one channel dominates.

Second, the legend needs to read in the operator's currency, not just in percentages. "Channel B is 3% of revenue" tells you nothing useful at 3am. "Channel B is £247 of £8,200" tells you whether to act.

The shipped component enforces a 3% floor on segment widths and renders both GBP value and percentage in a right-aligned legend.
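Both constraints are simple to express in code. The following is an illustrative sketch, not the shipped component; names and exact renormalisation are assumptions. Sub-floor segments are pinned to the minimum width and the remaining width is shared proportionally among the larger segments:

```typescript
// Sketch of the 3% segment floor and currency-first legend (names assumed).
const MIN_PCT = 3;

interface Segment {
  channel: string;
  revenuePence: number;
}

function segmentWidths(segments: Segment[]): number[] {
  const total = segments.reduce((sum, s) => sum + s.revenuePence, 0);
  if (total === 0) return segments.map(() => 100 / segments.length);

  const raw = segments.map((s) => (s.revenuePence / total) * 100);
  // Width reserved by floored (sub-3%) segments; larger segments share the rest.
  const reserved = raw.filter((w) => w < MIN_PCT).length * MIN_PCT;
  const freeRaw = raw.filter((w) => w >= MIN_PCT).reduce((a, b) => a + b, 0);
  const scale = (100 - reserved) / freeRaw;

  return raw.map((w) => (w < MIN_PCT ? MIN_PCT : w * scale));
}

// Legend reads in the operator's currency, not just percentages.
function legendLabel(s: Segment, totalPence: number): string {
  const gbp = new Intl.NumberFormat("en-GB", {
    style: "currency",
    currency: "GBP",
  }).format(s.revenuePence / 100);
  const pct = ((s.revenuePence / totalPence) * 100).toFixed(1);
  return `${gbp} · ${pct}%`;
}
```

With the typical shape of a franchise network (one dominant channel, a handful of small ones), the floor never consumes more width than the dominant segment can give up, so this sketch skips that edge case.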

The Cash Watch event feed

Cash Watch is the dashboard's live tape: a chronological feed of significant events across stores. The design problem was triage. Operators don't want every till transaction; they want the events that matter (large refunds, cash-handling exceptions, sudden channel shifts, cancellations).

The feed surfaces only events that cross a configurable threshold, each with a one-line summary, a store badge, and a timestamp. Everything else stays in the underlying log. Getting this triage right is the difference between a feed an operator scans daily and one they ignore.
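The triage rule can be sketched in a few lines. Event types and the threshold shape here are assumptions for illustration, not the shipped schema:

```typescript
// Only events crossing the operator-configured threshold for their type reach
// the feed; everything else stays in the underlying log. Types assumed.
type EventType = "refund" | "cash_exception" | "channel_shift" | "cancellation";

interface StoreEvent {
  storeId: string;
  type: EventType;
  amountPence: number; // magnitude of the event
  occurredAt: string;  // ISO 8601 timestamp
}

type Thresholds = Record<EventType, number>;

function triage(events: StoreEvent[], thresholds: Thresholds): StoreEvent[] {
  return events
    .filter((e) => e.amountPence >= thresholds[e.type])
    .sort((a, b) => b.occurredAt.localeCompare(a.occurredAt)); // newest first
}
```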

Mock-data parity

Mock data that drifts from production shape is the silent killer of dashboard UIs. I built a seed script that generates 60 days of realistic Nisa Local sales across three London stores using fixed UUIDs, so design and engineering work against the same mock state in every local environment. Cleanup scripts delete the seeded rows by those fixed IDs, letting FK cascades clear child records, so re-runs are safe and never touch real customer data. Two days of "infra" work that paid back every sprint after.
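The core of the idea fits in a short sketch, with illustrative IDs and values (not the real seed script): fixed UUIDs make every environment identical and give cleanup a stable handle, while a seeded PRNG makes the "random" sales reproducible.

```typescript
// Deterministic-seed sketch (IDs and revenue ranges illustrative).
const STORE_IDS = [
  "00000000-0000-4000-8000-000000000001",
  "00000000-0000-4000-8000-000000000002",
  "00000000-0000-4000-8000-000000000003",
];

// mulberry32: a common tiny seeded PRNG, so generated sales are identical
// on every machine that runs the script.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

function seedRows(days = 60) {
  const rand = mulberry32(42);
  const rows: { storeId: string; day: number; revenuePence: number }[] = [];
  for (let day = 0; day < days; day++) {
    for (const storeId of STORE_IDS) {
      // Daily revenue between £5,000 and £8,000, deterministic per (day, store).
      rows.push({ storeId, day, revenuePence: 500000 + Math.round(rand() * 300000) });
    }
  }
  return rows;
}
```

Cleanup then reduces to deleting by the known store IDs and letting the database's FK cascade remove dependent rows.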

From wireframes to component system

The dashboard was built component-first, not page-first. ChannelBreakdown, KPI cards, and Cash Watch event rows were built as standalone units with stable APIs before any layout was finalized. This let layout decisions happen at the page level without breaking lower-level primitives, and made it possible to introduce a sidebar-and-content shell late in the project without redoing component work.

Build notes

Stack: Next.js, Mantine UI, Supabase, TypeScript.

Build approach: designed and built end-to-end with AI tools across the workflow (code generation, prototyping, refactoring, test scaffolding). One designer, one AI pair-builder, no separate engineering team for the first release.

The shift this caused: working with an AI pair-builder changed which decisions were expensive. Pixel-level UI adjustments and tactical refactors became cheap. Architectural decisions (data model, auth and RBAC shape, chart library choice, seed-data strategy) became the work that actually merited the time.

Most of the design value lived in those architectural calls, not on the visual surface. The transferable lesson: treat the AI as fast at tactics and weak at strategy, and the time allocation becomes obvious.

Outcome

The first release shipped the channel breakdown, KPI cards, and the Cash Watch event feed on the web, alongside auth, onboarding, and a multi-role access model.

Roles. The role model maps onto how multi-store operators actually delegate access.

Auth and isolation. Authentication, role enforcement, and tenant data isolation are part of the architecture, not an afterthought. Access boundaries are enforced at the data layer, not at the UI. Implementation specifics are deliberately kept out of this case study.

Onboarding. New accounts route through tenant setup, store mapping, and channel calibration before the dashboard unlocks, so the default view is meaningful from the first session rather than asking the operator to configure their way in.

The dashboard is live, privately, for the first client. The component library, mock-data system, and access scaffolding are the foundation for the next round of features: per-store deep-dives, alerting, weekly digests, and the employee app.

What this case study deliberately omits: vanity metrics. RetailHub is too early for honest "X% engagement uplift" claims. The shipped surface, the architectural decisions, and the component primitives are the work. That's what this study shows.

Reflection

If I were starting again, I'd push the data-model decisions earlier. I spent more time than I should have on visual polish in week one, when the schema was still moving. Polish always finds its time; data shape calcifies fast. The lesson holds: when in doubt, settle the data shape before opening Figma.