Retail · Contact Centre Adviser Scorecard

One Source of Truth
for Adviser Performance

Team leaders were managing adviser performance with disconnected spreadsheets and inconsistent reports. I unified four data sources, built a three-tier Power BI scorecard, and gave every team leader a single trusted source of truth for every performance conversation.

Retail · Contact Centre
Power BI · DAX · SQL
TLs · Managers · Exec
End-to-end project
📉 50% · Reduction in manual reporting time
📈 +12% · SLA adherence improvement over 3 months
4→1 · Data sources unified into a single model
01 · The Problem

Four Systems,
No Agreed Numbers

The contact centre had no shortage of data. What it lacked was a consistent, trustworthy view of it. Team leaders were tracking adviser performance across manual spreadsheets built by different people with different formulas, alongside reports from multiple operational systems that rarely told the same story.

When a team leader sat down to review an adviser's performance, they might look at three different numbers for the same metric depending on which report they opened. Performance conversations were built on shaky ground. Interventions were reactive and sometimes misdirected.

The core problem wasn't a lack of reporting; it was a lack of trust in reporting. No one agreed on which number was right.

Before
  • Manual spreadsheets updated inconsistently by different TLs
  • Multiple disconnected system reports with conflicting figures
  • No single agreed source of truth for adviser metrics
  • Performance reviews based on whichever data felt most familiar
  • SLA tracking done retrospectively, not in real time
  • Workload imbalances invisible until advisers complained
After
  • Single Power BI model fed from four unified data sources
  • One agreed set of metrics used by TLs, managers, and exec
  • AHT, SLA, and workload visible in real time with daily refresh
  • Performance conversations backed by the same data every time
  • SLA breach patterns surfaced proactively by time slot and team
  • Workload distribution visible and actionable
02 · Data Investigation

Three Findings That
Reshaped the Brief

Before any dashboard could be built, the data had to be made trustworthy. I analysed records from four operational systems (call management, quality, workforce management, and CRM), then worked with IT to clean and integrate them into a unified dataset.

What emerged during that process wasn't just dirty data. It was three findings that fundamentally shaped what the dashboard needed to show.

Finding 01
Systems Disagreed on the Same Metrics
AHT figures from the call management system didn't match those in the workforce tool. Neither matched what TLs had in their spreadsheets. The root cause: different systems were measuring slightly different things and calling them the same name. Definitions had to be standardised before any metric could be trusted.
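For illustration, this is what an agreed definition can look like once it lives in one place as a single DAX measure. A minimal sketch, not the production logic: the Calls table and its TalkSeconds / HoldSeconds / WrapSeconds columns are hypothetical names, and whether wrap time counts toward AHT was exactly the kind of decision that had to be standardised first.

  -- Sketch of a standardised AHT measure: talk + hold + wrap over calls handled.
  -- Calls[TalkSeconds], Calls[HoldSeconds], Calls[WrapSeconds] are hypothetical
  -- column names; the real model may differ.
  Avg Handle Time (s) =
  DIVIDE (
      SUM ( Calls[TalkSeconds] )
          + SUM ( Calls[HoldSeconds] )
          + SUM ( Calls[WrapSeconds] ),
      COUNTROWS ( Calls )
  )
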
Finding 02
Workload Was Heavily Unbalanced
When calls handled were mapped per adviser against shift hours, a clear pattern emerged: a small group of advisers was absorbing a disproportionate volume of contacts. This wasn't visible in any existing report. The advisers carrying the heaviest loads also had the highest AHT, not because they were slower, but because they were handling more complex escalations.
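A measure along these lines is enough to surface the imbalance: contacts handled per rostered hour, plotted per adviser. The Shifts table and its RosteredHours column are assumed names for whatever the workforce system actually exposes.

  -- Sketch of a workload-balance measure: contacts per rostered hour.
  -- Shifts[RosteredHours] is a hypothetical column from the workforce data.
  Calls per Rostered Hour =
  DIVIDE (
      COUNTROWS ( Calls ),
      SUM ( Shifts[RosteredHours] )
  )

Dropped onto a bar chart with adviser on the axis, the outliers become immediately visible.
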
Finding 03
SLA Breaches Clustered in Specific Slots
SLA adherence failures weren't random. They concentrated in two time windows, late morning and post-lunch, and within specific teams. This pattern was invisible in the existing reports, which showed only monthly aggregate figures. Granular time-slot analysis turned an abstract "we miss SLA" problem into a schedulable, fixable one.
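The heatmap behind this finding needs only two ingredients: an adherence measure and a time-slot bucket to slice it by. A sketch, assuming a Calls[AnsweredWithinSLA] flag and an ArrivalTime timestamp (both illustrative names, and the slot boundaries are illustrative too):

  -- Sketch of the SLA adherence measure used in the heatmap.
  SLA Adherence % =
  DIVIDE (
      CALCULATE ( COUNTROWS ( Calls ), Calls[AnsweredWithinSLA] = TRUE () ),
      COUNTROWS ( Calls )
  )

  -- Hypothetical calculated column bucketing calls into heatmap slots.
  Time Slot =
  VAR CallHour = HOUR ( Calls[ArrivalTime] )
  RETURN
      SWITCH (
          TRUE (),
          CallHour < 10, "Early morning",
          CallHour < 12, "Late morning",
          CallHour < 14, "Post-lunch",
          "Afternoon"
      )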

None of these findings required complex modelling. They required joining the right tables and asking the right questions, neither of which had been done before.

03 · Dashboard Design

Three Tiers,
One Model

The dashboard needed to serve three different audiences with different needs and different levels of data literacy. Rather than building three separate reports, I designed a single Power BI model with three distinct pages, each surfacing the right level of detail for the right role.

Tier 1
Team Leaders
  • Individual adviser scorecards with RAG status
  • AHT, calls handled, SLA compliance per adviser
  • Workload distribution across their team
  • Drill-through to individual performance detail
Page 1: Adviser Detail
Tier 2
Operations Managers
  • Team-level comparison and ranking
  • SLA breach heatmap by time slot and team
  • Trend lines for AHT and quality scores
  • Adviser performance distribution charts
Page 2: Team Overview
Tier 3
Exec / Directors
  • Single-page headline KPIs with month-on-month (MoM) movement (sketched below)
  • SLA adherence vs target trend
  • CSAT and NPS correlation with operational metrics
  • No drill-through; summary only, high trust
Page 3: Executive Summary
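The "vs last month" indicators on the executive page all come from one reusable pattern. A sketch, assuming a standard date table (Dates) marked as the model's date table and related to the fact tables:

  -- Sketch of a month-on-month delta, reusable across the headline KPIs.
  -- 'Dates' is an assumed marked date table; [SLA Adherence %] is the
  -- base measure sketched earlier.
  SLA Adherence MoM =
  VAR CurrentMonth = [SLA Adherence %]
  VAR PriorMonth =
      CALCULATE ( [SLA Adherence %], DATEADD ( Dates[Date], -1, MONTH ) )
  RETURN
      CurrentMonth - PriorMonth
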
Adviser Scorecard · Team Overview · March 2024

Avg AHT: 4m 32s (↑ 8s vs last month)
SLA Adherence: 87.4% (↑ 4.2% vs last month)
Calls Handled: 14,823 (↓ 2.1% vs last month)
CSAT Score: 82% (↑ 1.5% vs last month)
FCR Rate: 71% (→ flat vs last month)

SLA Adherence by Time Slot: Breach Heatmap
Calls Handled per Adviser

Adviser Performance Table · Sortable by any KPI

Adviser        Calls   AHT      SLA %   CSAT   Status
A. Williams    312     4m 12s   94%     88%    On Track
B. Chen        481     5m 44s   78%     79%    Review
C. Okafor      267     4m 05s   96%     91%    On Track
D. Patel       498     6m 02s   71%     74%    At Risk

📸 Power BI dashboard screenshots · Insert exports here

A key design decision was colour discipline. RAG status (red/amber/green) was applied only to performance against agreed thresholds, never as a comparison between advisers. An adviser flagged amber wasn't being compared to their peers; they were being compared to their own target. This distinction mattered in conversations with team leaders, who were cautious about how the data would be used.
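That rule, thresholds not peers, is straightforward to encode in the measure itself. A sketch, assuming per-adviser targets live in a hypothetical Targets table; the five-point amber band is an illustrative assumption, not the agreed threshold:

  -- Sketch of a RAG status measure: each adviser against their own agreed
  -- target, never against other advisers. Targets[SLATarget] and the
  -- 0.05 amber band are illustrative assumptions.
  SLA RAG =
  VAR Actual = [SLA Adherence %]
  VAR AgreedTarget = SELECTEDVALUE ( Targets[SLATarget] )
  RETURN
      SWITCH (
          TRUE (),
          Actual >= AgreedTarget, "Green",
          Actual >= AgreedTarget - 0.05, "Amber",
          "Red"
      )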

Row-level security ensured team leaders could only see their own team's data. Managers could see all teams. The exec summary had no drill-through at all; trust in the headline numbers was the goal, not access to individual records.
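In Power BI, that restriction is a DAX filter on a security role rather than a set of separate reports. A minimal sketch, assuming the Advisers table carries a hypothetical TeamLeaderEmail column mapping each adviser to their TL:

  -- Sketch of a row-level security filter for a 'Team Leader' role,
  -- applied to the Advisers table. TeamLeaderEmail is an assumed column.
  [TeamLeaderEmail] = USERPRINCIPALNAME ()

Managers sit in a role with a broader (or no) filter, and the exec page exposes no drill-through paths, so RLS and page design enforce the same trust boundary.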

04 · The Results

Three Months Post-Launch:
SLA Up 12%, Reporting Time Down 50%

50% · Less time on manual reporting. Hours previously spent compiling spreadsheets were redirected to actual performance management.
+12% · SLA adherence improvement. Measured over three months post-launch, and driven by targeted scheduling changes in the breach time slots identified in the data.
4→1 · Data sources unified. Call management, quality, workforce, and CRM: one model, one version of the truth.

The 12% SLA improvement didn't come from the dashboard alone; it came from decisions the dashboard made possible. Once the time-slot breach pattern was visible, operations could adjust scheduling in those windows. Once workload imbalance was visible, TLs could redistribute contacts more fairly.

The deeper outcome was cultural. Team leaders who had previously avoided data, because it was unreliable or took too long to compile, started opening the dashboard at the start of each shift. That behaviour change was the real measure of success.