Team leaders were managing adviser performance with disconnected spreadsheets and inconsistent reports. I unified four data sources, built a three-tier Power BI scorecard, and gave every team leader a single trusted source of truth for every performance conversation.
The contact centre had no shortage of data. What it lacked was a consistent, trustworthy view of it. Team leaders were tracking adviser performance across manual spreadsheets built by different people with different formulas, alongside reports from multiple operational systems that rarely told the same story.
When a team leader sat down to review an adviser's performance, they might look at three different numbers for the same metric depending on which report they opened. Performance conversations were built on shaky ground. Interventions were reactive and sometimes misdirected.
The core problem wasn't a lack of reporting; it was a lack of trust in reporting. No one agreed on which number was right.
Before any dashboard could be built, the data had to be made trustworthy. I analysed records from four operational systems (call management, quality, workforce management, and CRM), then worked with IT to clean and integrate them into a unified dataset.
What emerged during that process wasn't just dirty data. It was three findings that fundamentally shaped what the dashboard needed to show.
None of these findings required complex modelling; they required joining the right tables and asking the right questions, neither of which had been done before.
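As a rough sketch of what "joining the right tables" looked like, the integration step can be thought of as a series of outer joins on a shared adviser identifier. The table and column names below are hypothetical, chosen only to illustrate the pattern, not taken from the actual systems:

```python
import pandas as pd

# Hypothetical extracts from the four systems, keyed on a shared adviser ID.
calls = pd.DataFrame({"adviser_id": [101, 102], "calls_handled": [48, 52]})
quality = pd.DataFrame({"adviser_id": [101, 102], "qa_score": [92.5, 88.0]})
wfm = pd.DataFrame({"adviser_id": [101, 102], "adherence_pct": [96.1, 91.4]})
crm = pd.DataFrame({"adviser_id": [101, 103], "cases_resolved": [30, 35]})

# Outer joins keep advisers who are missing from any one system (as NaN rows)
# instead of silently dropping them - one of the quickest ways to surface
# inconsistencies between sources.
unified = (
    calls.merge(quality, on="adviser_id", how="outer")
         .merge(wfm, on="adviser_id", how="outer")
         .merge(crm, on="adviser_id", how="outer")
)
```

Using outer rather than inner joins is the detail that matters here: mismatched adviser records show up as gaps in the unified table rather than disappearing, which is exactly the kind of question the original reports never asked.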
The dashboard needed to serve three different audiences with different needs and different levels of data literacy. Rather than building three separate reports, I designed a single Power BI model with three distinct pages, each surfacing the right level of detail for the right role.
📸 Power BI dashboard screenshots · Insert exports here
A key design decision was colour discipline. RAG status (green/amber/red) was applied only to performance against agreed thresholds, never as a comparison between advisers. An adviser flagged amber wasn't being compared to their peers; they were being compared to their own target. This distinction mattered in conversations with team leaders, who were cautious about how the data would be used.
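The threshold logic behind that colour discipline is simple enough to sketch. This is an illustrative Python version, not the actual dashboard logic; the 5% amber margin is an assumed example value:

```python
def rag_status(value: float, target: float, amber_margin: float = 0.05) -> str:
    """Classify a metric against the adviser's OWN target, never against peers.

    Green: at or above target.
    Amber: within amber_margin (here, 5%) below target.
    Red:   further below target than the amber margin.
    """
    if value >= target:
        return "green"
    if value >= target * (1 - amber_margin):
        return "amber"
    return "red"

# An adviser at 91% against a 95% target is amber: below target,
# but within the agreed margin of their own goal.
status = rag_status(91.0, 95.0)
```

Because the function only ever takes one adviser's value and one adviser's target, peer comparison is impossible by construction, which is the design guarantee the team leaders cared about.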
Row-level security ensured team leaders could only see their own team's data. Managers could see all teams. The exec summary had no drill-through at all; trust in the headline numbers was the goal, not access to individual records.
The 12% SLA improvement didn't come from the dashboard alone; it came from decisions the dashboard made possible. Once the time-slot breach pattern was visible, operations could adjust scheduling in those windows. Once workload imbalance was visible, team leaders could redistribute contacts more fairly.
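Surfacing a time-slot breach pattern is, at its core, a breach rate grouped by time window. A minimal sketch, using an invented contact log (column names and values are hypothetical):

```python
import pandas as pd

# Hypothetical contact log: one row per contact, with an SLA-breach flag.
contacts = pd.DataFrame({
    "hour": [9, 9, 10, 10, 10, 11, 11, 17, 17, 17],
    "breached": [0, 0, 1, 1, 0, 0, 0, 1, 1, 1],
})

# Breach rate per hourly slot - the windows with the highest rates are the
# candidates for schedule adjustment.
breach_rate = (
    contacts.groupby("hour")["breached"]
            .mean()
            .sort_values(ascending=False)
)
```

In this toy data, the 17:00 window breaches on every contact while mid-morning is mixed; the real dataset showed the same kind of concentration, which is what made targeted rescheduling possible.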
The deeper outcome was cultural. Team leaders who had previously avoided data, because it was unreliable or took too long to compile, started opening the dashboard at the start of each shift. That behaviour change was the real measure of success.