AFM 207: Introduction to Performance Analytics

Nancy Vanden Bosch

Estimated study time: 1 hr 2 min

Sources and References

Primary textbook:

  • Knaflic, Cole Nussbaumer. Storytelling with Data: Let’s Practice! Wiley, 2019.

Supplementary texts:

  • Kaplan, Robert S., & Norton, David P. The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press, 1996.
  • Niven, Paul R. Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results (2nd ed.). Wiley, 2006.
  • Horngren, Charles T., Datar, Srikant M., & Rajan, Madhav V. Cost Accounting: A Managerial Emphasis (16th ed.). Pearson, 2018.

Online resources:

  • EY ARC, “Introduction to Data Visualization”
  • Tableau Public documentation
  • Gartner Analytics Maturity Model
  • CPA Canada Performance Management resources


Chapter 1: Foundations of Performance Analytics

1.1 What Is Performance Analytics?

Performance analytics is the discipline of examining business data systematically to answer three fundamental questions: what happened, why it happened, and what should be done about it. It sits at the intersection of business knowledge, data analysis, and communication — requiring a practitioner to understand the organization, work with available data, and convey findings to decision-makers in a form they can act on.

Performance Analytics: The structured process of measuring, analyzing, and communicating business outcomes in order to diagnose performance gaps and support strategic or operational decision-making. It integrates quantitative methods, organizational strategy, and communication design to turn raw data into actionable insight.

This differs from general data analysis in an important way: performance analytics is explicitly goal-oriented and audience-aware. Every choice — what data to pull, which chart to build, how to sequence the story — is made with the stakeholder’s decision in mind. A technical analysis that cannot be communicated to its audience has no practical value.

1.2 The Analytical Mindset

Adopting an analytical mindset means approaching a business problem with curiosity, skepticism, and structure. It involves:

  1. Asking good questions before touching data. What does the stakeholder actually need to know? What decision will this analysis support?
  2. Understanding the business model first. A retail analyst who does not know how gross margin is calculated cannot meaningfully diagnose margin compression.
  3. Letting data guide, not confirm. Exploratory analysis should be open-ended; the analyst should be willing to be surprised by findings rather than cherry-picking evidence for a predetermined conclusion.
  4. Communicating with precision. Analytical conclusions must be expressed clearly, without jargon, at the level of detail appropriate to the audience.
  5. Maintaining healthy skepticism about the data itself. Data quality issues — missing values, mislabelled categories, stale figures — are the norm rather than the exception in real business environments.

1.3 The Performance Diagnostic Framework

The course structures a performance diagnostic around three sequential questions:

| Question | Analytical Phase | Output |
|---|---|---|
| What happened? | Descriptive analysis | Summary metrics, trend lines, distribution charts |
| Why did it happen? | Diagnostic (root-cause) analysis | Drill-downs, scatter plots, segment comparisons |
| Now what? | Prescriptive framing | Recommendations, scenario comparisons |

A good diagnostic starts at the highest level — overall performance against a target or prior period — and then progressively decomposes that result by segment, product, geography, or time period to identify the root cause of any gap. This hierarchical decomposition is sometimes called a waterfall analysis or variance tree.

The three-question framework aligns with Gartner's four-stage Analytics Maturity Model: descriptive (what happened), diagnostic (why it happened), predictive (what will happen), and prescriptive (what should be done). AFM 207 focuses primarily on the descriptive and diagnostic stages, with an introduction to prescriptive framing in the "now what" step.

1.4 Professional Ethics in Analytics

Because analytics shapes decisions, the analyst carries ethical obligations. Misleading visualizations — truncated axes, cherry-picked time windows, inappropriate chart types — distort perception even when the underlying numbers are accurate. Ethical practice requires:

  • Presenting data in context (baselines, benchmarks, confidence intervals where relevant)
  • Disclosing data limitations and caveats explicitly
  • Distinguishing correlation from causation in all communications
  • Maintaining confidentiality of sensitive business information
  • Avoiding selective disclosure that serves a particular agenda

The CPA Canada Code of Professional Conduct and the broader literature on data ethics both emphasize the professional’s duty to report honestly and to protect the integrity of the information they handle. For an accounting professional, the obligation is particularly acute: financial performance data underpins capital allocation decisions, compensation plans, and regulatory filings.


Chapter 2: Performance Measurement Frameworks

2.1 Why Measurement Frameworks Matter

Organizations face a fundamental challenge: strategy is abstract, but management requires concrete, measurable targets. A performance measurement framework is a structured approach to translating strategic goals into observable, quantifiable indicators. Without such a framework, organizations risk optimizing individual functions in ways that do not advance the overall strategy — or, worse, that actively trade off against it.

Performance Measurement Framework: A structured set of metrics, organized by strategic perspective or organizational level, that collectively capture whether an organization is executing its strategy effectively. A good framework balances leading indicators (which predict future outcomes) with lagging indicators (which confirm past results).

2.2 The Balanced Scorecard

The Balanced Scorecard (BSC), developed by Robert Kaplan and David Norton and introduced in a landmark 1992 Harvard Business Review article, is the most widely adopted performance measurement framework in use today. Its central insight is that financial metrics alone are insufficient — by the time a financial problem shows up in the income statement, it is often too late to course-correct. The BSC supplements financial measures with three additional perspectives that provide earlier signals.

Balanced Scorecard: A strategic performance management framework that measures organizational performance across four perspectives — financial, customer, internal process, and learning and growth — in order to provide a balanced view of both lagging financial outcomes and leading operational and organizational drivers.

2.2.1 The Four Perspectives

Financial Perspective

The financial perspective answers the question: How do we look to shareholders? It captures the ultimate goal of a for-profit organization — to generate returns for investors. Typical financial KPIs include:

  • Revenue growth rate
  • Gross margin percentage
  • Operating income (EBIT)
  • Return on assets (ROA) and return on equity (ROE)
  • Economic Value Added (EVA)
  • Earnings per share (EPS)

Financial measures are lagging indicators: they tell you what already happened. They are essential for accountability but insufficient for management, because by the time revenue falls or margins compress, the underlying operational or customer problems have often been present for quarters.

Customer Perspective

The customer perspective answers: How do customers see us? Customer outcomes drive financial outcomes — a business that loses customers or erodes satisfaction will eventually see it in revenue and margin. Typical customer KPIs include:

  • Customer satisfaction score (CSAT)
  • Net Promoter Score (NPS)
  • Customer retention rate
  • Customer acquisition cost (CAC)
  • Market share
  • Average order value (AOV)
  • Customer lifetime value (CLV)
Net Promoter Score (NPS): A customer loyalty metric calculated as the percentage of respondents who rate their likelihood to recommend the company 9–10 (Promoters) minus the percentage who rate it 0–6 (Detractors). NPS ranges from –100 to +100. A score above 50 is generally considered excellent. \[ \text{NPS} = \%\text{Promoters} - \%\text{Detractors} \]

Internal Process Perspective

The internal process perspective asks: What must we excel at? It identifies the operational processes most critical to delivering customer value and financial results. KPIs vary by business model:

| Business Type | Example Internal Process KPIs |
|---|---|
| Manufacturing | Defect rate (PPM), cycle time, machine utilization |
| Retail | Inventory turnover, on-shelf availability, order fulfillment time |
| Professional services | Project on-time delivery rate, utilization rate, rework rate |
| Healthcare | Patient wait time, readmission rate, treatment error rate |

Learning and Growth Perspective

The learning and growth perspective asks: Can we continue to improve and create value? It captures the organizational capacity — human capital, information capital, and organizational culture — required to execute the other three perspectives over time. Typical KPIs include:

  • Employee engagement score
  • Training hours per employee per year
  • Employee retention rate
  • Percentage of roles filled internally (succession depth)
  • Technology capability index
  • Innovation revenue (revenue from products launched in last three years as a % of total)

Learning and growth measures are the most leading of all four perspectives. Investments in people and technology take years to translate into process improvements, customer outcomes, and finally financial results. Organizations that cut training budgets during downturns often see financial improvement in the short term but long-term deterioration in all four perspectives.

2.2.2 Cause-and-Effect Logic

A key principle of the Balanced Scorecard is that the four perspectives are connected by cause-and-effect hypotheses. The logic runs: if we invest in our people (Learning & Growth), they will improve our processes (Internal Process); better processes will improve customer satisfaction (Customer); satisfied customers will grow revenue and margin (Financial). This chain of logic is made explicit in a Strategy Map.

Strategy Map: A visual representation of an organization's strategy that shows the cause-and-effect relationships among strategic objectives across the four Balanced Scorecard perspectives. Each objective is a node; arrows connect objectives that are hypothesized to drive one another.

2.3 KPI Design and Selection

Not every metric that can be measured should be measured. An excess of KPIs creates noise and consumes reporting resources without adding decision-making value. Good KPI design follows several principles.

2.3.1 The SMART Framework

A well-designed KPI is:

  • Specific: It measures one clearly defined phenomenon, not a vague concept.
  • Measurable: It can be quantified using available data.
  • Achievable: The target is challenging but realistic given available resources.
  • Relevant: It is linked to a strategic objective that matters to the organization.
  • Time-bound: It is reported on a defined frequency (weekly, monthly, quarterly) against a defined time horizon.

2.3.2 Leading vs. Lagging Indicators

Lagging Indicator: A metric that measures the outcome of past activities. Examples: quarterly revenue, annual profit, customer churn rate (reported at period end). Lagging indicators confirm whether strategy has worked but cannot be influenced directly.
Leading Indicator: A metric that predicts or drives future outcomes. Examples: sales pipeline value, customer satisfaction score, employee training completion rate. Leading indicators can be influenced today to shape future lagging outcomes.

A balanced KPI portfolio includes both. Relying exclusively on lagging indicators is like driving a car by looking only in the rear-view mirror.

2.3.3 Common KPI Design Pitfalls

| Pitfall | Description | Example |
|---|---|---|
| Gaming | Employees optimize the KPI rather than the underlying goal | Call center measures average handle time → agents rush calls |
| Surrogation | The KPI replaces the strategic objective in managers’ minds | NPS becomes the goal rather than customer loyalty |
| Too many KPIs | Reporting burden overwhelms decision value | 50-metric monthly scorecard where 45 are green |
| Poorly defined numerators/denominators | Different teams calculate the same metric differently | “Revenue” includes or excludes returns depending on the system |
| No defined target | A metric without a benchmark has no diagnostic value | Reporting gross margin with no comparison period or goal |

Chapter 3: Financial Performance Metrics

3.1 Profitability Ratios

Financial performance analysis begins with understanding profitability — the ability of an organization to generate income relative to its revenue, assets, or equity. The key profitability ratios form a hierarchy from revenue to net income.

| Ratio | Formula | What It Measures |
|---|---|---|
| Gross Margin % | \(\frac{\text{Revenue} - \text{COGS}}{\text{Revenue}}\) | Efficiency of production / procurement |
| Operating Margin % | \(\frac{\text{EBIT}}{\text{Revenue}}\) | Profitability after operating costs |
| Net Profit Margin % | \(\frac{\text{Net Income}}{\text{Revenue}}\) | Overall profitability after all costs and taxes |
| Return on Assets (ROA) | \(\frac{\text{Net Income}}{\text{Total Assets}}\) | Asset utilization efficiency |
| Return on Equity (ROE) | \(\frac{\text{Net Income}}{\text{Shareholders' Equity}}\) | Return generated for shareholders |
Example — Profitability Analysis: Maple Retail Corp. reports the following for FY2025:

Revenue: $24,000,000
Cost of Goods Sold: $15,600,000
Operating Expenses: $5,280,000
Interest Expense: $480,000
Tax Rate: 25%
Total Assets: $18,000,000
Shareholders’ Equity: $9,600,000

Calculations:

Gross Profit = $24,000,000 − $15,600,000 = $8,400,000
Gross Margin % = $8,400,000 / $24,000,000 = 35.0%

EBIT = $8,400,000 − $5,280,000 = $3,120,000
Operating Margin % = $3,120,000 / $24,000,000 = 13.0%

EBT = $3,120,000 − $480,000 = $2,640,000
Net Income = $2,640,000 × (1 − 0.25) = $1,980,000
Net Profit Margin % = $1,980,000 / $24,000,000 = 8.25%

ROA = $1,980,000 / $18,000,000 = 11.0%
ROE = $1,980,000 / $9,600,000 = 20.6%
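The full calculation chain can be sketched in a few lines of Python, using the Maple Retail figures from the example above:

```python
# Maple Retail Corp. FY2025 inputs (from the worked example).
revenue = 24_000_000
cogs = 15_600_000
opex = 5_280_000
interest = 480_000
tax_rate = 0.25
total_assets = 18_000_000
equity = 9_600_000

# Income statement hierarchy: revenue -> gross profit -> EBIT -> EBT -> net income.
gross_profit = revenue - cogs          # 8,400,000
ebit = gross_profit - opex             # 3,120,000
ebt = ebit - interest                  # 2,640,000
net_income = ebt * (1 - tax_rate)      # 1,980,000

# Profitability ratios.
gross_margin = gross_profit / revenue  # 35.0%
operating_margin = ebit / revenue      # 13.0%
net_margin = net_income / revenue      # 8.25%
roa = net_income / total_assets        # 11.0%
roe = net_income / equity              # 20.6%

print(f"Gross margin: {gross_margin:.1%}  Operating margin: {operating_margin:.1%}")
print(f"Net margin: {net_margin:.2%}  ROA: {roa:.1%}  ROE: {roe:.1%}")
```

Laying the ratios out this way makes the hierarchy from revenue to net income explicit: each margin is the prior subtotal divided by the same revenue base.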

3.2 DuPont Decomposition

The DuPont framework decomposes ROE into component ratios, revealing which drivers — profitability, asset efficiency, or financial leverage — are responsible for a change in return on equity. This is one of the most powerful diagnostic tools in financial performance analysis.

3.2.1 The Three-Factor DuPont Model

\[ \text{ROE} = \underbrace{\frac{\text{Net Income}}{\text{Revenue}}}_{\text{Net Profit Margin}} \times \underbrace{\frac{\text{Revenue}}{\text{Total Assets}}}_{\text{Asset Turnover}} \times \underbrace{\frac{\text{Total Assets}}{\text{Shareholders' Equity}}}_{\text{Equity Multiplier (Leverage)}} \]
Asset Turnover: Measures how efficiently a company generates revenue from its asset base. A higher ratio indicates more efficient use of assets. \[ \text{Asset Turnover} = \frac{\text{Revenue}}{\text{Total Assets}} \]
Equity Multiplier: A measure of financial leverage. A higher equity multiplier means the company is financing more of its assets with debt. While leverage can amplify ROE, it also increases financial risk. \[ \text{Equity Multiplier} = \frac{\text{Total Assets}}{\text{Shareholders' Equity}} \]

3.2.2 Extended Five-Factor DuPont Model

The three-factor model can be expanded to a five-factor model that further separates operating performance from financial structure:

\[ \text{ROE} = \frac{\text{Net Income}}{\text{EBT}} \times \frac{\text{EBT}}{\text{EBIT}} \times \frac{\text{EBIT}}{\text{Revenue}} \times \frac{\text{Revenue}}{\text{Total Assets}} \times \frac{\text{Total Assets}}{\text{Equity}} \]
| Factor | Ratio | Interpretation |
|---|---|---|
| Tax burden | Net Income / EBT | Higher = lower effective tax rate |
| Interest burden | EBT / EBIT | Lower = higher interest load relative to operating profit |
| Operating margin | EBIT / Revenue | Core operational profitability |
| Asset turnover | Revenue / Assets | Asset utilization efficiency |
| Leverage | Assets / Equity | Financial risk and magnification |
Example — DuPont Decomposition with Year-over-Year Comparison:

Maple Retail Corp. financial data:

| Metric | FY2024 | FY2025 |
|---|---|---|
| Revenue | $22,000,000 | $24,000,000 |
| Net Income | $1,540,000 | $1,980,000 |
| Total Assets | $17,000,000 | $18,000,000 |
| Shareholders' Equity | $9,200,000 | $9,600,000 |

FY2024 DuPont:
Net Profit Margin = 1,540,000 / 22,000,000 = 7.00%
Asset Turnover = 22,000,000 / 17,000,000 = 1.294×
Equity Multiplier = 17,000,000 / 9,200,000 = 1.848×
ROE = 7.00% × 1.294 × 1.848 = 16.74%

FY2025 DuPont:
Net Profit Margin = 1,980,000 / 24,000,000 = 8.25%
Asset Turnover = 24,000,000 / 18,000,000 = 1.333×
Equity Multiplier = 18,000,000 / 9,600,000 = 1.875×
ROE = 8.25% × 1.333 × 1.875 = 20.63%

Interpretation: ROE improved from 16.74% to 20.63%. The decomposition shows all three drivers improved modestly. The largest driver of the improvement was the increase in net profit margin (+1.25 percentage points), which contributed the most to the ROE gain. Asset turnover and leverage both increased slightly, consistent with moderate revenue growth on a larger asset base with slightly higher debt financing.
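The year-over-year comparison above can be reproduced with a small helper function; `dupont` is an illustrative name, not a standard library routine:

```python
def dupont(net_income, revenue, assets, equity):
    """Three-factor DuPont decomposition: ROE = margin x turnover x leverage."""
    margin = net_income / revenue     # net profit margin
    turnover = revenue / assets       # asset turnover
    leverage = assets / equity        # equity multiplier
    return margin, turnover, leverage, margin * turnover * leverage

# Maple Retail Corp. figures from the comparison table.
years = {
    "FY2024": (1_540_000, 22_000_000, 17_000_000, 9_200_000),
    "FY2025": (1_980_000, 24_000_000, 18_000_000, 9_600_000),
}

for year, inputs in years.items():
    m, t, l, roe = dupont(*inputs)
    print(f"{year}: margin {m:.2%} x turnover {t:.3f} x leverage {l:.3f} -> ROE {roe:.2%}")
```

Because the three factors multiply to ROE exactly, the decomposition lets you attribute any ROE change to profitability, asset efficiency, or leverage.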

3.3 Economic Value Added (EVA)

ROE and ROA are accounting-based measures that do not account for the cost of equity capital. A business can show positive net income and still be destroying shareholder value if it earns less than investors require. Economic Value Added corrects this by subtracting the full cost of capital from net operating profit.

Economic Value Added (EVA): A measure of economic profit that equals Net Operating Profit After Tax (NOPAT) minus the dollar cost of all capital employed — both debt and equity. \[ \text{EVA} = \text{NOPAT} - (\text{WACC} \times \text{Capital Employed}) \]

Where:

\[ \text{NOPAT} = \text{EBIT} \times (1 - \text{Tax Rate}) \]

\[ \text{Capital Employed} = \text{Total Assets} - \text{Non-Interest-Bearing Current Liabilities} \]
Weighted Average Cost of Capital (WACC): The weighted average of a firm's cost of debt and cost of equity, weighted by their proportions in the capital structure. \[ \text{WACC} = \frac{E}{E+D} \times r_e + \frac{D}{E+D} \times r_d \times (1-T) \]

Where \(E\) = market value of equity, \(D\) = market value of debt, \(r_e\) = cost of equity, \(r_d\) = cost of debt (pre-tax), \(T\) = tax rate.

Example — EVA Calculation:

Assume Maple Retail Corp. has: EBIT = $3,120,000; Tax Rate = 25%; WACC = 9%; Capital Employed = $16,200,000

NOPAT = $3,120,000 × (1 − 0.25) = $2,340,000
Capital Charge = 9% × $16,200,000 = $1,458,000
EVA = $2,340,000 − $1,458,000 = $882,000

Since EVA > 0, Maple Retail is creating shareholder value above and beyond its cost of capital. A negative EVA would indicate value destruction even with positive accounting profits.
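A minimal sketch of the EVA and WACC formulas above; the helper names are illustrative, and the inputs are the assumed Maple Retail figures:

```python
def wacc(equity_value, debt_value, cost_equity, cost_debt, tax_rate):
    """Weighted average cost of capital; debt cost is tax-adjusted."""
    total = equity_value + debt_value
    return (equity_value / total) * cost_equity \
         + (debt_value / total) * cost_debt * (1 - tax_rate)

def eva(ebit, tax_rate, wacc_rate, capital_employed):
    """Economic Value Added = NOPAT minus the dollar charge for all capital."""
    nopat = ebit * (1 - tax_rate)
    return nopat - wacc_rate * capital_employed

# Maple Retail example: EBIT $3.12M, 25% tax, 9% WACC, $16.2M capital employed.
value_added = eva(ebit=3_120_000, tax_rate=0.25,
                  wacc_rate=0.09, capital_employed=16_200_000)
print(f"EVA: ${value_added:,.0f}")  # positive -> value created above the cost of capital
```

A useful sanity check on `wacc`: with no debt, it collapses to the cost of equity.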

3.4 Market Value Added (MVA)

Market Value Added (MVA): The difference between the total market value of the firm (equity market cap + market value of debt) and the total capital invested. It represents the cumulative wealth created for investors. \[ \text{MVA} = \text{Market Value of Firm} - \text{Capital Invested} \]

MVA is the present value of all future expected EVAs. A firm that consistently generates positive EVA will have a high positive MVA; a firm that destroys value will trade at a discount to invested capital (negative MVA).

EVA and MVA are particularly useful for evaluating divisional performance in multi-segment organizations because they charge each division for the capital it employs, creating accountability for asset-light versus asset-heavy business models.


Chapter 4: Non-Financial Performance Metrics

4.1 Customer Performance Metrics

Financial metrics confirm past results; customer metrics provide earlier signals about future revenue and growth. Understanding the full customer journey — from acquisition through retention — requires a portfolio of customer-focused KPIs.

4.1.1 Customer Satisfaction Score (CSAT)

Customer Satisfaction Score (CSAT): A survey-based metric that asks customers to rate their satisfaction with a specific interaction or experience, typically on a scale of 1–5 or 1–10. CSAT is usually calculated as the percentage of respondents selecting the top two scores. \[ \text{CSAT} = \frac{\text{Number of Satisfied Responses (top 2 scores)}}{\text{Total Responses}} \times 100 \]

CSAT is best used to evaluate transactional satisfaction (e.g., after a support call or a delivery). It is highly specific to the measured interaction and does not capture overall brand sentiment or loyalty.
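The top-two-box calculation can be sketched as follows; the survey scores here are hypothetical:

```python
def csat(scores, scale_max=5):
    """CSAT as the percentage of top-two-box responses (e.g., 4-5 on a 1-5 scale)."""
    satisfied = sum(1 for s in scores if s >= scale_max - 1)
    return 100 * satisfied / len(scores)

# Hypothetical post-interaction survey: five of eight responses are a 4 or 5.
print(csat([5, 4, 3, 5, 2, 4, 5, 1]))  # 62.5
```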

4.1.2 Net Promoter Score (NPS)

NPS, developed by Fred Reichheld and published in the Harvard Business Review in 2003, asks a single question: “How likely are you to recommend us to a friend or colleague?” Respondents rate from 0–10:

  • Promoters (9–10): Loyal enthusiasts who actively refer others
  • Passives (7–8): Satisfied but not enthusiastic; vulnerable to competitive offers
  • Detractors (0–6): Unhappy customers who may actively discourage others
\[ \text{NPS} = \%\text{Promoters} - \%\text{Detractors} \]
Example — NPS Calculation:

A financial services firm surveys 400 recent clients. Results:

Promoters (9–10): 180 respondents → 45%
Passives (7–8): 140 respondents → 35%
Detractors (0–6): 80 respondents → 20%

NPS = 45% − 20% = +25

Industry context matters: an NPS of +25 is mediocre in software (average ~30–40) but strong in financial services (industry average ~15–20). The diagnostic value of NPS lies in tracking it over time and segmenting by customer type, channel, or product to identify where satisfaction is improving or deteriorating.
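Computing NPS from raw 0–10 responses is a one-pass classification; this sketch reproduces the 400-response example above:

```python
def nps(ratings):
    """Net Promoter Score from raw 0-10 survey responses.

    Promoters rate 9-10, detractors 0-6; passives (7-8) count only
    in the denominator.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# The worked example: 180 promoters, 140 passives, 80 detractors.
responses = [9] * 180 + [7] * 140 + [5] * 80
print(nps(responses))  # 25.0
```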

4.1.3 Customer Lifetime Value (CLV)

Customer Lifetime Value (CLV): The present value of all future net cash flows expected from a customer relationship. CLV quantifies how much a customer is worth to the business over their entire engagement, enabling rational decisions about acquisition spending and retention investment.

For a simple subscription model:

\[ \text{CLV} = \frac{\text{Average Monthly Margin per Customer}}{\text{Monthly Churn Rate}} \]

More generally:

\[ \text{CLV} = \sum_{t=1}^{T} \frac{m_t}{(1+d)^t} \]

where \(m_t\) is the net margin from the customer in period \(t\) and \(d\) is the discount rate.
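Both CLV formulas can be sketched directly; the margin, churn, and discount-rate inputs below are illustrative, not from the text:

```python
def clv_simple(monthly_margin, monthly_churn):
    """Subscription shortcut: expected customer lifetime is 1 / churn months."""
    return monthly_margin / monthly_churn

def clv_discounted(margins, discount_rate):
    """Present value of per-period net margins m_t at discount rate d."""
    return sum(m / (1 + discount_rate) ** t
               for t, m in enumerate(margins, start=1))

# Illustrative inputs: $40/month margin at 2% monthly churn -> $2,000 CLV.
print(clv_simple(40, 0.02))
# Five annual margins of $500 discounted at 10%.
print(round(clv_discounted([500] * 5, 0.10), 2))
```

The simple formula is the limiting case of the discounted sum when margins are constant and the discount rate is folded into churn; for lumpy or growing margins, the explicit sum is safer.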

4.1.4 Customer Acquisition Cost (CAC) and the CLV/CAC Ratio

Customer Acquisition Cost (CAC): The total sales and marketing cost incurred to acquire one new customer over a given period. \[ \text{CAC} = \frac{\text{Total Sales & Marketing Spend}}{\text{New Customers Acquired}} \]

The CLV-to-CAC ratio measures the return on customer acquisition investment. A ratio of at least 3:1 is generally considered healthy — meaning the lifetime value of a customer is at least three times what it cost to acquire them. A ratio below 1 means the company loses money on every customer it acquires.
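The CAC formula and the 3:1 screening rule can be sketched as follows; the spend, customer-count, and CLV figures are hypothetical:

```python
def cac(sales_marketing_spend, new_customers):
    """Customer acquisition cost: total S&M spend per customer acquired."""
    return sales_marketing_spend / new_customers

# Hypothetical period: $600,000 of S&M spend acquires 1,200 customers.
acquisition_cost = cac(600_000, 1_200)   # $500 per customer
customer_lifetime_value = 1_800          # assumed CLV for illustration

ratio = customer_lifetime_value / acquisition_cost
healthy = ratio >= 3.0                   # the common 3:1 rule of thumb
print(f"CLV/CAC = {ratio:.1f}, healthy: {healthy}")
```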

4.2 Employee Engagement

Employee engagement is a measure of the degree to which employees are committed to, motivated by, and satisfied with their work and workplace. Research consistently shows that engaged employees are more productive, provide better customer service, and are less likely to leave — making engagement a key leading indicator of both operational performance and financial outcomes.

Employee Engagement Score: Typically measured through periodic surveys using validated instruments (e.g., Gallup Q12, Aon Hewitt model). Results are usually reported as a percentage of "engaged" employees, a composite score, or both. Benchmarking against industry norms adds interpretive context.

Common employee engagement KPIs include:

  • Engagement survey score (% highly engaged)
  • Voluntary turnover rate (annualized)
  • Absenteeism rate
  • Internal promotion rate
  • Training hours per employee

4.3 Operational Efficiency Metrics

Operational efficiency metrics measure how well an organization uses its inputs to produce outputs. They are central to the internal process perspective of the Balanced Scorecard.

4.3.1 Cycle Time and Throughput

Cycle Time: The total elapsed time from the start to the completion of a process. Shorter cycle times generally indicate more efficient processes and correlate with better customer responsiveness.
Throughput: The rate at which a process produces outputs over a given time period. In manufacturing, it might be units per hour; in a professional services firm, it might be cases completed per week.

4.3.2 Utilization Rate

\[ \text{Utilization Rate} = \frac{\text{Actual Output (or Time Used)}}{\text{Maximum Possible Output (or Available Time)}} \times 100 \]

In professional services, utilization rate is the proportion of available employee hours that are billed to clients. A utilization rate that is consistently too high (above 85–90%) signals risk of burnout; too low (below 60%) signals excess capacity and margin pressure.

4.3.3 Inventory Metrics

For businesses that carry inventory, two key metrics are:

\[ \text{Inventory Turnover} = \frac{\text{Cost of Goods Sold}}{\text{Average Inventory}} \]

\[ \text{Days Sales in Inventory (DSI)} = \frac{365}{\text{Inventory Turnover}} \]

Higher turnover (lower DSI) generally indicates more efficient inventory management, though what is optimal varies significantly by industry.
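A quick sketch of the two inventory formulas; the COGS figure reuses the Maple Retail example, while the average inventory balance is assumed for illustration:

```python
def inventory_turnover(cogs, avg_inventory):
    """How many times inventory is sold through per year."""
    return cogs / avg_inventory

def days_sales_in_inventory(turnover):
    """Average days a unit sits in inventory."""
    return 365 / turnover

# Maple Retail COGS ($15.6M) with an assumed $2.6M average inventory.
turns = inventory_turnover(cogs=15_600_000, avg_inventory=2_600_000)  # 6.0 turns
print(f"Turnover: {turns:.1f}x, DSI: {days_sales_in_inventory(turns):.1f} days")
```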


Chapter 5: Data Analytics for Performance Management

5.1 The Analytics Maturity Spectrum

Organizations and analytical projects can be positioned along a spectrum from simple description to sophisticated optimization. Gartner’s framework identifies four levels:

| Level | Type | Question Answered | Complexity | Value |
|---|---|---|---|---|
| 1 | Descriptive | What happened? | Low | Moderate |
| 2 | Diagnostic | Why did it happen? | Moderate | High |
| 3 | Predictive | What will happen? | High | Higher |
| 4 | Prescriptive | What should we do? | Very High | Highest |

Most organizations operate primarily at Levels 1–2. Moving to Levels 3–4 requires more sophisticated data infrastructure, statistical or machine learning capabilities, and organizational readiness to act on model outputs.

5.2 Descriptive Analytics

Descriptive analytics summarizes historical data to provide a clear picture of past performance. It is the foundation upon which all other analytics types are built.

Common descriptive analytics outputs:

  • Summary statistics: Means, medians, ranges, percentiles for key metrics
  • Trend analysis: Revenue or cost over time, typically visualized with line charts
  • Distribution analysis: How a metric (e.g., customer spend) is distributed across the population
  • Segmentation: Breaking aggregate totals into meaningful sub-groups (by region, product, customer type)

Descriptive analytics answers "what happened" but not "why." A dashboard showing that Q3 sales dropped 12% is descriptive. The investigation of which products, regions, or customer segments drove that decline — and the identification of the root cause — requires diagnostic analytics.

5.3 Diagnostic Analytics

Diagnostic analytics goes beyond description to explain why a performance gap occurred. It relies on techniques that decompose an aggregate result into its drivers.

5.3.1 Drill-Down Analysis

Drill-down analysis decomposes a top-level metric progressively into sub-components. The analyst starts at the highest level of aggregation and systematically breaks the result down until the source of the gap is isolated.

Example — Drill-Down Diagnostic:

Step 1 (What happened?): National revenue is down 10% vs. prior year.
Step 2 (Where?): By region: Ontario −15%, Quebec −2%, West +1%. The Ontario gap drives the total.
Step 3 (What product?): Within Ontario: Electronics −28%, Apparel +3%, Home & Garden −4%. Electronics drives the Ontario gap.
Step 4 (Why Electronics?): Unit volume down 22%; average selling price down 8%. Volume is the primary driver.
Step 5 (Root cause): A major competitor launched a competing product line in Ontario in Q2, capturing significant market share in Electronics.

Root cause identified: Competitor entry into Ontario Electronics category.
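One drill-down step can be sketched as ranking segments by their contribution to the total gap; the dollar figures below are assumed, chosen to be consistent with the regional percentages in the example (Ontario −15%, Quebec −2%, West +1%, national −10%):

```python
# Assumed regional revenue, prior year vs. current year.
prior = {"Ontario": 13_000_000, "Quebec": 4_000_000, "West": 3_000_000}
current = {"Ontario": 11_050_000, "Quebec": 3_920_000, "West": 3_030_000}

total_gap = sum(current.values()) - sum(prior.values())          # -2,000,000
contributions = {r: current[r] - prior[r] for r in prior}

# Largest negative contribution first: that is where to drill down next.
for region, delta in sorted(contributions.items(), key=lambda kv: kv[1]):
    share = delta / total_gap
    print(f"{region}: {delta:+,} ({share:.0%} of total gap)")
```

Ontario accounts for nearly the entire national shortfall, so the next drill-down level (product category within Ontario) is where the analysis should go.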

5.3.2 Scatter Plots and Correlation Analysis

Scatter plots visualize the relationship between two continuous variables. They are the primary diagnostic tool for identifying whether a potential driver variable is correlated with a performance outcome.

Correlation is not causation. A scatter plot showing a positive relationship between marketing spend and sales does not prove that spend caused the sales increase — both could be driven by a third variable (e.g., economic growth). Proper causal analysis requires experimental design or advanced econometric methods. In a performance analytics context, correlation is evidence that warrants investigation, not a final answer.

5.3.3 Cohort Analysis

Cohort analysis groups customers or observations by a shared characteristic (typically the time period in which they were acquired) and tracks their behaviour over time. It is particularly powerful for diagnosing churn and retention dynamics.

Example — Cohort Retention Table:

A subscription software company tracks three customer cohorts by their month of acquisition:

| Cohort | Month 0 | Month 1 | Month 2 | Month 3 |
|---|---|---|---|---|
| Jan 2025 | 100% | 82% | 71% | 65% |
| Feb 2025 | 100% | 79% | 68% | 61% |
| Mar 2025 | 100% | 74% | 62% | 55% |

The declining Month 1 retention across cohorts (82% → 79% → 74%) suggests a worsening onboarding experience for newer customers. This is an early warning signal that would be invisible in aggregate churn figures.
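Building the retention table from raw active-customer counts is straightforward; the counts below are hypothetical (100 customers per cohort) but reproduce the percentages in the example:

```python
# Active customers per cohort in months 0-3 (hypothetical counts).
cohorts = {
    "Jan 2025": [100, 82, 71, 65],
    "Feb 2025": [100, 79, 68, 61],
    "Mar 2025": [100, 74, 62, 55],
}

# Retention = active customers in month t / cohort size at month 0.
retention = {
    name: [count / counts[0] for count in counts]
    for name, counts in cohorts.items()
}

for name, rates in retention.items():
    print(name, " ".join(f"{r:.0%}" for r in rates))

# The diagnostic signal: Month-1 retention is sliding across cohorts.
month1 = [rates[1] for rates in retention.values()]  # [0.82, 0.79, 0.74]
```

Reading down a single column (same tenure, successive cohorts) isolates changes in the customer experience from normal tenure-related decay, which is what makes the onboarding problem visible here.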

5.4 Predictive Analytics

Predictive analytics uses historical data to forecast future outcomes. Common techniques include:

  • Time-series forecasting: Using patterns in past data (trends, seasonality, cycles) to project future values. Methods include moving averages, exponential smoothing, and ARIMA models.
  • Regression analysis: Using one or more predictor variables to estimate the expected value of a target variable (e.g., predicting next quarter’s sales based on leading indicators).
  • Classification models: Predicting which category an observation falls into — for example, classifying customers as high-churn-risk or low-churn-risk based on behavioural signals.
Predictive Model: A statistical or machine learning model trained on historical data to produce probability estimates or point forecasts about future events. The model's usefulness depends on the quality and relevance of its training data and the stability of the underlying patterns over time.

5.5 Prescriptive Analytics

Prescriptive analytics recommends specific actions to achieve desired outcomes, often using optimization algorithms or simulation. It answers “what should we do?” rather than merely “what will happen?”

Examples:

  • Price optimization: Algorithms that recommend the profit-maximizing price for each product given demand elasticity estimates
  • Capacity planning: Simulation models that recommend staffing levels given forecasted demand and service-level targets
  • Portfolio optimization: Financial models that recommend the capital allocation across business units that maximizes expected EVA subject to risk constraints

Prescriptive analytics is the most valuable and most complex tier. It requires a reliable predictive model as its foundation and organizational processes capable of acting on its recommendations.


Chapter 6: Benchmarking

6.1 What Is Benchmarking?

Benchmarking: The process of comparing an organization's performance metrics against a reference point — internal history, competitors, industry averages, or best-in-class organizations — in order to identify performance gaps and set improvement targets.

Benchmarking is not about copying what others do. It is about understanding the performance gap, investigating the practices that account for it, and adapting those practices to the organization’s own context.

6.2 Types of Benchmarking

6.2.1 Internal Benchmarking

Internal benchmarking compares performance across units, divisions, geographies, or time periods within the same organization. It is the simplest form — data is readily available, definitions are consistent, and cultural context is shared.

Example: A bank benchmarks the loan processing cycle time across its 12 regional branches. Branch G processes loans in an average of 4.2 days; the worst-performing branch takes 9.1 days. The bank investigates Branch G's workflow to identify practices that could be standardized across all branches.

Advantages: Data accessibility, consistent definitions, ease of sharing practices.
Limitations: Best internal practice may still lag best-in-class external performance; risk of benchmarking to a low standard.
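The Branch G example reduces to a small computation: find the best internal performer and express every other unit's gap against it. A sketch follows; the 4.2-day and 9.1-day figures come from the example, while the branch labels other than G and the remaining values are hypothetical fillers.

```python
# Internal-benchmarking sketch: express each unit's gap against the best
# internal performer. Only the 4.2- and 9.1-day cycle times come from the
# bank example; other branches and figures are hypothetical.

cycle_times = {"Branch A": 6.8, "Branch C": 9.1, "Branch G": 4.2, "Branch K": 5.5}

benchmark = min(cycle_times.values())        # best internal performance: 4.2 days
gaps = {branch: round(days - benchmark, 1)   # days above the internal benchmark
        for branch, days in cycle_times.items()}

for branch, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
    print(f"{branch}: {gap} days above best practice")
```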

6.2.2 Competitive Benchmarking

Competitive benchmarking compares performance against direct competitors. It answers the question: are we winning or losing relative to the rivals our customers can choose?

Data sources: Public financial statements, industry association reports, analyst research, market intelligence services, customer surveys (share-of-wallet, brand preference)

Limitations: Competitors do not disclose operational data; comparisons may be distorted by different accounting policies, geographic mix, or business model differences.

6.2.3 Functional Benchmarking

Functional benchmarking compares a specific business function against organizations in different industries that perform the same function. The premise is that the best procurement department in the world may not be in your industry.

Example: A hospital benchmarks its supply chain management practices against Toyota's automotive supply chain, since both involve managing high-mix, time-sensitive inventory in complex operations. The hospital adopts a modified kanban system after the study.

6.2.4 Generic (Best-in-Class) Benchmarking

Generic benchmarking identifies the world-class performers of a specific process regardless of industry and benchmarks against them. It is the most ambitious and most transformative form of benchmarking, but also the most difficult to implement because the context differences between the benchmark organization and the subject are large.

6.3 The Benchmarking Process

A rigorous benchmarking study follows a structured process:

  1. Identify what to benchmark: Which process or KPI is the subject? Is it a priority for organizational performance?
  2. Identify benchmark partners: Internal units, competitors, or functional leaders?
  3. Collect data: From partners (with their cooperation) or from public sources
  4. Analyze performance gaps: Quantify the gap and understand its components
  5. Identify enabling practices: What does the benchmark partner do differently that accounts for the performance gap?
  6. Adapt and implement: Translate the identified practices into the organization’s own context
  7. Monitor progress: Track whether the gap is closing; repeat the cycle as needed

Chapter 7: Variance Analysis

7.1 Standard Costing and the Purpose of Variance Analysis

Standard Cost: A predetermined estimate of the cost to produce one unit of output, established using engineering studies, historical data, and management judgment. Standards exist for both input quantities (kilograms of material, labour hours) and input prices (cost per unit of input).
Variance Analysis: The process of comparing actual costs and revenues against standard or budgeted amounts, and decomposing the total difference into component variances that identify the source of the gap.

Variance analysis serves several management purposes:

  • Performance evaluation: Did a production manager control costs effectively?
  • Operational diagnosis: Is a price variance driven by supplier pricing or procurement inefficiency?
  • Standard revision: Are the standards themselves still valid, or do they need updating?
  • Learning: What do the variances reveal about process efficiency?

7.2 Direct Materials Variances

The total materials variance decomposes into a price variance and a quantity (efficiency) variance.

Materials Price Variance (MPV): Measures whether the actual price paid per unit of material differed from the standard price. \[ \text{MPV} = (\text{Actual Price} - \text{Standard Price}) \times \text{Actual Quantity Purchased} \]

Favorable (F) if actual price < standard price; Unfavorable (U) if actual price > standard price.

Materials Quantity Variance (MQV): Measures whether more or fewer units of material were used than the standard quantity allowed for actual output. \[ \text{MQV} = (\text{Actual Quantity Used} - \text{Standard Quantity Allowed}) \times \text{Standard Price} \]

Favorable if actual quantity used < standard allowed; Unfavorable if actual > standard.

7.3 Direct Labour Variances

Labour Rate Variance (LRV): Measures whether the actual wage rate paid differed from the standard rate. \[ \text{LRV} = (\text{Actual Rate} - \text{Standard Rate}) \times \text{Actual Hours Worked} \]
Labour Efficiency Variance (LEV): Measures whether more or fewer labour hours were used than the standard hours allowed for actual output. \[ \text{LEV} = (\text{Actual Hours} - \text{Standard Hours Allowed}) \times \text{Standard Rate} \]

7.4 Flexible Budget Variance Analysis

Flexible budget variance analysis separates the volume effect from the price and efficiency effects of actual performance.

Static Budget: A budget prepared at the beginning of the period for a single planned level of output. Actual results compared directly to a static budget confound volume differences with efficiency differences.
Flexible Budget: A budget that is adjusted to the actual volume of activity achieved. By comparing actual costs to the flexible budget (i.e., what costs should have been at actual volume), the analyst isolates pure price and efficiency variances from volume effects.

7.4.1 The Three-Way Variance Framework

| Variance | Calculation | Insight |
|---|---|---|
| Sales Volume Variance | (Actual Units − Budgeted Units) × Budgeted Unit Contribution Margin | Effect of selling more or fewer units than planned |
| Flexible Budget Variance | Actual Result − Flexible Budget Result at Actual Volume | Combined effect of price, rate, and efficiency differences |
| Total Variance | Actual Result − Static Budget Result | Total difference from plan |

Example — Full Variance Analysis:

Northside Manufacturing produces a single product with the following standards per unit:

  • Direct Materials: 3 kg × $4.00/kg = $12.00
  • Direct Labour: 2 hours × $18.00/hr = $36.00
  • Variable Overhead: 2 hours × $6.00/hr = $12.00
  • Standard Variable Cost per Unit: $60.00
  • Standard Selling Price: $95.00
  • Standard Contribution Margin: $35.00

Budgeted output: 5,000 units

Actual results for the period:

  • Units produced and sold: 5,400
  • Revenue: $499,500 (actual price $92.50/unit)
  • Direct Materials purchased and used: 16,740 kg at $4.20/kg = $70,308
  • Direct Labour: 11,340 hours at $17.50/hr = $198,450
  • Variable Overhead: $65,772

Step 1 — Sales Variances:

Sales Price Variance = (Actual Price − Standard Price) × Actual Units = ($92.50 − $95.00) × 5,400 = −$13,500 (U)

Sales Volume Variance = (Actual Units − Budgeted Units) × Standard CM = (5,400 − 5,000) × $35.00 = +$14,000 (F)

Step 2 — Materials Variances:

Standard Quantity Allowed = 5,400 units × 3 kg = 16,200 kg

MPV = ($4.20 − $4.00) × 16,740 = +$3,348 (U)
MQV = (16,740 − 16,200) × $4.00 = +$2,160 (U)
Total Materials Variance = $5,508 (U)

Step 3 — Labour Variances:

Standard Hours Allowed = 5,400 × 2 = 10,800 hours

LRV = ($17.50 − $18.00) × 11,340 = −$5,670 (F)
LEV = (11,340 − 10,800) × $18.00 = +$9,720 (U)
Total Labour Variance = $4,050 (U)

Step 4 — Summary:

| Variance | Amount | F/U |
|---|---|---|
| Sales Price Variance | $13,500 | U |
| Sales Volume Variance | $14,000 | F |
| Materials Price Variance | $3,348 | U |
| Materials Quantity Variance | $2,160 | U |
| Labour Rate Variance | $5,670 | F |
| Labour Efficiency Variance | $9,720 | U |

Management Interpretation: The business sold 400 more units than budgeted (F volume variance), but at a lower price, nearly offsetting the volume benefit. Materials were more expensive per kg (U price) and were used inefficiently (U quantity). Labour was paid at a lower rate (F rate — perhaps more junior workers were used), but those workers were less efficient, requiring 540 more hours than the standard allowed. This pattern — lower rate, higher hours — is a common signal of a skill-mix substitution that did not deliver the expected efficiency.
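The materials and labour variances above can be verified with a few lines of code. This sketch applies the formulas from sections 7.2–7.3 to the Northside figures; positive values are unfavorable for cost variances, matching the sign convention used above.

```python
# Verify the Northside materials and labour variances using the formulas
# from sections 7.2-7.3. Positive = unfavorable for cost variances.

def price_variance(actual_price, std_price, actual_qty):
    """(AP - SP) x AQ: materials price variance or labour rate variance."""
    return (actual_price - std_price) * actual_qty

def quantity_variance(actual_qty, std_qty_allowed, std_price):
    """(AQ - SQ) x SP: materials quantity or labour efficiency variance."""
    return (actual_qty - std_qty_allowed) * std_price

units_sold = 5400
std_kg_allowed = units_sold * 3        # 16,200 kg
std_hrs_allowed = units_sold * 2       # 10,800 hours

mpv = price_variance(4.20, 4.00, 16740)                  # +$3,348 (U)
mqv = quantity_variance(16740, std_kg_allowed, 4.00)     # +$2,160 (U)
lrv = price_variance(17.50, 18.00, 11340)                # -$5,670 (F)
lev = quantity_variance(11340, std_hrs_allowed, 18.00)   # +$9,720 (U)

print(f"MPV {mpv:+,.0f}  MQV {mqv:+,.0f}  LRV {lrv:+,.0f}  LEV {lev:+,.0f}")
```

The same two functions cover both materials and labour because the variance structure is identical: a price-type variance weighted by actual quantity, and a quantity-type variance weighted by standard price.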

7.5 Sales Mix and Quantity Variances

When a company sells multiple products, the total sales volume variance can be further decomposed into a sales mix variance (did the actual mix of products sold differ from the planned mix?) and a sales quantity variance (did total volume differ from plan?).

Sales Mix Variance: Measures the effect of selling a different proportion of products than planned, valued at budgeted contribution margins. Computed for each product and summed: \[ \text{Sales Mix Variance} = (\text{Actual Units Sold} - \text{Actual Total Units} \times \text{Budgeted Mix \%}) \times \text{Budgeted CM per Unit} \]
Sales Quantity Variance: Measures the effect of selling a different total volume than planned, holding the budgeted mix constant. \[ \text{Sales Quantity Variance} = (\text{Actual Total Units} - \text{Budgeted Total Units}) \times \text{Budgeted Weighted-Average CM per Unit} \]

Example — Sales Mix and Quantity Variances:

Lakeside Products Ltd. sells two products:

| Product | Budgeted Units | Budgeted Mix | Budgeted CM/Unit |
|---|---|---|---|
| Alpha | 3,000 | 60% | $40 |
| Beta | 2,000 | 40% | $25 |
| Total | 5,000 | 100% | $34 (weighted avg) |

Actual results: Alpha sold 2,800 units; Beta sold 2,700 units. Total actual: 5,500 units.

Actual Mix: Alpha 2,800/5,500 = 50.9%; Beta 2,700/5,500 = 49.1%

Sales Quantity Variance = (5,500 − 5,000) × $34 = +$17,000 (F) (Selling 500 more units at the budgeted weighted-average CM)

Sales Mix Variance (Alpha):
Actual units in budgeted mix = 5,500 × 60% = 3,300
Mix Variance (Alpha) = (2,800 − 3,300) × $40 = −$20,000 (U)

Sales Mix Variance (Beta):
Actual units in budgeted mix = 5,500 × 40% = 2,200
Mix Variance (Beta) = (2,700 − 2,200) × $25 = +$12,500 (F)

Total Sales Mix Variance = −$20,000 + $12,500 = −$7,500 (U)

Interpretation: The company sold 500 more total units than planned (favorable quantity), but sold a higher proportion of the lower-margin Beta product and fewer of the high-margin Alpha product. The mix shift cost $7,500 of contribution margin, partially offsetting the volume benefit.
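The Lakeside decomposition can be checked programmatically. A sketch that computes the weighted-average CM, the quantity variance, and the mix variance from the budget and actual figures above:

```python
# Verify the Lakeside sales mix and quantity variances.
# budget maps product -> (budgeted units, budgeted CM per unit).

budget = {"Alpha": (3000, 40.0), "Beta": (2000, 25.0)}
actual_units = {"Alpha": 2800, "Beta": 2700}

budget_total = sum(u for u, _ in budget.values())                   # 5,000
actual_total = sum(actual_units.values())                           # 5,500
wavg_cm = sum(u * cm for u, cm in budget.values()) / budget_total   # $34.00

# (Actual total - budgeted total) x budgeted weighted-average CM
quantity_var = (actual_total - budget_total) * wavg_cm              # +17,000 (F)

# Per product: (actual units - actual total at budgeted mix) x budgeted CM
mix_var = sum(
    (actual_units[p] - actual_total * u / budget_total) * cm
    for p, (u, cm) in budget.items()
)                                                                   # -7,500 (U)

print(f"Quantity variance: {quantity_var:+,.0f}  Mix variance: {mix_var:+,.0f}")
```

A useful self-check: the two components should sum to the total volume variance computed product-by-product against budget (+$9,500 here).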


Chapter 8: Target Costing and Kaizen Costing

8.1 Target Costing

Traditional cost management asks: what does it cost to make this product, and can we sell it for enough to make a profit? Target costing reverses this logic: given the market price needed to be competitive, and the required profit margin, what is the maximum allowable cost?

Target Costing: A market-driven cost management approach in which the target selling price is determined first (based on competitive market analysis), the required profit margin is then subtracted, and the resulting figure is the target cost — the maximum cost to manufacture and sell the product. \[ \text{Target Cost} = \text{Target Selling Price} - \text{Required Profit Margin} \]

If the current estimated cost exceeds the target cost, the design team must work to close the cost gap through product redesign, value engineering, or supplier negotiations — before the product is launched, not after.

8.1.1 The Target Costing Process

  1. Conduct market research: Identify the price at which customers will purchase the product given competitive alternatives (the competitive market price).
  2. Determine required margin: Management establishes the minimum acceptable profit margin for the product.
  3. Compute the target cost: Target Cost = Market Price − Required Margin.
  4. Estimate the current cost: Using the preliminary design, estimate the full cost to produce.
  5. Identify the cost gap: Current estimated cost − target cost = cost gap to be eliminated.
  6. Value engineering: Cross-functional teams systematically review every component and process to find cost reductions that do not compromise customer-valued quality or functionality.
  7. Launch or abandon: If the cost gap can be closed, the product proceeds. If not, the product is redesigned or abandoned.
Example — Target Costing:

Clearview Technologies is developing a new consumer electronics product. Market research indicates the competitive selling price is $149.99. Management requires a profit margin of at least 20% of selling price.

Target Cost = $149.99 × (1 − 0.20) = $119.99

Preliminary engineering estimates the cost to produce at $136.50. The cost gap is: $136.50 − $119.99 = $16.51 per unit

The value engineering team identifies:

  • Substitute a lower-cost speaker component (same acoustic quality): saves $4.20
  • Redesign the plastic casing to reduce material use: saves $3.80
  • Negotiate volume pricing with display supplier: saves $5.50
  • Simplify internal cable routing (reduces labour): saves $3.80

Total savings identified: $17.30 — sufficient to close the gap. The product proceeds to production with the revised design.
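The Clearview arithmetic can be expressed as a short script that computes the target cost, the cost gap, and whether the identified value-engineering savings close it:

```python
# Clearview target-costing arithmetic: target cost, cost gap, and whether
# the identified value-engineering savings close the gap.

market_price = 149.99
required_margin = 0.20                  # 20% of selling price
estimated_cost = 136.50
savings = [4.20, 3.80, 5.50, 3.80]      # the four value-engineering items

target_cost = round(market_price * (1 - required_margin), 2)   # $119.99
cost_gap = round(estimated_cost - target_cost, 2)              # $16.51
gap_closed = sum(savings) >= cost_gap                          # $17.30 >= $16.51

print(target_cost, cost_gap, gap_closed)  # 119.99 16.51 True
```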

8.2 Kaizen Costing

While target costing focuses on the design phase before production begins, kaizen costing focuses on continuous cost reduction during the production phase through incremental improvements.

Kaizen Costing: A cost management approach, rooted in the Japanese concept of kaizen (continuous improvement), that sets cost-reduction targets for existing production operations and challenges employees at all levels to find and implement incremental improvements to achieve those targets.

Kaizen costing differs from standard costing in a fundamental way: standard costing compares actual costs to a fixed standard (set once, typically annually) and generates variances. Kaizen costing sets a target below the current standard and continuously reduces the standard as improvements are realized.

| Feature | Standard Costing | Kaizen Costing |
|---|---|---|
| When applied | Both design and production | Production phase only |
| Standard basis | Engineering/historical | Current actual cost |
| Direction | Control to standard | Reduce below current actual |
| Employee role | Follow established procedures | Identify and implement improvements |
| Variance meaning | Deviation from standard | Failure to achieve improvement target |
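The direction of control is the key behavioural difference: kaizen costing ratchets the target down each period rather than holding a fixed annual standard. A minimal sketch, assuming a hypothetical 1.5% monthly reduction rate and a $60.00 starting cost (both illustrative, not from the text):

```python
# Kaizen-costing sketch: the cost target ratchets down each period as
# improvements are realized, instead of holding a fixed annual standard.
# The $60.00 starting cost and 1.5% monthly reduction are hypothetical.

def kaizen_targets(current_cost, monthly_reduction=0.015, months=6):
    """Rolling cost targets for the next `months` periods."""
    targets = []
    for _ in range(months):
        current_cost *= 1 - monthly_reduction   # reduce below current actual
        targets.append(round(current_cost, 2))
    return targets

print(kaizen_targets(60.00))  # steadily declining targets from $59.10 down
```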

Chapter 9: Dashboard Design Principles

9.1 The Purpose of a Dashboard

Dashboard: A visual display of the most important information needed to achieve one or more objectives, consolidated and arranged on a single screen so that information can be monitored at a glance. A dashboard is not a data dump — it is a curated communication tool designed for a specific audience and purpose.

The term “dashboard” derives from the automotive dashboard: a small set of high-priority gauges (speed, fuel level, engine temperature) that a driver needs to monitor while focused primarily on the road. A well-designed business dashboard applies the same discipline: only essential information, organized for rapid comprehension.

9.2 Dashboard Types

| Dashboard Type | Primary Purpose | Primary Audience | Update Frequency |
|---|---|---|---|
| Strategic | Monitor progress toward long-term objectives | Executives, board | Monthly, quarterly |
| Operational | Monitor day-to-day performance | Operations managers | Daily, real-time |
| Analytical | Explore data for insights | Analysts | Ad hoc |
| Tactical | Track project or initiative progress | Middle management | Weekly |

9.3 Design Principles

9.3.1 The Five-Second Rule

A well-designed dashboard communicates its most important message within five seconds of viewing. If a reader needs more than five seconds to understand what the dashboard is telling them about performance, it is overloaded, poorly organized, or poorly labeled.

9.3.2 Preattentive Attributes

Certain visual properties are processed by the human brain before conscious attention is engaged — these are called preattentive attributes. Effective dashboard design deploys these strategically to direct the viewer’s eye to the most important information.

Key preattentive attributes:

  • Color hue: Red draws attention; use it sparingly to signal problems
  • Size: Larger elements appear more important
  • Position: Elements in the upper left are typically seen first (Western reading pattern)
  • Contrast: High-contrast elements stand out from low-contrast backgrounds

A common design mistake is using too many colors or too many sizes, which eliminates the signal value of these attributes. If everything is red, nothing is urgent. If every chart is the same size, hierarchy is lost.

9.3.3 Data-Ink Ratio

Edward Tufte’s principle of data-ink ratio holds that every element on a visualization should serve a data-communication purpose. Elements that consume visual space without encoding data (gridlines, borders, backgrounds, decorative icons) are “chart junk” and should be minimized or eliminated.

\[ \text{Data-Ink Ratio} = \frac{\text{Ink Used to Encode Data}}{\text{Total Ink Used in the Chart}} \]

A ratio approaching 1.0 is ideal. In practice, this means removing default gridlines, using thin or no borders on chart panels, avoiding 3D chart effects, and eliminating shadow or gradient fills.

9.3.4 Sparklines and Small Multiples

Sparklines are small, word-sized trend lines embedded in tables or text. They communicate directional trend information in minimal space — ideal for dashboards where space is constrained.

Small multiples are a series of charts with identical structure displaying different sub-segments of the data. They allow direct comparison across many categories without cognitive overhead, because the viewer only needs to learn the chart structure once.
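Sparklines are easy to generate even outside BI tools. A minimal sketch that maps a numeric series onto Unicode block characters (the sample values are arbitrary):

```python
# Text sparkline sketch: map a numeric series onto Unicode block
# characters, giving a compact trend line for tables or logs.
# Sample values are arbitrary.

BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a numeric series as a word-sized trend line."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1    # guard against a flat series
    return "".join(
        BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))] for v in values
    )

print(sparkline([3, 5, 2, 8, 6, 9]))
```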

9.3.5 Color Usage

  • Use color purposefully, not decoratively
  • Limit the palette to 2–4 colors in most cases
  • Use sequential color scales (light to dark of one hue) for ordered data (e.g., sales volume from low to high)
  • Use diverging color scales (e.g., red to white to blue) for data that has a meaningful midpoint (e.g., variance above/below target)
  • Use categorical color scales (distinct hues) sparingly for labeling discrete groups
  • Never use red and green as the only distinguishing colors (colorblind accessibility)

9.4 Choosing the Right Chart Type

| If you want to show… | Use this chart type |
|---|---|
| Change over time (continuous) | Line chart |
| Comparison across discrete categories | Bar chart (horizontal or vertical) |
| Part-to-whole composition | Stacked bar, pie (only for 2–3 categories) |
| Relationship between two variables | Scatter plot |
| Geographic distribution | Choropleth map |
| Distribution of a single variable | Histogram, box plot |
| Single key metric vs. target | KPI card with sparkline, bullet chart |
| Multiple metrics in one view | Dashboard with small multiples |

9.5 From Charts to Stories: The Narrative Arc

A dashboard presents a snapshot; a story presents a sequence of insights with a narrative arc. Effective business communication typically uses the SCR structure:

  • Situation: Context and baseline — what is the environment, and what do we expect?
  • Complication: The performance gap or issue — what happened that requires attention?
  • Resolution: The root cause and recommended action — why did it happen, and what should we do?

In Tableau, the Story feature allows analysts to sequence dashboards and worksheets into a connected presentation, with captions at each step articulating the “so what” of that screen.


Chapter 10: Performance Management in Public Sector and Not-for-Profit Settings

10.1 The Distinctive Context

Performance analytics in public sector organizations and not-for-profit (NFP) entities differs from the for-profit context in several important ways:

| Dimension | For-Profit | Public Sector / NFP |
|---|---|---|
| Primary objective | Financial return to shareholders | Mission fulfillment (public value, social impact) |
| “Bottom line” | Net income, EVA | Mission achievement, efficiency of resource use |
| Key stakeholders | Shareholders, customers | Citizens, clients, funders, taxpayers, government |
| Revenue source | Customer payments | Taxes, grants, donations, government transfers |
| Accountability | Market discipline, investor oversight | Democratic accountability, regulatory compliance, donor stewardship |

The absence of profit as a primary objective does not reduce the importance of performance measurement — it increases it. Without the pricing signal of a market, public and NFP organizations must rely more heavily on explicit performance frameworks to know whether they are creating value.

10.2 Adapting the Balanced Scorecard for Public Sector

The original Balanced Scorecard places financial outcomes at the top of the hierarchy. In the public sector, this hierarchy is typically inverted or reframed:

Public Sector Balanced Scorecard: An adaptation of the BSC framework in which mission fulfillment (serving citizens or clients effectively) is the primary goal, with financial stewardship repositioned as a constraint (staying within budget, achieving value for money) rather than an end in itself.

Typical adaptations:

| BSC Perspective | For-Profit Question | Public Sector Adaptation |
|---|---|---|
| Financial | How do we look to shareholders? | How do we demonstrate value for money to funders and taxpayers? |
| Customer | How do customers see us? | How do citizens / clients experience our services? |
| Internal Process | What must we excel at? | How do we design and deliver services efficiently? |
| Learning & Growth | Can we continue to improve? | What capabilities do we need to fulfill our evolving mission? |

10.3 Efficiency vs. Effectiveness in the Public Sector

A critical distinction in public sector performance management:

Efficiency: Producing outputs at minimum cost — doing things right. Measured by unit cost (e.g., cost per hospital visit, cost per student educated, cost per social housing unit managed).
Effectiveness: Producing outcomes that achieve the intended social purpose — doing the right things. Measured by outcome metrics (e.g., student literacy rates, recidivism rates, health outcomes improvement, reduction in homelessness).

An organization can be efficient without being effective (e.g., processing welfare claims quickly but incorrectly) and effective without being efficient (e.g., achieving excellent health outcomes at extremely high cost). The best public sector performance frameworks measure both.

10.4 Logic Models

Logic Model: A visual framework used in the public and NFP sectors to map the causal chain from program inputs through activities and outputs to short-term, medium-term, and long-term outcomes. It makes the "theory of change" explicit and provides the basis for selecting appropriate KPIs at each stage.

| Stage | Definition | Example (Employment Training Program) |
|---|---|---|
| Inputs | Resources invested | Funding, staff, facilities, curriculum |
| Activities | What the program does | Training sessions, job coaching, employer partnerships |
| Outputs | Direct products of activities | Participants trained, sessions delivered |
| Short-term outcomes | Immediate changes in participants | Skills gained, résumé quality improved |
| Medium-term outcomes | Behavioral changes | Job interviews obtained, employment secured |
| Long-term outcomes | Sustained social impact | Reduced unemployment, increased income, reduced social assistance use |

10.5 Challenges in NFP Performance Measurement

Public sector and NFP performance measurement faces challenges not typically encountered in for-profit settings:

  1. Attribution problem: It is difficult to isolate the causal effect of a specific program from other social, economic, and environmental factors affecting outcomes.
  2. Time horizon mismatch: Long-term social outcomes (e.g., reducing incarceration rates) manifest years or decades after the intervention, while funding cycles are annual.
  3. Multiple principal problem: NFP organizations answer to multiple stakeholders (government funders, private donors, clients, boards) who may have different and conflicting performance expectations.
  4. Crowding out of mission: If funders demand easily measurable outputs, organizations may shift toward measurable activities that are not the most mission-aligned (“teaching to the test”).
  5. Data availability: Unlike for-profit firms with integrated financial systems, many NFP organizations lack robust data collection infrastructure.

Chapter 11: Communicating Performance — Storyboards and Presentations

11.1 The Communication Challenge in Analytics

Completing rigorous analysis is necessary but insufficient. The analyst’s insights must reach decision-makers in a form they can understand and act on — and decision-makers are busy, often non-technical, and exposed to many competing claims on their attention. The communication challenge is as demanding as the analytical challenge.

The chart is not the product. The recommendation is the product. Every visualization, table, and data point in a presentation exists solely to build the evidence base for a specific, actionable recommendation. Analytical work that cannot be communicated has no practical value.

11.2 Understanding the Audience

Before designing any communication, the analyst must explicitly consider:

  • Who is the primary audience? (Executive, operational manager, board, regulator, client)
  • What do they already know? (Background knowledge, familiarity with the data, prior exposure to the issue)
  • What decision are they making? (This defines what conclusion the analysis must deliver)
  • What is their attitude toward the subject? (Neutral and curious? Skeptical? Resistant to a particular conclusion?)
  • How much time do they have? (60-second elevator pitch vs. 30-minute board presentation)

Answers to these questions should drive every design choice: the level of detail, the chart types, the amount of text, and the logical structure of the narrative.

11.3 The SCR Framework for Analytical Narratives

The Situation-Complication-Resolution structure, adapted from management consulting practice, provides a robust framework for analytical communication:

  • Situation: Establishes shared context. What is the business, and what is the normal state of affairs? This section should be brief — it is not news to the audience.
  • Complication: Introduces the change or tension that motivates the analysis. Something has happened that disrupts the expected state of affairs. This is the “so what” that justifies the analysis.
  • Resolution: Provides the analytical finding, root cause, and recommended action. This is the substance of the analytical work.
Example SCR Structure — Retail Performance Report:

Situation: Rideau Outdoor Retail Co. operates 42 stores across Ontario and Quebec, targeting the outdoor recreation segment. Q3 2025 is historically the peak quarter, accounting for approximately 38% of annual revenue.

Complication: Q3 2025 revenue of $18.4M missed the budget of $21.0M by 12.4%, and fell 8% below Q3 2024 actual revenue of $20.0M. This represents the largest Q3 shortfall in five years and threatens the annual plan.

Resolution: Diagnostic analysis identifies that 85% of the shortfall is attributable to the camping equipment category in the Ontario market, where a new competitor opened six stores in Q2 2025 and launched an aggressive pricing promotion. Recommended actions: (1) immediate price matching on the 12 highest-volume camping SKUs, (2) differentiation through loyalty program enhancements, (3) investigation of potential exclusive supplier arrangements.

11.4 Storyboarding

A storyboard is a planned sequence of slides or screens designed before any final visualizations are built. It forces the analyst to solve the narrative problem — what is the logical sequence of insights? — before investing time in production.

Storyboard process:

  1. Write the key message of each screen in one sentence (the “caption-first” approach)
  2. Arrange screens so each one builds on the previous
  3. Verify the sequence answers all three diagnostic questions (what, why, now what)
  4. Identify which chart type and which data will support each screen
  5. Only then open the visualization tool

A storyboard need not be digital — sketching screens on sticky notes or paper is often faster and more flexible.

11.5 Summary Communication

After a full analytical presentation, a single summary slide condenses the entire diagnostic into the three core answers. This slide serves as both the conclusion of a live presentation and a standalone artifact that the audience can share with others:

| Question | Answer |
|---|---|
| What happened? | [Concise statement of observed performance vs. benchmark] |
| Why did it happen? | [Root cause, expressed as a single sentence with key evidence] |
| Now what? | [Recommended action, stated as a concrete next step] |

If any of these three cells cannot be completed clearly and concisely, the diagnostic is incomplete.


Chapter 12: Synthesis — Connecting Analytics to Strategy

12.1 The Performance Management System as a Whole

The topics covered in this course — the Balanced Scorecard, KPI design, financial ratios, DuPont analysis, EVA, non-financial metrics, benchmarking, variance analysis, target costing, dashboard design, and public sector applications — are not isolated tools. They form a system of performance management that operates at multiple organizational levels simultaneously.

Performance Management System: An integrated set of processes, tools, and governance mechanisms through which an organization translates strategy into measurable objectives, monitors progress, diagnoses gaps, and triggers corrective action. It connects strategic planning (long-term) to operational control (short-term) through a coherent chain of cause-and-effect logic.

12.2 Connecting the Course Topics

| Course Topic | Role in the Performance Management System |
|---|---|
| Balanced Scorecard | The organizing framework for strategic objectives and KPIs |
| Strategy Maps | The causal logic connecting objectives across BSC perspectives |
| Financial Metrics (ROA, ROE, EVA) | Lagging financial outcomes: the ultimate accountability measure |
| DuPont Decomposition | Diagnostic tool for understanding drivers of financial outcome change |
| Non-Financial Metrics (NPS, Engagement) | Leading indicators that predict future financial outcomes |
| Benchmarking | External reference point for performance targets and improvement ideas |
| Variance Analysis | Operational control: comparing actual to plan in granular detail |
| Target & Kaizen Costing | Cost management: setting and progressively tightening cost targets |
| Dashboard Design | The communication layer: making performance visible to decision-makers |
| Public Sector Adaptations | Context-specific adjustments for non-market organizations |

12.3 The Analytical Workflow

In practice, a performance analyst working in an organization follows a recurring cycle:

  1. Plan: Understand the strategic context, stakeholder needs, and available data. Design the analytical approach before opening any tool.
  2. Collect & Prepare: Access data sources, profile the data, clean and reshape for analysis.
  3. Explore (Descriptive): Build summary metrics, trend charts, and segmentation views to understand what happened.
  4. Diagnose: Use drill-downs, scatter plots, variance decompositions, and cohort analyses to understand why it happened.
  5. Frame (Prescriptive): Translate findings into recommendations: now what?
  6. Communicate: Build a storyboard, create polished visualizations, and deliver findings through a structured narrative.
  7. Monitor: Establish dashboards and reporting rhythms that allow stakeholders to track whether recommendations are being implemented and whether the performance gap is closing.

This cycle repeats continuously — each round generates new data, new insights, and new questions.

12.4 The Ethical Obligation of the Performance Analyst

The performance analyst occupies a position of significant influence. The metrics selected, the benchmarks chosen, the variances highlighted, and the recommendations made shape resource allocation decisions, compensation outcomes, and organizational strategy. This influence carries ethical responsibilities:

  • Objectivity: Present findings that are supported by evidence, even when they contradict management preferences
  • Completeness: Do not selectively omit unflattering data; present a balanced picture
  • Accuracy: Verify data quality and disclose limitations; do not report false precision
  • Independence: Resist pressure to reverse-engineer analysis toward a predetermined conclusion
  • Transparency: Make assumptions explicit; explain the logic of the analysis
  • Confidentiality: Handle sensitive business data with appropriate discretion

For accounting and finance professionals, these obligations are reinforced by the CPA Canada Code of Professional Conduct, which requires objectivity, integrity, and due care in all professional work. Performance analytics is a domain where these principles are tested regularly.

12.5 Looking Ahead: Analytics in a Changing Environment

The field of performance analytics is evolving rapidly. Several developments are reshaping how organizations measure and manage performance:

Integrated Reporting: The move toward reporting that combines financial and non-financial (ESG: environmental, social, governance) performance in a single integrated framework. The International Integrated Reporting Council (IIRC) framework — now part of the IFRS Sustainability Disclosure Standards ecosystem — is gaining traction among large public companies.

Real-Time Analytics: Cloud-based data warehouses and modern business intelligence tools (Tableau, Power BI, Looker) are enabling near-real-time performance monitoring, replacing monthly static reports with continuously updated dashboards.

Artificial Intelligence and Machine Learning: Predictive and prescriptive analytics capabilities that previously required specialized data science teams are increasingly embedded in mainstream business intelligence tools, making them accessible to accounting and finance professionals.

People Analytics: The systematic application of data analytics to human resources decisions — workforce planning, engagement, performance management, retention. This extends the “learning and growth” perspective of the Balanced Scorecard into the domain of data-driven HR.

For AFM students entering co-op positions, fluency in performance analytics — the ability to ask the right questions, load and clean data, apply appropriate analytical techniques, and communicate findings clearly — is one of the most immediately applicable and highly valued skills in any business function.

Key Formulas Reference

Formula | Expression
Gross Margin % | \(\frac{\text{Revenue} - \text{COGS}}{\text{Revenue}} \times 100\)
Operating Margin % | \(\frac{\text{EBIT}}{\text{Revenue}} \times 100\)
ROA | \(\frac{\text{Net Income}}{\text{Total Assets}}\)
ROE | \(\frac{\text{Net Income}}{\text{Shareholders' Equity}}\)
DuPont ROE (3-factor) | \(\text{Net Profit Margin} \times \text{Asset Turnover} \times \text{Equity Multiplier}\)
EVA | \(\text{NOPAT} - (\text{WACC} \times \text{Capital Employed})\)
NPS | \(\%\text{Promoters} - \%\text{Detractors}\)
CLV (simple) | \(\frac{\text{Avg Monthly Margin}}{\text{Monthly Churn Rate}}\)
CAC | \(\frac{\text{Total Sales \& Marketing Spend}}{\text{New Customers Acquired}}\)
Inventory Turnover | \(\frac{\text{COGS}}{\text{Average Inventory}}\)
DSI | \(\frac{365}{\text{Inventory Turnover}}\)
Target Cost | \(\text{Target Selling Price} - \text{Required Profit Margin}\)
Materials Price Variance | \((\text{AP} - \text{SP}) \times \text{AQ Purchased}\)
Materials Quantity Variance | \((\text{AQ Used} - \text{SQ Allowed}) \times \text{SP}\)
Labour Rate Variance | \((\text{AR} - \text{SR}) \times \text{AH}\)
Labour Efficiency Variance | \((\text{AH} - \text{SH Allowed}) \times \text{SR}\)

Abbreviations: AP = Actual Price, SP = Standard Price, AQ = Actual Quantity, SQ = Standard Quantity, AR = Actual Rate, SR = Standard Rate, AH = Actual Hours, SH = Standard Hours
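A few of the reference formulas translated directly into Python, as a sketch for checking hand calculations; the input numbers are made-up illustrations. Note in particular that the three DuPont factors multiply back to plain ROE:

```python
def gross_margin_pct(revenue, cogs):
    """Gross Margin % = (Revenue - COGS) / Revenue x 100."""
    return (revenue - cogs) / revenue * 100

def dupont_roe(net_income, revenue, total_assets, equity):
    """3-factor DuPont: net profit margin x asset turnover x equity multiplier."""
    npm = net_income / revenue
    turnover = revenue / total_assets
    multiplier = total_assets / equity
    return npm * turnover * multiplier

def materials_price_variance(ap, sp, aq_purchased):
    """(AP - SP) x AQ Purchased; positive = unfavourable for a cost variance."""
    return (ap - sp) * aq_purchased

def materials_quantity_variance(aq_used, sq_allowed, sp):
    """(AQ Used - SQ Allowed) x SP; positive = unfavourable."""
    return (aq_used - sq_allowed) * sp

# Worked check with illustrative figures: the DuPont factors cancel so that
# (NI/Rev) x (Rev/Assets) x (Assets/Equity) = NI/Equity, i.e. plain ROE.
roe = dupont_roe(net_income=50, revenue=500, total_assets=400, equity=200)
assert abs(roe - 50 / 200) < 1e-12  # both equal 0.25
```

The sign convention in the variance functions follows the table's usage for cost variances (actual above standard is unfavourable); a reporting layer would typically translate the sign into an F/U label.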
