AFM 207: Introduction to Performance Analytics
Nancy Vanden Bosch
Estimated study time: 1 hr 2 min
Sources and References
Primary textbook:
- Knaflic, Cole Nussbaumer. Storytelling with Data: Let’s Practice! Wiley, 2019.

Supplementary texts:
- Kaplan, Robert S., & Norton, David P. The Balanced Scorecard: Translating Strategy into Action. Harvard Business School Press, 1996.
- Niven, Paul R. Balanced Scorecard Step-by-Step: Maximizing Performance and Maintaining Results (2nd ed.). Wiley, 2006.
- Horngren, Charles T., Datar, Srikant M., & Rajan, Madhav V. Cost Accounting: A Managerial Emphasis (16th ed.). Pearson, 2018.

Online resources:
- EY ARC, “Introduction to Data Visualization”
- Tableau Public documentation
- Gartner Analytics Maturity Model
- CPA Canada Performance Management resources
Chapter 1: Foundations of Performance Analytics
1.1 What Is Performance Analytics?
Performance analytics is the discipline of examining business data systematically to answer three fundamental questions: what happened, why it happened, and what should be done about it. It sits at the intersection of business knowledge, data analysis, and communication — requiring a practitioner to understand the organization, work with available data, and convey findings to decision-makers in a form they can act on.
This differs from general data analysis in an important way: performance analytics is explicitly goal-oriented and audience-aware. Every choice — what data to pull, which chart to build, how to sequence the story — is made with the stakeholder’s decision in mind. A technical analysis that cannot be communicated to its audience has no practical value.
1.2 The Analytical Mindset
Adopting an analytical mindset means approaching a business problem with curiosity, skepticism, and structure. It involves:
- Asking good questions before touching data. What does the stakeholder actually need to know? What decision will this analysis support?
- Understanding the business model first. A retail analyst who does not know how gross margin is calculated cannot meaningfully diagnose margin compression.
- Letting data guide, not confirm. Exploratory analysis should be open-ended; the analyst should be willing to be surprised by findings rather than cherry-picking evidence for a predetermined conclusion.
- Communicating with precision. Analytical conclusions must be expressed clearly, without jargon, at the level of detail appropriate to the audience.
- Maintaining healthy skepticism about the data itself. Data quality issues — missing values, mislabelled categories, stale figures — are the norm rather than the exception in real business environments.
1.3 The Performance Diagnostic Framework
The course structures a performance diagnostic around three sequential questions:
| Question | Analytical Phase | Output |
|---|---|---|
| What happened? | Descriptive analysis | Summary metrics, trend lines, distribution charts |
| Why did it happen? | Diagnostic (root-cause) analysis | Drill-downs, scatter plots, segment comparisons |
| Now what? | Prescriptive framing | Recommendations, scenario comparisons |
A good diagnostic starts at the highest level — overall performance against a target or prior period — and then progressively decomposes that result by segment, product, geography, or time period to identify the root cause of any gap. This hierarchical decomposition is sometimes called a waterfall analysis or variance tree.
1.4 Professional Ethics in Analytics
Because analytics shapes decisions, the analyst carries ethical obligations. Misleading visualizations — truncated axes, cherry-picked time windows, inappropriate chart types — distort perception even when the underlying numbers are accurate. Ethical practice requires:
- Presenting data in context (baselines, benchmarks, confidence intervals where relevant)
- Disclosing data limitations and caveats explicitly
- Distinguishing correlation from causation in all communications
- Maintaining confidentiality of sensitive business information
- Avoiding selective disclosure that serves a particular agenda
The CPA Canada Code of Professional Conduct and the broader literature on data ethics both emphasize the professional’s duty to report honestly and to protect the integrity of the information they handle. For an accounting professional, the obligation is particularly acute: financial performance data underpins capital allocation decisions, compensation plans, and regulatory filings.
Chapter 2: Performance Measurement Frameworks
2.1 Why Measurement Frameworks Matter
Organizations face a fundamental challenge: strategy is abstract, but management requires concrete, measurable targets. A performance measurement framework is a structured approach to translating strategic goals into observable, quantifiable indicators. Without such a framework, organizations risk optimizing individual functions in ways that do not advance the overall strategy — or, worse, that actively trade off against it.
2.2 The Balanced Scorecard
The Balanced Scorecard (BSC), developed by Robert Kaplan and David Norton and introduced in a landmark 1992 Harvard Business Review article, is the most widely adopted performance measurement framework in use today. Its central insight is that financial metrics alone are insufficient — by the time a financial problem shows up in the income statement, it is often too late to course-correct. The BSC supplements financial measures with three additional perspectives that provide earlier signals.
2.2.1 The Four Perspectives
Financial Perspective
The financial perspective answers the question: How do we look to shareholders? It captures the ultimate goal of a for-profit organization — to generate returns for investors. Typical financial KPIs include:
- Revenue growth rate
- Gross margin percentage
- Operating income (EBIT)
- Return on assets (ROA) and return on equity (ROE)
- Economic Value Added (EVA)
- Earnings per share (EPS)
Customer Perspective
The customer perspective answers: How do customers see us? Customer outcomes drive financial outcomes — a business that loses customers or erodes satisfaction will eventually see it in revenue and margin. Typical customer KPIs include:
- Customer satisfaction score (CSAT)
- Net Promoter Score (NPS)
- Customer retention rate
- Customer acquisition cost (CAC)
- Market share
- Average order value (AOV)
- Customer lifetime value (CLV)
Internal Process Perspective
The internal process perspective asks: What must we excel at? It identifies the operational processes most critical to delivering customer value and financial results. KPIs vary by business model:
| Business Type | Example Internal Process KPIs |
|---|---|
| Manufacturing | Defect rate (PPM), cycle time, machine utilization |
| Retail | Inventory turnover, on-shelf availability, order fulfillment time |
| Professional services | Project on-time delivery rate, utilization rate, rework rate |
| Healthcare | Patient wait time, readmission rate, treatment error rate |
Learning and Growth Perspective
The learning and growth perspective asks: Can we continue to improve and create value? It captures the organizational capacity — human capital, information capital, and organizational culture — required to execute the other three perspectives over time. Typical KPIs include:
- Employee engagement score
- Training hours per employee per year
- Employee retention rate
- Percentage of roles filled internally (succession depth)
- Technology capability index
- Innovation revenue (revenue from products launched in last three years as a % of total)
2.2.2 Cause-and-Effect Logic
A key principle of the Balanced Scorecard is that the four perspectives are connected by cause-and-effect hypotheses. The logic runs: if we invest in our people (Learning & Growth), they will improve our processes (Internal Process); better processes will improve customer satisfaction (Customer); satisfied customers will grow revenue and margin (Financial). This chain of logic is made explicit in a Strategy Map.
2.3 KPI Design and Selection
Not every metric that can be measured should be measured. An excess of KPIs creates noise and consumes reporting resources without adding decision-making value. Good KPI design follows several principles.
2.3.1 The SMART Framework
A well-designed KPI is:
- Specific: It measures one clearly defined phenomenon, not a vague concept.
- Measurable: It can be quantified using available data.
- Achievable: The target is challenging but realistic given available resources.
- Relevant: It is linked to a strategic objective that matters to the organization.
- Time-bound: It is reported on a defined frequency (weekly, monthly, quarterly) against a defined time horizon.
2.3.2 Leading vs. Lagging Indicators
Lagging indicators measure outcomes after the fact (e.g., quarterly revenue, net income); leading indicators measure drivers that signal future outcomes (e.g., sales pipeline value, customer satisfaction, employee engagement). A balanced KPI portfolio includes both. Relying exclusively on lagging indicators is like driving a car by looking only in the rear-view mirror.
2.3.3 Common KPI Design Pitfalls
| Pitfall | Description | Example |
|---|---|---|
| Gaming | Employees optimize the KPI rather than the underlying goal | Call center measures average handle time → agents rush calls |
| Surrogation | The KPI replaces the strategic objective in managers’ minds | NPS becomes the goal rather than customer loyalty |
| Too many KPIs | Reporting burden overwhelms decision value | 50-metric monthly scorecard where 45 are green |
| Poorly defined numerators/denominators | Different teams calculate the same metric differently | “Revenue” includes or excludes returns depending on the system |
| No defined target | A metric without a benchmark has no diagnostic value | Reporting gross margin with no comparison period or goal |
Chapter 3: Financial Performance Metrics
3.1 Profitability Ratios
Financial performance analysis begins with understanding profitability — the ability of an organization to generate income relative to its revenue, assets, or equity. The key profitability ratios form a hierarchy from revenue to net income.
| Ratio | Formula | What It Measures |
|---|---|---|
| Gross Margin % | \(\frac{\text{Revenue} - \text{COGS}}{\text{Revenue}}\) | Efficiency of production / procurement |
| Operating Margin % | \(\frac{\text{EBIT}}{\text{Revenue}}\) | Profitability after operating costs |
| Net Profit Margin % | \(\frac{\text{Net Income}}{\text{Revenue}}\) | Overall profitability after all costs and taxes |
| Return on Assets (ROA) | \(\frac{\text{Net Income}}{\text{Total Assets}}\) | Asset utilization efficiency |
| Return on Equity (ROE) | \(\frac{\text{Net Income}}{\text{Shareholders' Equity}}\) | Return generated for shareholders |
Worked example: Maple Retail Corp., FY2025:
- Revenue: $24,000,000
- Cost of Goods Sold: $15,600,000
- Operating Expenses: $5,280,000
- Interest Expense: $480,000
- Tax Rate: 25%
- Total Assets: $18,000,000
- Shareholders’ Equity: $9,600,000
Calculations:
Gross Profit = $24,000,000 − $15,600,000 = $8,400,000
Gross Margin % = $8,400,000 / $24,000,000 = 35.0%

EBIT = $8,400,000 − $5,280,000 = $3,120,000
Operating Margin % = $3,120,000 / $24,000,000 = 13.0%

EBT = $3,120,000 − $480,000 = $2,640,000
Net Income = $2,640,000 × (1 − 0.25) = $1,980,000
Net Profit Margin % = $1,980,000 / $24,000,000 = 8.25%

ROA = $1,980,000 / $18,000,000 = 11.0%
ROE = $1,980,000 / $9,600,000 = 20.6%
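The ratio hierarchy above can be traced in a short Python sketch (figures from the Maple Retail example; the variable names are my own):

```python
# Profitability ratio hierarchy, Maple Retail Corp. figures.
revenue, cogs, opex = 24_000_000, 15_600_000, 5_280_000
interest, tax_rate = 480_000, 0.25
total_assets, equity = 18_000_000, 9_600_000

gross_profit = revenue - cogs            # $8,400,000
ebit = gross_profit - opex               # $3,120,000
ebt = ebit - interest                    # $2,640,000
net_income = ebt * (1 - tax_rate)        # $1,980,000

ratios = {
    "Gross Margin": gross_profit / revenue,     # 35.0%
    "Operating Margin": ebit / revenue,         # 13.0%
    "Net Profit Margin": net_income / revenue,  # 8.25%
    "ROA": net_income / total_assets,           # 11.0%
    "ROE": net_income / equity,                 # ~20.6%
}
for name, value in ratios.items():
    print(f"{name}: {value:.2%}")
```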
3.2 DuPont Decomposition
The DuPont framework decomposes ROE into component ratios, revealing which drivers — profitability, asset efficiency, or financial leverage — are responsible for a change in return on equity. This is one of the most powerful diagnostic tools in financial performance analysis.
3.2.1 The Three-Factor DuPont Model
\[ \text{ROE} = \underbrace{\frac{\text{Net Income}}{\text{Revenue}}}_{\text{Net Profit Margin}} \times \underbrace{\frac{\text{Revenue}}{\text{Total Assets}}}_{\text{Asset Turnover}} \times \underbrace{\frac{\text{Total Assets}}{\text{Shareholders' Equity}}}_{\text{Equity Multiplier (Leverage)}} \]

3.2.2 Extended Five-Factor DuPont Model
The three-factor model can be expanded to a five-factor model that further separates operating performance from financial structure:
\[ \text{ROE} = \frac{\text{Net Income}}{\text{EBT}} \times \frac{\text{EBT}}{\text{EBIT}} \times \frac{\text{EBIT}}{\text{Revenue}} \times \frac{\text{Revenue}}{\text{Total Assets}} \times \frac{\text{Total Assets}}{\text{Equity}} \]

| Factor | Ratio | Interpretation |
|---|---|---|
| Tax burden | Net Income / EBT | Higher = lower effective tax rate |
| Interest burden | EBT / EBIT | Lower = higher interest load relative to operating profit |
| Operating margin | EBIT / Revenue | Core operational profitability |
| Asset turnover | Revenue / Assets | Asset utilization efficiency |
| Leverage | Assets / Equity | Financial risk and magnification |
Maple Retail Corp. financial data:
| Metric | FY2024 | FY2025 |
|---|---|---|
| Revenue | \$22,000,000 | \$24,000,000 |
| Net Income | \$1,540,000 | \$1,980,000 |
| Total Assets | \$17,000,000 | \$18,000,000 |
| Shareholders' Equity | \$9,200,000 | \$9,600,000 |
FY2024 DuPont:
Net Profit Margin = 1,540,000 / 22,000,000 = 7.00%
Asset Turnover = 22,000,000 / 17,000,000 = 1.294×
Equity Multiplier = 17,000,000 / 9,200,000 = 1.848×
ROE = 7.00% × 1.294 × 1.848 = 16.74%

FY2025 DuPont:
Net Profit Margin = 1,980,000 / 24,000,000 = 8.25%
Asset Turnover = 24,000,000 / 18,000,000 = 1.333×
Equity Multiplier = 18,000,000 / 9,600,000 = 1.875×
ROE = 8.25% × 1.333 × 1.875 = 20.63%
Interpretation: ROE improved from 16.74% to 20.63%. The decomposition shows all three drivers improved modestly, with the increase in net profit margin (+1.25 percentage points) contributing the most to the ROE gain. Asset turnover and leverage both increased slightly, consistent with moderate revenue growth on a larger asset base with slightly higher debt financing.
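The decomposition can be sketched in Python using the FY2024/FY2025 figures above (the `dupont` helper is illustrative, not from the course):

```python
# Three-factor DuPont decomposition for Maple Retail Corp.
def dupont(net_income, revenue, assets, equity):
    """Return (net profit margin, asset turnover, equity multiplier, ROE)."""
    margin = net_income / revenue
    turnover = revenue / assets
    leverage = assets / equity
    return margin, turnover, leverage, margin * turnover * leverage

fy2024 = dupont(1_540_000, 22_000_000, 17_000_000, 9_200_000)
fy2025 = dupont(1_980_000, 24_000_000, 18_000_000, 9_600_000)

for label, (m, t, l, roe) in [("FY2024", fy2024), ("FY2025", fy2025)]:
    print(f"{label}: margin {m:.2%} x turnover {t:.3f} x leverage {l:.3f} = ROE {roe:.2%}")
```

Note that the three factors multiply back to Net Income / Equity, so the decomposition is exact; its value is in showing which factor moved.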
3.3 Economic Value Added (EVA)
ROE and ROA are accounting-based measures that do not account for the cost of equity capital. A business can show positive net income and still be destroying shareholder value if it earns less than investors require. Economic Value Added corrects this by subtracting the full cost of capital from net operating profit.
\[ \text{EVA} = \text{NOPAT} - (\text{WACC} \times \text{Capital Employed}) \]

Where:

\[ \text{NOPAT} = \text{EBIT} \times (1 - \text{Tax Rate}) \]

\[ \text{Capital Employed} = \text{Total Assets} - \text{Non-Interest-Bearing Current Liabilities} \]

\[ \text{WACC} = \frac{E}{E+D}\, r_e + \frac{D}{E+D}\, r_d (1 - T) \]

Where \(E\) = market value of equity, \(D\) = market value of debt, \(r_e\) = cost of equity, \(r_d\) = cost of debt (pre-tax), \(T\) = tax rate.
Assume Maple Retail Corp. has: EBIT = $3,120,000; Tax Rate = 25%; WACC = 9%; Capital Employed = $16,200,000
NOPAT = $3,120,000 × (1 − 0.25) = $2,340,000
Capital Charge = 9% × $16,200,000 = $1,458,000
EVA = $2,340,000 − $1,458,000 = $882,000
Since EVA > 0, Maple Retail is creating shareholder value above and beyond its cost of capital. A negative EVA would indicate value destruction even with positive accounting profits.
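The EVA calculation above, as a minimal sketch:

```python
# EVA for Maple Retail Corp.: NOPAT less a charge for the capital employed.
ebit = 3_120_000
tax_rate = 0.25
wacc = 0.09
capital_employed = 16_200_000

nopat = ebit * (1 - tax_rate)              # $2,340,000
capital_charge = wacc * capital_employed   # $1,458,000
eva = nopat - capital_charge               # $882,000

# Positive EVA -> value creation above the cost of capital.
print(f"EVA = ${eva:,.0f}")
```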
3.4 Market Value Added (MVA)
MVA is the present value of all future expected EVAs. A firm that consistently generates positive EVA will have a high positive MVA; a firm that destroys value will trade at a discount to invested capital (negative MVA).
EVA and MVA are particularly useful for evaluating divisional performance in multi-segment organizations because they charge each division for the capital it employs, creating accountability for asset-light versus asset-heavy business models.
Chapter 4: Non-Financial Performance Metrics
4.1 Customer Performance Metrics
Financial metrics confirm past results; customer metrics provide earlier signals about future revenue and growth. Understanding the full customer journey — from acquisition through retention — requires a portfolio of customer-focused KPIs.
4.1.1 Customer Satisfaction Score (CSAT)
CSAT asks customers to rate their satisfaction with a specific interaction, typically on a 1–5 scale; the reported score is usually the percentage of respondents choosing the top ratings. CSAT is best used to evaluate transactional satisfaction (e.g., after a support call or a delivery). It is highly specific to the measured interaction and does not capture overall brand sentiment or loyalty.
4.1.2 Net Promoter Score (NPS)
NPS, developed by Fred Reichheld and published in the Harvard Business Review in 2003, asks a single question: “How likely are you to recommend us to a friend or colleague?” Respondents rate from 0–10:
- Promoters (9–10): Loyal enthusiasts who actively refer others
- Passives (7–8): Satisfied but not enthusiastic; vulnerable to competitive offers
- Detractors (0–6): Unhappy customers who may actively discourage others
A financial services firm surveys 400 recent clients. Results:
- Promoters (9–10): 180 respondents → 45%
- Passives (7–8): 140 respondents → 35%
- Detractors (0–6): 80 respondents → 20%
NPS = 45% − 20% = +25
Industry context matters: an NPS of +25 is mediocre in software (average ~30–40) but strong in financial services (industry average ~15–20). The diagnostic value of NPS lies in tracking it over time and segmenting by customer type, channel, or product to identify where satisfaction is improving or deteriorating.
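Assuming raw 0–10 ratings are available, the NPS calculation can be sketched as follows (the `nps` helper is my own; the synthetic ratings match the 400-client example):

```python
# NPS from a list of 0-10 ratings: % promoters minus % detractors.
def nps(ratings):
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# 180 promoters, 140 passives, 80 detractors -> 45% - 20% = +25
ratings = [10] * 180 + [7] * 140 + [5] * 80
print(nps(ratings))  # 25.0
```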
4.1.3 Customer Lifetime Value (CLV)
For a simple subscription model:
\[ \text{CLV} = \frac{\text{Average Monthly Margin per Customer}}{\text{Monthly Churn Rate}} \]

More generally:
\[ \text{CLV} = \sum_{t=1}^{T} \frac{m_t}{(1+d)^t} \]

where \(m_t\) is the net margin from the customer in period \(t\) and \(d\) is the discount rate.
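Both CLV forms can be sketched in Python (the function names and sample inputs are assumptions for illustration, not course figures):

```python
# Simple churn-based CLV and the general discounted-sum form.
def clv_simple(monthly_margin, monthly_churn):
    """Margin / churn: expected lifetime margin for a steady subscription."""
    return monthly_margin / monthly_churn

def clv_discounted(margins, d):
    """margins[t-1] is the net margin in period t; d is the per-period discount rate."""
    return sum(m / (1 + d) ** t for t, m in enumerate(margins, start=1))

# Hypothetical customer: $50/month margin, 2% monthly churn.
print(clv_simple(50, 0.02))                        # 2500.0
print(round(clv_discounted([50] * 36, 0.01), 2))   # 36 months of $50, discounted at 1%/month
```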
4.1.4 Customer Acquisition Cost (CAC) and the CLV/CAC Ratio
CAC is the total sales and marketing spend in a period divided by the number of new customers acquired in that period. The CLV-to-CAC ratio measures the return on customer acquisition investment. A ratio of at least 3:1 is generally considered healthy, meaning the lifetime value of a customer is at least three times what it cost to acquire them. A ratio below 1 means the company loses money on every customer it acquires.
4.2 Employee Engagement
Employee engagement is a measure of the degree to which employees are committed to, motivated by, and satisfied with their work and workplace. Research consistently shows that engaged employees are more productive, provide better customer service, and are less likely to leave — making engagement a key leading indicator of both operational performance and financial outcomes.
Common employee engagement KPIs include:
- Engagement survey score (% highly engaged)
- Voluntary turnover rate (annualized)
- Absenteeism rate
- Internal promotion rate
- Training hours per employee
4.3 Operational Efficiency Metrics
Operational efficiency metrics measure how well an organization uses its inputs to produce outputs. They are central to the internal process perspective of the Balanced Scorecard.
4.3.1 Cycle Time and Throughput
Cycle time is the elapsed time required to complete one unit of work from start to finish; throughput is the number of units a process completes per period. The two are complementary views of process speed: reducing cycle time with the same resources generally raises throughput.
4.3.2 Utilization Rate
\[ \text{Utilization Rate} = \frac{\text{Actual Output (or Time Used)}}{\text{Maximum Possible Output (or Available Time)}} \times 100 \]

In professional services, utilization rate is the proportion of available employee hours that are billed to clients. A utilization rate that is consistently too high (above 85–90%) signals risk of burnout; too low (below 60%) signals excess capacity and margin pressure.
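A minimal sketch of the utilization calculation with flag thresholds drawn from the ranges above (the exact cutoffs and the hour figures are my own assumptions):

```python
# Utilization rate with burnout / excess-capacity flags.
def utilization(billed_hours, available_hours):
    return billed_hours / available_hours

def flag(rate):
    # Thresholds assumed from the 85-90% and 60% ranges in the text.
    if rate > 0.90:
        return "burnout risk"
    if rate < 0.60:
        return "excess capacity"
    return "healthy"

r = utilization(1_450, 1_800)   # hypothetical consultant: ~80.6%
print(f"{r:.1%}: {flag(r)}")
```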
4.3.3 Inventory Metrics
For businesses that carry inventory, two key metrics are:
\[ \text{Inventory Turnover} = \frac{\text{Cost of Goods Sold}}{\text{Average Inventory}} \]

\[ \text{Days Sales in Inventory (DSI)} = \frac{365}{\text{Inventory Turnover}} \]

Higher turnover (lower DSI) generally indicates more efficient inventory management, though what is optimal varies significantly by industry.
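A quick sketch, reusing the Maple Retail COGS with an assumed average inventory of $2,600,000:

```python
# Inventory turnover and days sales in inventory (DSI).
def inventory_turnover(cogs, avg_inventory):
    return cogs / avg_inventory

def days_sales_in_inventory(turnover):
    return 365 / turnover

# COGS from the Maple Retail example; average inventory is assumed.
t = inventory_turnover(15_600_000, 2_600_000)
print(t, round(days_sales_in_inventory(t), 1))  # 6.0 60.8
```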
Chapter 5: Data Analytics for Performance Management
5.1 The Analytics Maturity Spectrum
Organizations and analytical projects can be positioned along a spectrum from simple description to sophisticated optimization. Gartner’s framework identifies four levels:
| Level | Type | Question Answered | Complexity | Value |
|---|---|---|---|---|
| 1 | Descriptive | What happened? | Low | Moderate |
| 2 | Diagnostic | Why did it happen? | Moderate | High |
| 3 | Predictive | What will happen? | High | Higher |
| 4 | Prescriptive | What should we do? | Very High | Highest |
Most organizations operate primarily at Levels 1–2. Moving to Levels 3–4 requires more sophisticated data infrastructure, statistical or machine learning capabilities, and organizational readiness to act on model outputs.
5.2 Descriptive Analytics
Descriptive analytics summarizes historical data to provide a clear picture of past performance. It is the foundation upon which all other analytics types are built.
Common descriptive analytics outputs:
- Summary statistics: Means, medians, ranges, percentiles for key metrics
- Trend analysis: Revenue or cost over time, typically visualized with line charts
- Distribution analysis: How a metric (e.g., customer spend) is distributed across the population
- Segmentation: Breaking aggregate totals into meaningful sub-groups (by region, product, customer type)
5.3 Diagnostic Analytics
Diagnostic analytics goes beyond description to explain why a performance gap occurred. It relies on techniques that decompose an aggregate result into its drivers.
5.3.1 Drill-Down Analysis
Drill-down analysis decomposes a top-level metric progressively into sub-components. The analyst starts at the highest level of aggregation and systematically breaks the result down until the source of the gap is isolated.
- Step 1 (What happened?): National revenue is down 10% vs. prior year.
- Step 2 (Where?): By region: Ontario −15%, Quebec −2%, West +1%. The Ontario gap drives the total.
- Step 3 (What product?): Within Ontario: Electronics −28%, Apparel +3%, Home & Garden −4%. Electronics drives the Ontario gap.
- Step 4 (Why Electronics?): Unit volume down 22%; average selling price down 8%. Volume is the primary driver.
- Step 5 (Root cause): A major competitor launched a competing product line in Ontario in Q2, capturing significant market share in Electronics.
Root cause identified: Competitor entry into Ontario Electronics category.
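The drill-down logic can be sketched with illustrative figures shaped like the walkthrough above (the regional totals approximate, rather than reproduce, the quoted percentages):

```python
# Drill-down sketch: find the worst region, then the worst product within it.
current = {
    ("Ontario", "Electronics"): 3_600_000, ("Ontario", "Apparel"): 2_060_000,
    ("Ontario", "Home & Garden"): 1_920_000,
    ("Quebec", "All"): 4_900_000, ("West", "All"): 5_050_000,
}
prior = {
    ("Ontario", "Electronics"): 5_000_000, ("Ontario", "Apparel"): 2_000_000,
    ("Ontario", "Home & Garden"): 2_000_000,
    ("Quebec", "All"): 5_000_000, ("West", "All"): 5_000_000,
}

def pct_change(cur, prev):
    return (cur - prev) / prev

# Level 2: revenue change by region.
regions = sorted({k[0] for k in prior})
by_region = {r: pct_change(sum(v for k, v in current.items() if k[0] == r),
                           sum(v for k, v in prior.items() if k[0] == r))
             for r in regions}
worst = min(by_region, key=by_region.get)
print(by_region, "-> drill into", worst)

# Level 3: product change within the worst region.
by_product = {k[1]: pct_change(current[k], prior[k]) for k in prior if k[0] == worst}
print(by_product)
```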
5.3.2 Scatter Plots and Correlation Analysis
Scatter plots visualize the relationship between two continuous variables. They are the primary diagnostic tool for identifying whether a potential driver variable is correlated with a performance outcome.
5.3.3 Cohort Analysis
Cohort analysis groups customers or observations by a shared characteristic (typically the time period in which they were acquired) and tracks their behaviour over time. It is particularly powerful for diagnosing churn and retention dynamics.
A subscription software company tracks three customer cohorts by their month of acquisition:
| Cohort | Month 0 | Month 1 | Month 2 | Month 3 |
|---|---|---|---|---|
| Jan 2025 | 100% | 82% | 71% | 65% |
| Feb 2025 | 100% | 79% | 68% | 61% |
| Mar 2025 | 100% | 74% | 62% | 55% |
The declining Month 1 retention across cohorts (82% → 79% → 74%) suggests a worsening onboarding experience for newer customers. This is an early warning signal that would be invisible in aggregate churn figures.
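The deteriorating-cohort signal can be checked programmatically (retention rates from the table above):

```python
# Cohort retention table; flag strictly declining Month-1 retention across cohorts.
retention = {
    "Jan 2025": [1.00, 0.82, 0.71, 0.65],
    "Feb 2025": [1.00, 0.79, 0.68, 0.61],
    "Mar 2025": [1.00, 0.74, 0.62, 0.55],
}

month1 = [rates[1] for rates in retention.values()]       # [0.82, 0.79, 0.74]
worsening = all(b < a for a, b in zip(month1, month1[1:]))
print("Month-1 retention:", month1, "worsening:", worsening)  # worsening: True
```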
5.4 Predictive Analytics
Predictive analytics uses historical data to forecast future outcomes. Common techniques include:
- Time-series forecasting: Using patterns in past data (trends, seasonality, cycles) to project future values. Methods include moving averages, exponential smoothing, and ARIMA models.
- Regression analysis: Using one or more predictor variables to estimate the expected value of a target variable (e.g., predicting next quarter’s sales based on leading indicators).
- Classification models: Predicting which category an observation falls into — for example, classifying customers as high-churn-risk or low-churn-risk based on behavioural signals.
5.5 Prescriptive Analytics
Prescriptive analytics recommends specific actions to achieve desired outcomes, often using optimization algorithms or simulation. It answers “what should we do?” rather than merely “what will happen?”
Examples:
- Price optimization: Algorithms that recommend the profit-maximizing price for each product given demand elasticity estimates
- Capacity planning: Simulation models that recommend staffing levels given forecasted demand and service-level targets
- Portfolio optimization: Financial models that recommend the capital allocation across business units that maximizes expected EVA subject to risk constraints
Prescriptive analytics is the most valuable and most complex tier. It requires a reliable predictive model as its foundation and organizational processes capable of acting on its recommendations.
Chapter 6: Benchmarking
6.1 What Is Benchmarking?
Benchmarking is the systematic comparison of an organization’s performance and practices against a reference point (another internal unit, a competitor, or a best-in-class performer) to identify gaps and improvement opportunities. Benchmarking is not about copying what others do. It is about understanding the performance gap, investigating the practices that account for it, and adapting those practices to the organization’s own context.
6.2 Types of Benchmarking
6.2.1 Internal Benchmarking
Internal benchmarking compares performance across units, divisions, geographies, or time periods within the same organization. It is the simplest form — data is readily available, definitions are consistent, and cultural context is shared.
Advantages: Data accessibility, consistent definitions, ease of sharing practices
Limitations: Best internal practice may still lag best-in-class external performance; risk of benchmarking to a low standard
6.2.2 Competitive Benchmarking
Competitive benchmarking compares performance against direct competitors. It answers the question: are we winning or losing relative to the rivals our customers can choose?
Data sources: Public financial statements, industry association reports, analyst research, market intelligence services, customer surveys (share-of-wallet, brand preference)
Limitations: Competitors do not disclose operational data; comparisons may be distorted by different accounting policies, geographic mix, or business model differences.
6.2.3 Functional Benchmarking
Functional benchmarking compares a specific business function against organizations in different industries that perform the same function. The premise is that the best procurement department in the world may not be in your industry.
6.2.4 Generic (Best-in-Class) Benchmarking
Generic benchmarking identifies the world-class performers of a specific process regardless of industry and benchmarks against them. It is the most ambitious and most transformative form of benchmarking, but also the most difficult to implement because the context differences between the benchmark organization and the subject are large.
6.3 The Benchmarking Process
A rigorous benchmarking study follows a structured process:
- Identify what to benchmark: Which process or KPI is the subject? Is it a priority for organizational performance?
- Identify benchmark partners: Internal units, competitors, or functional leaders?
- Collect data: From partners (with their cooperation) or from public sources
- Analyze performance gaps: Quantify the gap and understand its components
- Identify enabling practices: What does the benchmark partner do differently that accounts for the performance gap?
- Adapt and implement: Translate the identified practices into the organization’s own context
- Monitor progress: Track whether the gap is closing; recycle the process
Chapter 7: Variance Analysis
7.1 Standard Costing and the Purpose of Variance Analysis
A standard cost is a predetermined, carefully estimated cost of one unit of input or output under efficient operating conditions. Variance analysis compares actual results against these standards and decomposes the total difference into interpretable components. It serves several management purposes:
- Performance evaluation: Did a production manager control costs effectively?
- Operational diagnosis: Is a price variance driven by supplier pricing or procurement inefficiency?
- Standard revision: Are the standards themselves still valid, or do they need updating?
- Learning: What do the variances reveal about process efficiency?
7.2 Direct Materials Variances
The total materials variance decomposes into a price variance and a quantity (efficiency) variance:

\[ \text{Materials Price Variance (MPV)} = (\text{Actual Price} - \text{Standard Price}) \times \text{Actual Quantity} \]

Favorable (F) if actual price < standard price; Unfavorable (U) if actual price > standard price.

\[ \text{Materials Quantity Variance (MQV)} = (\text{Actual Quantity} - \text{Standard Quantity Allowed}) \times \text{Standard Price} \]

Favorable if actual quantity used < standard allowed; Unfavorable if actual > standard.
7.3 Direct Labour Variances
The total labour variance decomposes analogously into a rate variance and an efficiency variance:

\[ \text{Labour Rate Variance (LRV)} = (\text{Actual Rate} - \text{Standard Rate}) \times \text{Actual Hours} \]

\[ \text{Labour Efficiency Variance (LEV)} = (\text{Actual Hours} - \text{Standard Hours Allowed}) \times \text{Standard Rate} \]
7.4 Flexible Budget Variance Analysis
Flexible budget variance analysis separates the volume effect from the price and efficiency effects of actual performance.
7.4.1 The Three-Way Variance Framework
| Variance | Calculation | Insight |
|---|---|---|
| Sales Volume Variance | (Actual Units − Budgeted Units) × Budgeted Unit Contribution Margin | Effect of selling more or fewer units than planned |
| Flexible Budget Variance | Actual Result − Flexible Budget Result at Actual Volume | Combined effect of price, rate, and efficiency differences |
| Total Variance | Actual Result − Static Budget Result | Total difference from plan |
Northside Manufacturing produces a single product with the following standards per unit:
- Direct Materials: 3 kg × $4.00/kg = $12.00
- Direct Labour: 2 hours × $18.00/hr = $36.00
- Variable Overhead: 2 hours × $6.00/hr = $12.00
- Standard Variable Cost per Unit: $60.00
- Standard Selling Price: $95.00
- Standard Contribution Margin: $35.00
Budgeted output: 5,000 units
Actual results for the period:
- Units produced and sold: 5,400
- Revenue: $499,500 (actual price $92.50/unit)
- Direct Materials purchased and used: 16,740 kg at $4.20/kg = $70,308
- Direct Labour: 11,340 hours at $17.50/hr = $198,450
- Variable Overhead: $65,772
Step 1 — Sales Variances:
Sales Price Variance = (Actual Price − Standard Price) × Actual Units = ($92.50 − $95.00) × 5,400 = −$13,500 (U)
Sales Volume Variance = (Actual Units − Budgeted Units) × Standard CM = (5,400 − 5,000) × $35.00 = +$14,000 (F)
Step 2 — Materials Variances:
Standard Quantity Allowed = 5,400 units × 3 kg = 16,200 kg
MPV = ($4.20 − $4.00) × 16,740 = +$3,348 (U)
MQV = (16,740 − 16,200) × $4.00 = +$2,160 (U)
Total Materials Variance = $5,508 (U)
Step 3 — Labour Variances:
Standard Hours Allowed = 5,400 × 2 = 10,800 hours
LRV = ($17.50 − $18.00) × 11,340 = −$5,670 (F)
LEV = (11,340 − 10,800) × $18.00 = +$9,720 (U)
Total Labour Variance = $4,050 (U)
Step 4 — Summary:
| Variance | Amount | F/U |
|---|---|---|
| Sales Price Variance | \$13,500 | U |
| Sales Volume Variance | \$14,000 | F |
| Materials Price Variance | \$3,348 | U |
| Materials Quantity Variance | \$2,160 | U |
| Labour Rate Variance | \$5,670 | F |
| Labour Efficiency Variance | \$9,720 | U |
Management Interpretation: The business sold 400 more units than budgeted (F volume variance), but at a lower price, nearly offsetting the volume benefit. Materials were more expensive per kg (U price) and were used inefficiently (U quantity). Labour was paid at a lower rate (F rate — perhaps more junior workers were used), but those workers were less efficient, requiring 540 more hours than the standard allowed. This pattern — lower rate, higher hours — is a common signal of a skill-mix substitution that did not deliver the expected efficiency.
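The full variance set can be reproduced in Python (all inputs are from the Northside example; the F/U comments follow the example's sign convention):

```python
# Northside Manufacturing variance calculations.
actual_units, budget_units = 5_400, 5_000
std_price, actual_price, std_cm = 95.00, 92.50, 35.00

sales_price_var = (actual_price - std_price) * actual_units   # -13,500 (U)
sales_volume_var = (actual_units - budget_units) * std_cm     # +14,000 (F)

# Direct materials: actual price/quantity vs. standard quantity allowed.
aq, ap, sp = 16_740, 4.20, 4.00
sq_allowed = actual_units * 3                                 # 16,200 kg
mpv = (ap - sp) * aq                                          # +3,348 (U)
mqv = (aq - sq_allowed) * sp                                  # +2,160 (U)

# Direct labour: actual rate/hours vs. standard hours allowed.
ah, ar, sr = 11_340, 17.50, 18.00
sh_allowed = actual_units * 2                                 # 10,800 hours
lrv = (ar - sr) * ah                                          # -5,670 (F)
lev = (ah - sh_allowed) * sr                                  # +9,720 (U)

print(round(mpv + mqv), round(lrv + lev))                     # 5508 4050
```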
7.5 Sales Mix and Quantity Variances
When a company sells multiple products, the total sales volume variance can be further decomposed into a sales mix variance (did the actual mix of products sold differ from the planned mix?) and a sales quantity variance (did total volume differ from plan?).
Lakeside Products Ltd. sells two products:
| Product | Budgeted Units | Budgeted Mix | Budgeted CM/Unit |
|---|---|---|---|
| Alpha | 3,000 | 60% | \$40 |
| Beta | 2,000 | 40% | \$25 |
| Total | 5,000 | 100% | \$34 (weighted avg) |
Actual results: Alpha sold 2,800 units; Beta sold 2,700 units. Total actual: 5,500 units.
Actual Mix: Alpha 2,800/5,500 = 50.9%; Beta 2,700/5,500 = 49.1%
Sales Quantity Variance = (5,500 − 5,000) × $34 = +$17,000 (F) (Selling 500 more units at the budgeted weighted-average CM)
Sales Mix Variance (Alpha):
Actual units in budgeted mix = 5,500 × 60% = 3,300
Mix Variance Alpha = (2,800 − 3,300) × $40 = −$20,000 (U)

Sales Mix Variance (Beta):
Actual units in budgeted mix = 5,500 × 40% = 2,200
Mix Variance Beta = (2,700 − 2,200) × $25 = +$12,500 (F)
Total Sales Mix Variance = −$20,000 + $12,500 = −$7,500 (U)
Interpretation: The company sold 500 more total units than planned (favorable quantity), but sold a higher proportion of the lower-margin Beta product and fewer of the high-margin Alpha product. The mix shift cost $7,500 of contribution margin, partially offsetting the volume benefit.
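A sketch reproducing the Lakeside mix and quantity variances (data structure is my own):

```python
# Sales mix and quantity variances for Lakeside Products Ltd.
budget = {"Alpha": (3_000, 40.0), "Beta": (2_000, 25.0)}   # (budgeted units, CM/unit)
actual_units = {"Alpha": 2_800, "Beta": 2_700}

budget_total = sum(u for u, _ in budget.values())           # 5,000
actual_total = sum(actual_units.values())                   # 5,500
weighted_cm = sum(u * cm for u, cm in budget.values()) / budget_total  # $34

# Quantity: total volume change at the budgeted weighted-average CM.
quantity_var = (actual_total - budget_total) * weighted_cm  # +17,000 (F)

# Mix: actual units vs. actual total sold in the budgeted mix, at each product's CM.
mix_var = sum((actual_units[p] - actual_total * u / budget_total) * cm
              for p, (u, cm) in budget.items())             # -7,500 (U)
print(quantity_var, mix_var)
```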
Chapter 8: Target Costing and Kaizen Costing
8.1 Target Costing
Traditional cost management asks: what does it cost to make this product, and can we sell it for enough to make a profit? Target costing reverses this logic: given the market price needed to be competitive, and the required profit margin, what is the maximum allowable cost?
If the current estimated cost exceeds the target cost, the design team must work to close the cost gap through product redesign, value engineering, or supplier negotiations — before the product is launched, not after.
8.1.1 The Target Costing Process
- Conduct market research: Identify the price at which customers will purchase the product given competitive alternatives (the competitive market price).
- Determine required margin: Management establishes the minimum acceptable profit margin for the product.
- Compute the target cost: Target Cost = Market Price − Required Margin.
- Estimate the current cost: Using the preliminary design, estimate the full cost to produce.
- Identify the cost gap: Current estimated cost − target cost = cost gap to be eliminated.
- Value engineering: Cross-functional teams systematically review every component and process to find cost reductions that do not compromise customer-valued quality or functionality.
- Launch or abandon: If the cost gap can be closed, the product proceeds. If not, the product is redesigned or abandoned.
Clearview Technologies is developing a new consumer electronics product. Market research indicates the competitive selling price is $149.99. Management requires a profit margin of at least 20% of selling price.
Target Cost = $149.99 × (1 − 0.20) = $119.99
Preliminary engineering estimates the cost to produce at $136.50. The cost gap is: $136.50 − $119.99 = $16.51 per unit
The value engineering team identifies:
- Substitute a lower-cost speaker component (same acoustic quality): saves $4.20
- Redesign the plastic casing to reduce material use: saves $3.80
- Negotiate volume pricing with display supplier: saves $5.50
- Simplify internal cable routing (reduces labour): saves $3.80
Total savings identified: $17.30 — sufficient to close the gap. The product proceeds to production with the revised design.
8.2 Kaizen Costing
While target costing focuses on the design phase before production begins, kaizen costing focuses on continuous cost reduction during the production phase through incremental improvements.
Kaizen costing differs from standard costing in a fundamental way: standard costing compares actual costs to a fixed standard (set once, typically annually) and generates variances. Kaizen costing sets a target below the current standard and continuously reduces the standard as improvements are realized.
| Feature | Standard Costing | Kaizen Costing |
|---|---|---|
| When applied | Both design and production | Production phase only |
| Standard basis | Engineering/historical | Current actual cost |
| Direction | Control to standard | Reduce below current actual |
| Employee role | Follow established procedures | Identify and implement improvements |
| Variance meaning | Deviation from standard | Failure to achieve improvement target |
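The ratchet logic in the table's last two rows can be illustrated with a short sketch. The starting cost and the 2% per-period reduction rate below are hypothetical, chosen only to show how each period's kaizen target becomes the next period's standard.

```python
# Kaizen-costing ratchet: each period's target sits below the current actual
# cost, and the standard is reset as improvements are realized.

def kaizen_targets(current_cost, reduction_rate, periods):
    """Return [(period, target_cost)]; each target becomes the new standard."""
    targets = []
    cost = current_cost
    for period in range(1, periods + 1):
        cost = round(cost * (1 - reduction_rate), 2)  # reduce below current actual
        targets.append((period, cost))
    return targets

# Hypothetical: $100.00 unit cost, 2% reduction target per period
schedule = kaizen_targets(current_cost=100.00, reduction_rate=0.02, periods=3)
# period 1: 98.00, period 2: 96.04, period 3: 94.12
```

Under standard costing, a cost of $98.00 against a $100.00 standard is a favourable variance; under kaizen costing, $98.00 in period 2 would be a failure to achieve the new $96.04 target.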
Chapter 9: Dashboard Design Principles
9.1 The Purpose of a Dashboard
The term “dashboard” derives from the automotive dashboard: a small set of high-priority gauges (speed, fuel level, engine temperature) that a driver needs to monitor while focused primarily on the road. A well-designed business dashboard applies the same discipline: only essential information, organized for rapid comprehension.
9.2 Dashboard Types
| Dashboard Type | Primary Purpose | Primary Audience | Update Frequency |
|---|---|---|---|
| Strategic | Monitor progress toward long-term objectives | Executives, board | Monthly, quarterly |
| Operational | Monitor day-to-day performance | Operations managers | Daily, real-time |
| Analytical | Explore data for insights | Analysts | Ad hoc |
| Tactical | Track project or initiative progress | Middle management | Weekly |
9.3 Design Principles
9.3.1 The Five-Second Rule
A well-designed dashboard communicates its most important message within five seconds of viewing. If a reader needs more than five seconds to understand what the dashboard is telling them about performance, it is overloaded, poorly organized, or poorly labeled.
9.3.2 Preattentive Attributes
Certain visual properties are processed by the human brain before conscious attention is engaged — these are called preattentive attributes. Effective dashboard design deploys these strategically to direct the viewer’s eye to the most important information.
Key preattentive attributes:
- Color hue: Red draws attention; use it sparingly to signal problems
- Size: Larger elements appear more important
- Position: Elements in the upper left are typically seen first (Western reading pattern)
- Contrast: High-contrast elements stand out from low-contrast backgrounds
9.3.3 Data-Ink Ratio
Edward Tufte’s principle of data-ink ratio holds that every element on a visualization should serve a data-communication purpose. Elements that consume visual space without encoding data (gridlines, borders, backgrounds, decorative icons) are “chart junk” and should be minimized or eliminated.
\[ \text{Data-Ink Ratio} = \frac{\text{Ink Used to Encode Data}}{\text{Total Ink Used in the Chart}} \]
A ratio approaching 1.0 is ideal. In practice, this means removing default gridlines, using thin or no borders on chart panels, avoiding 3D chart effects, and eliminating shadow or gradient fills.
9.3.4 Sparklines and Small Multiples
Sparklines are small, word-sized trend lines embedded in tables or text. They communicate directional trend information in minimal space — ideal for dashboards where space is constrained.
Small multiples are a series of charts with identical structure displaying different sub-segments of the data. They allow direct comparison across many categories without cognitive overhead, because the viewer only needs to learn the chart structure once.
9.3.5 Color Usage
- Use color purposefully, not decoratively
- Limit the palette to 2–4 colors in most cases
- Use sequential color scales (light to dark of one hue) for ordered data (e.g., sales volume from low to high)
- Use diverging color scales (e.g., red to white to blue) for data that has a meaningful midpoint (e.g., variance above/below target)
- Use categorical color scales (distinct hues) sparingly for labeling discrete groups
- Never use red and green as the only distinguishing colors (colorblind accessibility)
9.4 Choosing the Right Chart Type
| If you want to show… | Use this chart type |
|---|---|
| Change over time (continuous) | Line chart |
| Comparison across discrete categories | Bar chart (horizontal or vertical) |
| Part-to-whole composition | Stacked bar, pie (only for 2–3 categories) |
| Relationship between two variables | Scatter plot |
| Geographic distribution | Choropleth map |
| Distribution of a single variable | Histogram, box plot |
| Single key metric vs. target | KPI card with sparkline, bullet chart |
| Multiple metrics in one view | Dashboard with small multiples |
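The selection table above can be encoded as a simple lookup, a sketch of how an analyst might build a chart-choosing aid into their own tooling. The intent phrasings are our shorthand for the table rows, not a standard taxonomy.

```python
# Illustrative chart-type lookup mirroring the selection table above.
CHART_GUIDE = {
    "change over time": "line chart",
    "comparison across categories": "bar chart (horizontal or vertical)",
    "part-to-whole composition": "stacked bar (pie only for 2-3 categories)",
    "relationship between two variables": "scatter plot",
    "geographic distribution": "choropleth map",
    "distribution of a single variable": "histogram or box plot",
    "single metric vs. target": "KPI card with sparkline, or bullet chart",
    "multiple metrics in one view": "dashboard with small multiples",
}

def recommend_chart(intent: str) -> str:
    """Return the suggested chart type, or a prompt to sharpen the question."""
    return CHART_GUIDE.get(intent.lower().strip(), "clarify the analytical intent first")
```

The fallback message is deliberate: if the analytical intent cannot be named, no chart type is right yet.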
9.5 From Charts to Stories: The Narrative Arc
A dashboard presents a snapshot; a story presents a sequence of insights with a narrative arc. Effective business communication typically uses the SCR structure:
- Situation: Context and baseline — what is the environment, and what do we expect?
- Complication: The performance gap or issue — what happened that requires attention?
- Resolution: The root cause and recommended action — why did it happen, and what should we do?
In Tableau, the Story feature allows analysts to sequence dashboards and worksheets into a connected presentation, with captions at each step articulating the “so what” of that screen.
Chapter 10: Performance Management in Public Sector and Not-for-Profit Settings
10.1 The Distinctive Context
Performance analytics in public sector organizations and not-for-profit (NFP) entities differs from the for-profit context in several important ways:
| Dimension | For-Profit | Public Sector / NFP |
|---|---|---|
| Primary objective | Financial return to shareholders | Mission fulfillment (public value, social impact) |
| “Bottom line” | Net income, EVA | Mission achievement, efficiency of resource use |
| Key stakeholders | Shareholders, customers | Citizens, clients, funders, taxpayers, government |
| Revenue source | Customer payments | Taxes, grants, donations, government transfers |
| Accountability | Market discipline, investor oversight | Democratic accountability, regulatory compliance, donor stewardship |
10.2 Adapting the Balanced Scorecard for Public Sector
The original Balanced Scorecard places financial outcomes at the top of the hierarchy. In the public sector, this hierarchy is typically inverted or reframed:
Typical adaptations:
| BSC Perspective | For-Profit Question | Public Sector Adaptation |
|---|---|---|
| Financial | How do we look to shareholders? | How do we demonstrate value for money to funders and taxpayers? |
| Customer | How do customers see us? | How do citizens / clients experience our services? |
| Internal Process | What must we excel at? | How do we design and deliver services efficiently? |
| Learning & Growth | Can we continue to improve? | What capabilities do we need to fulfill our evolving mission? |
10.3 Efficiency vs. Effectiveness in the Public Sector
A critical distinction in public sector performance management is between efficiency, the quantity of output produced per unit of input, and effectiveness, the degree to which intended outcomes are actually achieved.
An organization can be efficient without being effective (e.g., processing welfare claims quickly but incorrectly) and effective without being efficient (e.g., achieving excellent health outcomes at extremely high cost). The best public sector performance frameworks measure both.
10.4 Logic Models
A logic model traces the causal chain through which a program converts resources into social impact, from inputs and activities through outputs to short-, medium-, and long-term outcomes:
| Stage | Definition | Example (Employment Training Program) |
|---|---|---|
| Inputs | Resources invested | Funding, staff, facilities, curriculum |
| Activities | What the program does | Training sessions, job coaching, employer partnerships |
| Outputs | Direct products of activities | Participants trained, sessions delivered |
| Short-term outcomes | Immediate changes in participants | Skills gained, résumé quality improved |
| Medium-term outcomes | Behavioral changes | Job interviews obtained, employment secured |
| Long-term outcomes | Sustained social impact | Reduced unemployment, increased income, reduced social assistance use |
10.5 Challenges in NFP Performance Measurement
Public sector and NFP performance measurement faces challenges not typically encountered in for-profit settings:
- Attribution problem: It is difficult to isolate the causal effect of a specific program from other social, economic, and environmental factors affecting outcomes.
- Time horizon mismatch: Long-term social outcomes (e.g., reducing incarceration rates) manifest years or decades after the intervention, while funding cycles are annual.
- Multiple principal problem: NFP organizations answer to multiple stakeholders (government funders, private donors, clients, boards) who may have different and conflicting performance expectations.
- Crowding out of mission: If funders demand easily measurable outputs, organizations may shift toward measurable activities that are not the most mission-aligned (“teaching to the test”).
- Data availability: Unlike for-profit firms with integrated financial systems, many NFP organizations lack robust data collection infrastructure.
Chapter 11: Communicating Performance — Storyboards and Presentations
11.1 The Communication Challenge in Analytics
Completing rigorous analysis is necessary but insufficient. The analyst’s insights must reach decision-makers in a form they can understand and act on — and decision-makers are busy, often non-technical, and exposed to many competing claims on their attention. The communication challenge is as demanding as the analytical challenge.
11.2 Understanding the Audience
Before designing any communication, the analyst must explicitly consider:
- Who is the primary audience? (Executive, operational manager, board, regulator, client)
- What do they already know? (Background knowledge, familiarity with the data, prior exposure to the issue)
- What decision are they making? (This defines what conclusion the analysis must deliver)
- What is their attitude toward the subject? (Neutral and curious? Skeptical? Resistant to a particular conclusion?)
- How much time do they have? (60-second elevator pitch vs. 30-minute board presentation)
Answers to these questions should drive every design choice: the level of detail, the chart types, the amount of text, and the logical structure of the narrative.
11.3 The SCR Framework for Analytical Narratives
The Situation-Complication-Resolution structure, adapted from management consulting practice, provides a robust framework for analytical communication:
- Situation: Establishes shared context. What is the business, and what is the normal state of affairs? This section should be brief — it is not news to the audience.
- Complication: Introduces the change or tension that motivates the analysis. Something has happened that disrupts the expected state of affairs. This is the “so what” that justifies the analysis.
- Resolution: Provides the analytical finding, root cause, and recommended action. This is the substance of the analytical work.
Situation: Rideau Outdoor Retail Co. operates 42 stores across Ontario and Quebec, targeting the outdoor recreation segment. Q3 2025 is historically the peak quarter, accounting for approximately 38% of annual revenue.
Complication: Q3 2025 revenue of $18.4M missed the budget of $21.0M by 12.4%, and fell 8% below Q3 2024 actual revenue of $20.0M. This represents the largest Q3 shortfall in five years and threatens the annual plan.
Resolution: Diagnostic analysis identifies that 85% of the shortfall is attributable to the camping equipment category in the Ontario market, where a new competitor opened six stores in Q2 2025 and launched an aggressive pricing promotion. Recommended actions: (1) immediate price matching on the 12 highest-volume camping SKUs, (2) differentiation through loyalty program enhancements, (3) investigation of potential exclusive supplier arrangements.
11.4 Storyboarding
A storyboard is a planned sequence of slides or screens designed before any final visualizations are built. It forces the analyst to solve the narrative problem — what is the logical sequence of insights? — before investing time in production.
Storyboard process:
- Write the key message of each screen in one sentence (the “caption-first” approach)
- Arrange screens so each one builds on the previous
- Verify the sequence answers all three diagnostic questions (what, why, now what)
- Identify which chart type and which data will support each screen
- Only then open the visualization tool
A storyboard need not be digital — sketching screens on sticky notes or paper is often faster and more flexible.
11.5 Summary Communication
After a full analytical presentation, a single summary slide condenses the entire diagnostic into the three core answers. This slide serves as both the conclusion of a live presentation and a standalone artifact that the audience can share with others:
| Question | Answer |
|---|---|
| What happened? | [Concise statement of observed performance vs. benchmark] |
| Why did it happen? | [Root cause, expressed as a single sentence with key evidence] |
| Now what? | [Recommended action, stated as a concrete next step] |
If any of these three cells cannot be completed clearly and concisely, the diagnostic is incomplete.
Chapter 12: Synthesis — Connecting Analytics to Strategy
12.1 The Performance Management System as a Whole
The topics covered in this course — the Balanced Scorecard, KPI design, financial ratios, DuPont analysis, EVA, non-financial metrics, benchmarking, variance analysis, target costing, dashboard design, and public sector applications — are not isolated tools. They form a system of performance management that operates at multiple organizational levels simultaneously.
12.2 Connecting the Course Topics
| Course Topic | Role in the Performance Management System |
|---|---|
| Balanced Scorecard | The organizing framework for strategic objectives and KPIs |
| Strategy Maps | The causal logic connecting objectives across BSC perspectives |
| Financial Metrics (ROA, ROE, EVA) | Lagging financial outcomes: the ultimate accountability measure |
| DuPont Decomposition | Diagnostic tool for understanding drivers of financial outcome change |
| Non-Financial Metrics (NPS, Engagement) | Leading indicators that predict future financial outcomes |
| Benchmarking | External reference point for performance targets and improvement ideas |
| Variance Analysis | Operational control: comparing actual to plan in granular detail |
| Target & Kaizen Costing | Cost management: setting and progressively tightening cost targets |
| Dashboard Design | The communication layer: making performance visible to decision-makers |
| Public Sector Adaptations | Context-specific adjustments for non-market organizations |
12.3 The Analytical Workflow
In practice, a performance analyst working in an organization follows a recurring cycle:
- Plan: Understand the strategic context, stakeholder needs, and available data. Design the analytical approach before opening any tool.
- Collect & Prepare: Access data sources, profile the data, clean and reshape for analysis.
- Explore (Descriptive): Build summary metrics, trend charts, and segmentation views to understand what happened.
- Diagnose: Use drill-downs, scatter plots, variance decompositions, and cohort analyses to understand why it happened.
- Frame (Prescriptive): Translate findings into recommendations: now what?
- Communicate: Build a storyboard, create polished visualizations, and deliver findings through a structured narrative.
- Monitor: Establish dashboards and reporting rhythms that allow stakeholders to track whether recommendations are being implemented and whether the performance gap is closing.
This cycle repeats continuously — each round generates new data, new insights, and new questions.
12.4 The Ethical Obligation of the Performance Analyst
The performance analyst occupies a position of significant influence. The metrics selected, the benchmarks chosen, the variances highlighted, and the recommendations made shape resource allocation decisions, compensation outcomes, and organizational strategy. This influence carries ethical responsibilities:
- Objectivity: Present findings that are supported by evidence, even when they contradict management preferences
- Completeness: Do not selectively omit unflattering data; present a balanced picture
- Accuracy: Verify data quality and disclose limitations; do not report false precision
- Independence: Resist pressure to reverse-engineer analysis toward a predetermined conclusion
- Transparency: Make assumptions explicit; explain the logic of the analysis
- Confidentiality: Handle sensitive business data with appropriate discretion
For accounting and finance professionals, these obligations are reinforced by the CPA Canada Code of Professional Conduct, which requires objectivity, integrity, and due care in all professional work. Performance analytics is a domain where these principles are tested regularly.
12.5 Looking Ahead: Analytics in a Changing Environment
The field of performance analytics is evolving rapidly. Several developments are reshaping how organizations measure and manage performance:
Integrated Reporting: The move toward reporting that combines financial and non-financial (ESG: environmental, social, governance) performance in a single integrated framework. The International Integrated Reporting Council (IIRC) framework — now part of the IFRS Sustainability Disclosure Standards ecosystem — is gaining traction among large public companies.
Real-Time Analytics: Cloud-based data warehouses and modern business intelligence tools (Tableau, Power BI, Looker) are enabling near-real-time performance monitoring, replacing monthly static reports with continuously updated dashboards.
Artificial Intelligence and Machine Learning: Predictive and prescriptive analytics capabilities that previously required specialized data science teams are increasingly embedded in mainstream business intelligence tools, making them accessible to accounting and finance professionals.
People Analytics: The systematic application of data analytics to human resources decisions — workforce planning, engagement, performance management, retention. This extends the “learning and growth” perspective of the Balanced Scorecard into the domain of data-driven HR.
Key Formulas Reference
| Formula | Expression |
|---|---|
| Gross Margin % | \(\frac{\text{Revenue} - \text{COGS}}{\text{Revenue}} \times 100\) |
| Operating Margin % | \(\frac{\text{EBIT}}{\text{Revenue}} \times 100\) |
| ROA | \(\frac{\text{Net Income}}{\text{Total Assets}}\) |
| ROE | \(\frac{\text{Net Income}}{\text{Shareholders' Equity}}\) |
| DuPont ROE (3-factor) | \(\text{Net Profit Margin} \times \text{Asset Turnover} \times \text{Equity Multiplier}\) |
| EVA | \(\text{NOPAT} - (\text{WACC} \times \text{Capital Employed})\) |
| NPS | \(\%\text{Promoters} - \%\text{Detractors}\) |
| CLV (simple) | \(\frac{\text{Avg Monthly Margin}}{\text{Monthly Churn Rate}}\) |
| CAC | \(\frac{\text{Total Sales & Marketing Spend}}{\text{New Customers Acquired}}\) |
| Inventory Turnover | \(\frac{\text{COGS}}{\text{Average Inventory}}\) |
| DSI | \(\frac{365}{\text{Inventory Turnover}}\) |
| Target Cost | \(\text{Target Selling Price} - \text{Required Profit Margin}\) |
| Materials Price Variance | \((\text{AP} - \text{SP}) \times \text{AQ Purchased}\) |
| Materials Quantity Variance | \((\text{AQ Used} - \text{SQ Allowed}) \times \text{SP}\) |
| Labour Rate Variance | \((\text{AR} - \text{SR}) \times \text{AH}\) |
| Labour Efficiency Variance | \((\text{AH} - \text{SH Allowed}) \times \text{SR}\) |
Abbreviations: AP = Actual Price, SP = Standard Price, AQ = Actual Quantity, SQ = Standard Quantity, AR = Actual Rate, SR = Standard Rate, AH = Actual Hours, SH = Standard Hours
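Several of the reference formulas translate directly into code. The sketch below is illustrative only: the function names are ours, and the sample figures are invented.

```python
# Illustrative implementations of selected entries from the formulas table.

def dupont_roe(net_profit_margin, asset_turnover, equity_multiplier):
    # 3-factor DuPont: ROE = NPM x Asset Turnover x Equity Multiplier
    return net_profit_margin * asset_turnover * equity_multiplier

def eva(nopat, wacc, capital_employed):
    # EVA = NOPAT - (WACC x Capital Employed)
    return nopat - wacc * capital_employed

def nps(pct_promoters, pct_detractors):
    # NPS = % promoters - % detractors (in percentage points)
    return pct_promoters - pct_detractors

def clv_simple(avg_monthly_margin, monthly_churn_rate):
    # Simple CLV: average monthly margin / monthly churn rate
    return avg_monthly_margin / monthly_churn_rate

# Invented sample figures:
roe = dupont_roe(0.10, 2.0, 1.5)              # NPM 10%, 2.0x turnover, 1.5x leverage
value_added = eva(nopat=850_000, wacc=0.09, capital_employed=6_000_000)
score = nps(45, 20)                           # 45% promoters, 20% detractors
lifetime_value = clv_simple(30.0, 0.05)       # $30/month margin, 5% monthly churn
```

Expressing the formulas as functions makes the decomposition logic explicit: for instance, holding margin and turnover constant while varying the equity multiplier isolates the leverage contribution to ROE.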