Prove causation in business initiatives

At some point in every FP&A professional’s career, you will be asked to prove that a business initiative caused its results.

  • It could be new productivity software.

  • Or a marketing program.

  • Or call center automation.

Regardless of the request, there is 1 technique I’ve used dozens of times to prove causation:

Difference in Differences (DID)

Short of running a randomized experiment, a Difference in Differences (DID) analysis is the closest you can get to proving causation.

Without it, you are left to say: “we saw sales increase, but we’re not sure if the initiative caused it.”

With it, you can confidently say: “this initiative caused x% of the sales increase.”

See the difference?

Here is how to do it, step-by-step:

Step 1: Understand Your Initiatives

It’s important that the initiative you are trying to measure fits the Difference in Differences (DID) template. Here’s what you’ll need for the analysis:

  • 1 specific metric you are measuring

  • 2 time periods - one before the initiative and one after

  • A ‘treatment’ group that was impacted by the new initiative

  • A ‘control’ group that was not impacted by the new initiative and does similar work to the treatment group

For the example below, we’ll be analyzing a new phone system that was rolled out to a single team in a call center to improve average handle time.

Step 2: Arrange Your Data

In this example, we implemented a new call center software in mid-January to ‘Team 2’.

In a few months we’ll also roll it out to ‘Team 1’, but for now they are on the old system.

In this setup, ‘Team 1’ is the control group - they didn’t get the new software. And ‘Team 2’ is the treatment group, since they got the new software in mid-January.

At a quick glance, without a DID analysis, you would be led to believe the new system did not improve Average Handle Time - since ‘Team 2’ increased their handle time by 0.1 minutes from December to February.

But that’s not the full picture.
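As a sketch, the data can be arranged like this in Python. The December and February handle times below are hypothetical placeholders - only the December-to-February changes (+0.3 and +0.1 minutes) match this example:

```python
# Average Handle Time (minutes) before and after the mid-January rollout.
# Absolute values are hypothetical; the changes (+0.3 and +0.1) match the example.
handle_time = {
    "Team 1 (control)":   {"December": 4.0, "February": 4.3},  # old system throughout
    "Team 2 (treatment)": {"December": 4.0, "February": 4.1},  # new system from mid-January
}

for team, periods in handle_time.items():
    change = periods["February"] - periods["December"]
    print(f"{team}: {change:+.1f} minutes")
```

Looking only at each team’s own change, Team 2 still got worse - which is exactly the misleading “quick glance” view above.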

Step 3: Perform the Analysis

We can see that the control group increased 0.3 minutes from December to February.

Under the core DID assumption - that both teams would otherwise have trended the same way - the treatment group would also have increased 0.3 minutes if not for the new phone system implemented in January.

Since the treatment group only increased 0.1 minutes, we would say: ‘The new phone system caused a 0.2 minute improvement in Average Handle Time.’

This means when we roll out the new phone system to ‘Team 1’ in the future, we can expect a similar improvement - as long as the two teams stay comparable.
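The calculation above can be sketched in a few lines of Python. The before/after values are hypothetical, chosen so the changes match the 0.3 and 0.1 minutes in this example:

```python
# Difference in Differences: compare the change in the treatment group
# to the change in the control group over the same two periods.
control_before, control_after = 4.0, 4.3      # Team 1 (old system) - hypothetical values
treatment_before, treatment_after = 4.0, 4.1  # Team 2 (new system) - hypothetical values

control_change = control_after - control_before        # +0.3 minutes
treatment_change = treatment_after - treatment_before  # +0.1 minutes

# DID estimate: the effect attributable to the new phone system.
# A negative number means the system reduced handle time vs. the counterfactual.
did_estimate = treatment_change - control_change
print(f"DID estimate: {did_estimate:+.1f} minutes")  # prints "DID estimate: -0.2 minutes"
```

The same logic works in a spreadsheet: subtract the control group’s change from the treatment group’s change.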

In Summary:

You don’t need to be a data scientist to perform causal analysis.

Finding an initiative with 1) a control group, 2) a treatment group, and 3) multiple time periods is critical - and not all business initiatives are set up this way.

Find an initiative that launched in your company and try this today.

How will you take action on this?

See you next Saturday.