We wrote a post about how we implemented a process that saved SDRs an hour of time per day by automating account distribution.
After we published this post, a question that kept coming up from readers was:
“Can you share details around how your team was able to narrow down to these criteria?”
Ask and we shall deliver 🫡
Let’s get into it.
One of the biggest challenges for outbound sales is identifying which accounts should be prioritized.
We had over 100,000 records in our CRM. New Accounts were being added constantly. And there was no way to distinguish great accounts from bad ones.
Additionally, our best reps kept getting promoted out of the team — which we love to see!
But this meant that we needed a way to teach brand-new incoming reps how to identify a great fit account.
“Oh wow, account scoring, how novel,” I hear you saying sarcastically.
It’s actually pretty cool. And I’m going to get deep into the secret sauce of how we calculated account score — stay with us.
So: we calculate our account score using 300+ account-level data points from Goodfit that correlate with win rate.
Here’s a sample of those data points:
Coming up with a valuable account score is… challenging.
But not impossible.
Here’s how we did it:
We evaluated about 10 different data providers across 70 different metrics for firmographic, technographic, and custom data fields.
We ended up going with Goodfit because they had:
Wahid, the GOAT VP of Revenue at Goodfit, recommended something at the beginning of the process that made a HUGE impact on how we created our first account score:
Trust your gut when building out the first iteration of an account score.
We started by having conversations internally and polling different groups and divisions about what makes a great customer. Sales Development, Sales, Post-Sales, Customer Success, and Marketing all had valuable, unique perspectives.
This informed the data we would eventually need from a provider, shaped our evaluation process for choosing one, and gave us a starting point for testing our hypotheses.
Goodfit ran an analysis to see how often factors occurred in a Closed Won set versus all other companies in our TAM.
For each data point, they counted how many companies in the Closed Won set had that factor versus how many of the other, not-yet-won companies in our TAM also had it.
From there, you can turn those two counts into proportions and run a two-proportion z-test to determine how confident you can be that the two proportions are actually different.
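To make that concrete, here's a minimal sketch of that two-proportion z-test in Python. The factor and all counts are made-up numbers for illustration, not our data and not Goodfit's actual pipeline:

```python
# Minimal sketch of the two-proportion z-test described above.
# The factor ("uses Salesforce") and every count here are made up.
from statsmodels.stats.proportion import proportions_ztest

closed_won_with_factor = 180    # Closed Won accounts that have the factor
closed_won_total = 400          # all Closed Won accounts in the analysis
tam_with_factor = 9_000         # other TAM companies that have the factor
tam_total = 60_000              # all other companies in the TAM

z_stat, p_value = proportions_ztest(
    count=[closed_won_with_factor, tam_with_factor],
    nobs=[closed_won_total, tam_total],
)

won_rate = closed_won_with_factor / closed_won_total   # 45% of winners
tam_rate = tam_with_factor / tam_total                  # 15% of the rest

print(f"Closed Won: {won_rate:.0%} | rest of TAM: {tam_rate:.0%}")
print(f"z = {z_stat:.2f}, p = {p_value:.3g}")
# A very small p-value means the factor really is over-represented among
# Closed Won accounts, so it's a candidate to carry weight in the score.
```

Repeat that for each data point and you end up with a shortlist of factors that meaningfully separate winners from the rest of the TAM.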
Then we tested the accuracy by seeing how this score stacked up against the accounts that we manually determined were great.
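One simple way to express that kind of check in code is to see how many of the hand-picked great accounts land near the top of the score ranking. This is a hypothetical sketch; the field names and the top-10% cutoff are assumptions, not our actual process:

```python
# Hypothetical sanity check: how many manually flagged "great" accounts
# land in the top 10% of the model's ranking? Field names are assumed.
def top_decile_recall(accounts: list[dict]) -> float:
    """accounts: [{"name": str, "score": float, "manually_great": bool}, ...]"""
    ranked = sorted(accounts, key=lambda a: a["score"], reverse=True)
    top = ranked[: max(1, len(ranked) // 10)]   # top 10% by score
    great_total = sum(a["manually_great"] for a in accounts)
    hits = sum(a["manually_great"] for a in top)
    return hits / great_total if great_total else 0.0

# If most of the hand-picked accounts show up in the top decile, the score
# is at least directionally agreeing with what experienced reps already know.
```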
Our initial analysis used data from companies that had been customers for at least one year and had renewed.
We knew one limitation of this analysis: we couldn't be sure how a company's firmographic and technographic profile might change once it had been a Chili Piper customer for more than a year.
So we re-ran the analysis six months after getting Goodfit data into our CRM, this time based on recent Closed Won data, and that's how we arrived at our newest predictive model and account score.
After we implemented this new account score, we set up a lead routing flow using Distro to route leads to SDRs when they become a good fit.
Here’s our flow from a bird’s eye view:
Essentially:
For the full deets on how to set this up, check our full article on how we automated the assignment of high-score accounts to our SDR team.
Since implementing this system, we’ve been able to save each SDR 20 hours per month.
And our Average Contract Value (ACV) has increased by 25% — although this was based on a multitude of factors (we’ll touch more on this in a later post).
So, yeah. I’d say it worked.
If you want to spice up your workflows using this same flow… Get a demo of Distro today.