SpicyOps: How we refined our account scoring process with Goodfit

Morgan Cliburn
October 11, 2024

We wrote a post about how we implemented a process that saved SDRs an hour of time per day by automating account distribution.

After we published this post, a question that kept coming up from readers was:

“Can you share details around how your team was able to narrow down to this criteria?” 

Ask and we shall deliver 🫡

Let’s get into it.

Our problem: We needed to scale outbound without increasing headcount

One of the biggest challenges for outbound sales is identifying which accounts should be prioritized. 

We had over 100,000 records in our CRM. New Accounts were being added constantly. And there was no way to distinguish great accounts from bad ones.

Additionally, our best reps kept getting promoted out of the team — which we love to see! 

But this meant that we needed a way to teach brand-new incoming reps how to identify a great fit account.

Enter: Account scoring

“Oh wow account scoring, how novel” I hear you saying sarcastically. 

It’s actually pretty cool. And I’m going to get deep into the secret sauce of how we calculated account score — stay with us.

We calculate our account score using 300+ account-level data points from Goodfit that correlate with win rate. 

Here’s a sample of those data points: 

  • CRM (Because Chili Piper only works with Salesforce and HubSpot CRM) 
  • Industry (B2B + more specific breakdowns) 
  • CTAs on their website that indicate investment in demand generation (e.g. Get a Demo, Free Trial, Download Now, Register Now, etc.)
  • Geography
  • Employee count
  • XDR team size and hiring count
  • Demand gen size and hiring count
  • Presence of paid campaigns
  • Whether or not they’re utilizing a competitor
  • Tech stack (email server, marketing automation, etc.)
  • Have high complexity in their routing (no surprise here), meaning they:
    • Have 10+ sales reps 
    • Have a high influx of web traffic
    • Have a complex sales cycle (have AMs, CS) in addition to AEs and ADRs
  • They use multiple tools to optimize their funnel:
    • Marketing automation (Pardot, Marketo, HubSpot)
    • Sales Engagement 
    • In-app optimization tools 
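
To make the idea concrete, here's a minimal sketch of how data points like these roll up into a single score. The field names and weights are purely illustrative, not our actual model:

```python
# Hypothetical weights per factor -- illustrative only, not Chili Piper's real model
WEIGHTS = {
    "crm_is_salesforce_or_hubspot": 30,  # hard requirement, so weighted heavily
    "has_demand_gen_ctas": 15,
    "sales_reps_10_plus": 15,
    "uses_marketing_automation": 10,
    "uses_competitor": 10,
    "runs_paid_campaigns": 10,
    "high_web_traffic": 10,
}

def account_score(account: dict) -> int:
    """Sum the weights of every factor present on the account."""
    return sum(w for factor, w in WEIGHTS.items() if account.get(factor))

example = {
    "crm_is_salesforce_or_hubspot": True,
    "has_demand_gen_ctas": True,
    "uses_marketing_automation": True,
}
print(account_score(example))  # 55
```

The point isn't the exact numbers; it's that every enriched field becomes a signal you can weigh, sum, and rank accounts by.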

How did we calculate our account score?

Coming up with a valuable account score is… challenging. 

But not impossible. 

Here’s how we did it: 

First, we needed a data provider. 

We evaluated about 10 different data providers across 70 different metrics for firmographic, technographic, and custom data fields. 

We ended up going with Goodfit because they had:

  1. The best match rate on the data points we needed
  2. Customizable data, so we received only the fields we wanted
  3. A competitive price — AKA we paid for what we needed without the additional fluff that tends to come with data packages 

Then, we needed a hypothesis

Wahid, the GOAT VP of Revenue at Goodfit, recommended something at the beginning of the process that made a HUGE impact on how we created our first account score: 

Trust your gut when building out the first iteration of an account score. 

We started by having conversations internally and polling different groups/divisions about what makes a great customer. Sales development, Sales, Post-Sales, Customer Success, and Marketing all had valuable, unique insights on what makes a great customer. 

This informed the data that we would eventually need from a provider, the evaluation process for deciding on a provider, and gave us a starting point for testing our hypotheses.

Next, we came up with our data points

Goodfit ran an analysis to see how often factors occurred in a Closed Won set versus all other companies in our TAM. 

For each data point, they then counted how many companies in the Closed Won set had that factor versus how many companies in the rest of the TAM had it. 

From there, you can turn those two counts into proportions and run a two-proportion z-test to determine how confident you can be that the two proportions are actually different. 
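
As a rough illustration of that statistical step (not Goodfit's actual pipeline, and with made-up counts), a two-proportion z-test looks like this:

```python
import math

def two_proportion_z(won_with, won_total, other_with, other_total):
    """Z-score for the difference between two proportions: how much more
    common a factor is among Closed Won accounts than in the rest of the TAM."""
    p1 = won_with / won_total      # share of won accounts with the factor
    p2 = other_with / other_total  # share of everyone else with it
    # Pooled proportion under the null hypothesis of "no difference"
    pooled = (won_with + other_with) / (won_total + other_total)
    se = math.sqrt(pooled * (1 - pooled) * (1 / won_total + 1 / other_total))
    return (p1 - p2) / se

# Made-up example: 80 of 200 won accounts run paid campaigns,
# versus 1,500 of 20,000 other companies in the TAM
z = two_proportion_z(80, 200, 1500, 20000)
print(round(z, 2))  # a |z| above ~1.96 is significant at the 95% level
```

A large z-score means the factor genuinely shows up more often in winners, so it earns a spot (and weight) in the score.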

Then we tested the accuracy by seeing how this score stacked up against the accounts that we manually determined were great. 
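
That accuracy check can be as simple as this sketch (hypothetical accounts and scores): rank everything by score and see whether the top of the list matches the accounts we'd already hand-flagged as great.

```python
# Hypothetical hand-labeled validation set -- names and scores are made up
accounts = [
    {"name": "acme",    "score": 82, "manually_great": True},
    {"name": "globex",  "score": 74, "manually_great": True},
    {"name": "initech", "score": 35, "manually_great": False},
    {"name": "hooli",   "score": 28, "manually_great": False},
]

# Rank by score and check what fraction of the top half we'd already flagged
ranked = sorted(accounts, key=lambda a: a["score"], reverse=True)
top_half = ranked[: len(ranked) // 2]
hit_rate = sum(a["manually_great"] for a in top_half) / len(top_half)
print(hit_rate)  # 1.0
```

If the hit rate is low, the weights (or the factors themselves) need another pass.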

And we continue to iterate (and iterate and iterate and iterate)

Our initial analysis used data from companies that had been customers for at least one year and had renewed.

We knew this analysis had a limitation: we weren't sure how a company's firmographic and technographic profile might change once it's been a Chili Piper customer for longer than a year. 

So, six months after bringing Goodfit data into our CRM, we re-ran the analysis on recent Closed Won data. That's how we arrived at our newest predictive model and account score. 

Has it worked? 

After we implemented this new account score, we set up a lead routing flow using Distro to route leads to SDRs when they become a good fit. 

Here’s our flow from a bird’s eye view: 

  • If a Record’s Goodfit score is updated and there’s no Open Opportunity, then assign and update Ownership, round-robining through SDRs
  • If the ownership field hasn’t changed after 72 hours, then reassign the record 
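
Under the hood, those two rules boil down to something like this sketch. Distro handles this declaratively in production; the field names and SDR pool here are hypothetical:

```python
from datetime import datetime, timedelta
from itertools import cycle

SDRS = cycle(["sdr_ana", "sdr_ben", "sdr_cara"])  # round-robin pool (made-up names)

def route_on_score_update(record: dict) -> dict:
    """Rule 1: when a record's Goodfit score updates and there's no open
    opportunity, assign it to the next SDR in the round-robin."""
    if record.get("has_open_opportunity"):
        return record  # leave records with live deals alone
    record["owner"] = next(SDRS)
    record["assigned_at"] = datetime.now()
    return record

def reassign_if_stale(record: dict, now: datetime) -> dict:
    """Rule 2: if ownership hasn't changed within 72 hours, move the
    record on to the next SDR so it doesn't sit idle."""
    if now - record["assigned_at"] > timedelta(hours=72):
        record["owner"] = next(SDRS)
        record["assigned_at"] = now
    return record
```

The 72-hour reassignment is the piece that keeps good-fit accounts from dying in someone's queue.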

For the full deets on how to set this up, check out our article on how we automated the assignment of high-score accounts to our SDR team.

Since implementing this system, we’ve been able to save each SDR 20 hours per month.

And our Average Contract Value (ACV) has increased by 25% — although this was based on a multitude of factors (we’ll touch more on this in a later post). 

So, yeah. I’d say it worked.

If you want to spice up your workflows using this same flow… Get a demo of Distro today.

See the power of Distro in action today!

