Chili Piper: What is Segment’s outlook on evaluating tech and how does this tie into your business goals?
Mark Miller: How we evaluate tech depends on whether it's a new technology approaching us or one we're seeking out to accomplish a project. If it's the former, we try to get our hands on the product immediately and try it out.
We find that many vendors try to push us through a full sales pitch when their product isn't great, so we push to get access to the product or data right away and test it. If we're seeking out a technology to accomplish a project, we have a list of requirements and we evaluate how well the vendor matches them.
When it comes to evaluating smaller tech companies, our VP of Growth, Guillaume, likes to help these companies test use cases and product-market fit. If a small, up-and-coming company helps Segment succeed, [Guillaume] can also help with their brand and product research.
CP: What is Segment’s current martech stack?
MM: Infrastructure: Segment, Hull.io, Zapier, Redshift.
Marketing: Customer.io, Google AdWords, Facebook Ads, Drift, Chili Piper.
Analytics: Mode.
Data Enrichment: Clearbit, Madkudu, Datanyze.
CP: What considerations go into purchasing tech?
MM: Generally, we are driven almost entirely by functionality when it comes to technology vendors. With data enrichment tools, the quality of the data matters most. Luckily, budget is typically not an issue because the lifetime value of Segment customers is very high, which enables us to spend more on acquisition.
CP: Why does Segment prefer to work with up-and-coming tech companies?
MM: A few different reasons.
CP: How did you design an experiment to test Chili Piper and Madkudu? How did you ensure the confidence of the experiment?
MM: Segment practices what we preach when it comes to being a data-driven company. We started testing our web form because we recognized that inbound leads account for the bulk of the company's revenue, and tweaking our form could potentially have a big impact down the funnel.
From this A/B test, we found that using Madkudu with Chili Piper in our forms improved the conversion of highly qualified demo requests to opportunities by 61%. This is a HUGE increase. We were setting expectations at more like 10-20%.
This experiment was built on Segment's own infrastructure: any events triggered were sent back through Segment into Redshift, which allowed us to build a funnel report from demo request through to opportunity. On top of that, we had an A/B test report tracking the sample size, the volume of opportunities, and the conversion rate to opportunity.
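To make that concrete, here is a minimal sketch of the kind of instrumentation that makes a funnel like this possible. The event names, properties, and variant labels are hypothetical, not the ones Segment actually used; the idea is simply that each funnel step fires a track() call through Segment, which forwards the events to Redshift where the funnel can be rebuilt.

```typescript
// Assumes Segment's analytics.js snippet is already loaded on the page,
// exposing a global `analytics` object.
declare const analytics: {
  track: (event: string, properties?: Record<string, unknown>) => void;
};

// Hypothetical event fired when a visitor submits the demo request form,
// recording which side of the A/B split they saw.
analytics.track('Demo Requested', {
  variant: 'madkudu_chili_piper', // or 'control'
  leadScore: 'very_high',         // hypothetical MadKudu qualification bucket
});

// Hypothetical event fired later, when the request converts to an opportunity.
analytics.track('Opportunity Created', {
  variant: 'madkudu_chili_piper',
});
```

With every step landing in Redshift as its own event table, a funnel report from demo request through to opportunity is just a join across those tables, grouped by variant.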
Here, we created a 50% control group and used Optimizely's statistical significance calculator to determine how large a sample we needed to run the experiment. That translated to running the experiment for two months, which allowed us to conclude it with over 95% confidence.
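For readers who want to see the math behind a calculator like that, here is a rough sketch of the standard two-proportion sample-size formula. The baseline conversion rate and minimum detectable lift below are illustrative assumptions, not figures from the interview, and Segment used Optimizely's tool rather than code like this.

```typescript
// z-values for a two-sided test at 95% confidence and 80% statistical power.
const zAlpha = 1.96;
const zBeta = 0.84;

// Required visitors per variant to detect a given relative lift over a baseline
// conversion rate: n = (z_a + z_b)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
function sampleSizePerVariant(baselineRate: number, minDetectableLift: number): number {
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + minDetectableLift);
  const numerator = Math.pow(zAlpha + zBeta, 2) * (p1 * (1 - p1) + p2 * (1 - p2));
  return Math.ceil(numerator / Math.pow(p1 - p2, 2));
}

// Example (illustrative numbers): a 10% baseline demo-to-opportunity rate and a
// 20% relative lift (the low end of the expectations mentioned above) requires
// roughly 3,800 demo requests in each of the control and treatment groups.
console.log(sampleSizePerVariant(0.10, 0.20));
```

The required sample size, divided by your weekly volume of demo requests, is what translates into a run time like the two months mentioned here.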
I can't stress enough the importance of making decisions backed by statistical evidence. All too often, teams automate a process without running a legitimate A/B test, only to be left questioning later whether the change actually had a positive or negative impact, with no means to answer that question.