The photo above is from the LOVE park in Philadelphia. We were attending the Shop.Org Digital Summit organized by the National Retail Federation (NRF). Apropos, I will share the love … My compliments to the organizers for a fantastic summit. Also a big thanks to the NRF for recognizing us as one of the notable marketing technologies at the conference, and for recommending us to the CMO Council for their walk-through. And finally thank you! to all who stopped by our booth for a chat. We discussed business, talked about alliances, shared our respective experiences in the space. All in all – it was a great experience and we will be back.
After the show the team compiled the questions we had heard at our booth in an internal debrief. I thought it could serve as a good FAQ list. So, in no particular order, here are the top ten questions.
Q1: So why is relying on Google Analytics so risky?
Short answer: it is inaccurate and biased
Long answer: Google Analytics is a great tool for looking at site visitors. But in my opinion it’s the most expensive free tool you could use for measuring marketing effectiveness.
Why? Because it biases spend towards the paid channels that lead to the conversion – arguably the least important touchpoint in the shopper's journey. Unless you do multi-touch attribution from the point of first engagement through the conversation leading up to the sale, the weight is skewed towards that single channel.
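To make the bias concrete, here is a minimal sketch – with a made-up journey and channel names – of how last-click-style credit compares with even a naive multi-touch split:

```python
# Hypothetical shopper journey; channels and credit splits are illustrative,
# not any vendor's actual model. The final touch is the converting channel.
journey = ["organic_search", "email", "display", "paid_search"]

# Last-click style: all credit goes to the final touch before the sale
last_click = {ch: 0.0 for ch in journey}
last_click[journey[-1]] = 1.0

# A naive multi-touch alternative: split credit evenly across all touches
multi_touch = {ch: 1.0 / len(journey) for ch in journey}

print(last_click)   # paid_search gets 100% of the sale
print(multi_touch)  # each touch gets 25%
```

Even this crude even split shows how much value last-click silently strips from the earlier touches.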
Q2: How do you prove your models are working?
Short answer: We predict future sales and the actual sales line up
Long answer: I understand the skepticism. The first phase of revenue attribution is a set-up phase in which we build a predictive model to estimate future sales. The accuracy of the model is tested against holdout data during development, and the model is then used to forecast future sales. The attribution is an output of this model. You know the model is working because the predictions track the actual sales (before they occur). If the predictions start slipping, it’s time for a model refresh.
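As a rough illustration of the holdout idea – the linear-trend model, the weekly figures, and the 10% refresh threshold below are all illustrative assumptions, not our actual model:

```python
# Minimal sketch of holdout validation for a sales forecast.
def fit_trend(sales):
    # ordinary least squares slope/intercept over the time index
    n = len(sales)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(sales) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, sales)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

weekly_sales = [100, 104, 109, 113, 118, 121, 127, 130]  # made-up figures
train, holdout = weekly_sales[:6], weekly_sales[6:]

slope, intercept = fit_trend(train)
predictions = [slope * t + intercept for t in range(6, 8)]

# If holdout error drifts past the threshold, it's time for a model refresh
errors = [abs(p - a) / a for p, a in zip(predictions, holdout)]
needs_refresh = max(errors) > 0.10
print(needs_refresh)
```

The same check keeps running in production: as long as forecasts stay inside the error band, the attribution output the model produces can be trusted.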
Q3: How do you connect mobile with the full picture?
- How do you do cross-device attribution?
- How can you know if people found you on mobile and bought on desktop via organic search?
Short answer: User and Device “fingerprinting”.
Long answer: The above are all versions of the same question. We collect data through a tracking pixel that we use to link devices and customers across sessions. Our technology uses probabilistic matching on device “fingerprints” and identity matching across user sign-ins.
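A toy sketch of the idea, assuming a few illustrative fingerprint signals and a hypothetical score threshold; real systems use far richer features than these:

```python
# Hypothetical cross-device linking: sign-in identity is treated as a
# certain match, weaker signals contribute probabilistic score.
def match_score(a, b):
    if a.get("user_id") and a.get("user_id") == b.get("user_id"):
        return 1.0  # identity match on sign-in
    score = 0.0
    if a.get("ip") == b.get("ip"):
        score += 0.5  # same network is the strongest soft signal
    if a.get("timezone") == b.get("timezone"):
        score += 0.2
    if a.get("language") == b.get("language"):
        score += 0.2
    return score

mobile = {"ip": "203.0.113.7", "timezone": "EST", "language": "en-US"}
desktop = {"ip": "203.0.113.7", "timezone": "EST", "language": "en-US",
           "user_id": "shopper42"}

linked = match_score(mobile, desktop) >= 0.7  # illustrative threshold
print(linked)
```

Once two sessions are linked, the mobile discovery touch and the desktop purchase land in the same shopper journey for attribution.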
Q6: How do you do multitouch attribution?
- Do you use time decay curves?
- Would you need us to coach you on how to weight touches? You know your competitor ___ does that?
Short answer: Stochastic modeling
[Aside: It speaks to the caliber of the audience when they slip readily from marketing budget to deep technical questions.]
Long answer: I will point you to the chart to the right. Look at the journey of the shopper across all the touches from the first contact through to the sale. We break out the occurrence of each channel across five stages of the shopping journey – from First Contact to Conversation to Sale to Re-engagement and Repeat Sale. The weight allocated to each channel depends on the number of touchpoints at each of the listed stages, relative to the combinatorial possibilities in the shopping journey. Time decay curves are a rudimentary technique, and I’m aware our competitors use them. Our preferred technique is stochastic modeling. I can explain further details in a separate call – drop me a note below.
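The stage-based weighting can be sketched roughly as follows. The five stage names mirror the journey above, but the per-stage weights and the touch data are made-up assumptions, not the actual stochastic model:

```python
# Illustrative stage-weighted attribution for a single shopper journey.
stage_weights = {"first_contact": 0.3, "conversation": 0.2, "sale": 0.3,
                 "re_engagement": 0.1, "repeat_sale": 0.1}  # assumed weights

# (stage, channel) touches observed for one shopper
touches = [("first_contact", "organic"), ("conversation", "email"),
           ("conversation", "display"), ("sale", "paid_search")]

credit = {}
for stage, channel in touches:
    # split each stage's weight across the channels seen at that stage
    n_at_stage = sum(1 for s, _ in touches if s == stage)
    credit[channel] = credit.get(channel, 0.0) + stage_weights[stage] / n_at_stage

# normalize so credit for this journey sums to the full sale
total = sum(credit.values())
credit = {ch: c / total for ch, c in credit.items()}
print(credit)
```

Note how the first-contact channel keeps meaningful credit here, whereas a time-decay curve would steadily discount it just for being early.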
Q9: What’s the minimum size of data you need to do attribution?
Short answer: It depends
Long answer: This is a tough question with several layers to the answer. There is no lower limit if you are not interested in cross-channel (in-store sales) or cross-device attribution. In fact, our starter offering is suited to retailers of all sizes. If, however, you are doing algorithmic revenue attribution that requires cross-channel or cross-device matching, small-sample issues can affect the quality of the models. So my answer is “It depends”.
Q10: How is your report different from what I get from my vendors?
Short answer: The numbers from your vendors don’t add up.
Long answer: The metrics used by your vendors are “greedy” on participation: if a channel participates at any stage of the shopping journey, that channel gets full credit for the sale. The allocation should instead be partial, based on the stage of the touch and the length of the journey. That’s the concept underlying multi-touch revenue attribution.
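A tiny worked example, with made-up figures, of why those vendor reports over-count:

```python
# Each vendor claims full credit for any sale its channel touched,
# so the claims sum to more than the actual revenue.
sales = {"order1": 100, "order2": 200}
touched = {"email_vendor": ["order1", "order2"],
           "search_vendor": ["order1"],
           "display_vendor": ["order2"]}

vendor_claims = {v: sum(sales[o] for o in orders)
                 for v, orders in touched.items()}
total_claimed = sum(vendor_claims.values())

print(total_claimed)        # 600: double-counted across vendors
print(sum(sales.values()))  # 300: actual revenue
```

Add up the individual vendor reports and you "earned" twice your actual revenue – which is exactly the symptom the greedy metric produces.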
I’ve distilled the discussion to key points and paraphrased the questions for brevity, but I’m sure some readers will see their points reflected above. If you want to discuss any of this further, drop me a note – and I will see you at next year’s summit in Dallas.