Why Your Attribution Stack Is Already Wrong by the Time You Act on It
Every marketer who has stared at a dashboard has quietly wondered the same thing. Is this number real? Does it reflect what actually caused that sale, or is it just the last thing the system happened to record?
The question determines where millions of dollars go. And the discomfort it produces is growing as the number of channels expands, consideration windows stretch, and the platforms doing the reporting stand to gain the most from generous self-assessment.
That tension was the foundation of a panel at eTail Palm Springs 2026, where five marketing leaders from REI, Newton Baby, A.L.C., John Fluevog Shoes, and Vibe.co gathered to discuss what actually drives conversions and which measurement methodologies hold up under real-world scrutiny. 
What followed was less a debate about which model is best and more a working confession: no single methodology is trustworthy on its own, the best teams are layering multiple imperfect tools, and the data is already stale by the time you act on it.
Aaron Zagha, CMO of Newton Baby, set the tone. “Our source of truth, whenever humanly possible, is incrementality tests,” he said. “We always like to depend on causal inference, not just correlation.” Newton Baby uses a causal MMM (media mix model) informed by incrementality, supplements it with MTA (multi-touch attribution) for in-channel bid operations, and never uses MTA for budget allocation.
Max Green, who leads media analytics and SEO at REI, described a similar hierarchy. Incrementality first. MMM to fill gaps. In-platform data for real-time tactical decisions. The consensus across the panel was consistent: incrementality testing is the gold standard. But it has practical limits. Tests take time, and consumer behavior shifts while you wait.
“We went from Performance Max being wildly incremental two years ago to non-incremental at all. And that’s purely due to Google changes. We didn’t change anything.” – Aaron Zagha.
Rachel McGovern, Head of Sales at Vibe.co, reframed the conversation around what happens when the ad unit itself is not clickable. Streaming TV drives view-through conversions. Brands fixated on last-click attribution or Google Analytics as a single source of truth will never see them.
Vibe.co has integrated with roughly 40 measurement and incrementality partners to let its data flow into third-party MTA and MMM platforms. McGovern said she regularly encounters emerging brands that are told by those partners they are overspending on paid social, sometimes on channels with as much as 28% volatility in weekly performance. “If you’re not looking at the halo effect and how all of your media works together,” she said, “you’re missing a big decision funnel for yourself and your marketing budgets.”
Nicole Goldberg, Director of Ecommerce at A.L.C., reinforced this with simpler tools. Brand searches going up? New customer counts rising? Direct traffic increasing? These are accessible indicators that any team can monitor, even without enterprise-level measurement infrastructure.
The panelists’ frustration with siloed measurement and invisible halo effects directly mirrors the problem Fospha is built to solve. As the only full-funnel MMM delivering daily, ad-level insights, Fospha quantifies the full-funnel impact of channels like Meta and TikTok, including paid media’s halo effect on marketplace sales that most tools miss entirely.
Luiza Libardi, eCommerce and Marketing Senior Manager at John Fluevog Shoes, offered a counterpoint to the assumption that measurement is only for well-funded teams. She found a vendor at last year’s eTail, called InSite, that provides both MMM and MTA at roughly a third of the price of enterprise alternatives. For the first time, Fluevog is measuring in-store sales alongside online sales, and the results challenged several long-held assumptions.
Zagha backed this up with hard math. One of Newton Baby’s first measurement vendors cost about 10% of their media spend but improved performance by roughly 30% over two years. Green shared a similar arc at REI. Five years ago, they pulled media dollars away from campaigns to build an MMM. The resulting portfolio-wide ROI improvement justified the trade-off many times over.
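The shape of that trade-off can be sanity-checked with a rough sketch. All figures below are hypothetical illustrations of the 10%-cost, 30%-lift ratio Zagha cited, not Newton Baby’s actual numbers:

```python
# Hypothetical illustration of the measurement-vendor trade-off Zagha described.
# None of these dollar amounts come from the panel; only the 10% and 30% ratios do.

media_spend = 1_000_000            # assumed annual media budget
vendor_cost = 0.10 * media_spend   # vendor priced at ~10% of media spend
baseline_mer = 3.0                 # assumed blended revenue per ad dollar before
lift = 0.30                        # ~30% performance improvement over two years

revenue_before = media_spend * baseline_mer
revenue_after = media_spend * baseline_mer * (1 + lift)
net_gain = (revenue_after - revenue_before) - vendor_cost

print(net_gain)  # 800000.0 -> at these assumptions, the lift dwarfs the fee
```

At any plausible baseline return, a 30% lift on the whole portfolio swamps a fee of 10% of spend, which is why both Zagha and Green called the trade-off an easy one in hindsight.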
Goldberg, who did not have the budget for a measurement tool last year, described a scrappy alternative: tracking MER (essentially overall ROAS), monitoring CAC, new customers, email signups, direct traffic, and brand search, while leveraging the free lift studies available from Meta and Google. “Looking at last click should be no one’s North Star at this point,” she said.
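The scrappy metrics Goldberg lists reduce to two simple ratios. A minimal sketch, with illustrative sample values (the function names and numbers are ours, not hers):

```python
# Minimal sketch of the blended metrics Goldberg tracks in place of a
# measurement tool. Sample values are illustrative, not from the panel.

def mer(total_revenue: float, total_ad_spend: float) -> float:
    """Marketing Efficiency Ratio: blended revenue per ad dollar (overall ROAS)."""
    return total_revenue / total_ad_spend

def cac(total_ad_spend: float, new_customers: int) -> float:
    """Blended customer acquisition cost across all channels."""
    return total_ad_spend / new_customers

# Example month: $500k revenue, $100k blended ad spend, 2,000 new customers
print(mer(500_000, 100_000))  # 5.0  -> $5 of revenue per $1 of ad spend
print(cac(100_000, 2_000))    # 50.0 -> $50 to acquire each new customer
```

Because both ratios blend all channels together, they sidestep the attribution question entirely, which is exactly their appeal for a team without measurement infrastructure.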
For high-consideration products, the standard 90-day attribution window is a fiction. Libardi noted that for John Fluevog’s distinctive, expensive footwear, customers sometimes take a year to purchase. Friends still ask her about styles the brand stopped making long ago. Zagha described similar challenges at Newton Baby, where Pinterest consideration cycles routinely exceed 90 days.
Green took this in a different direction. With 120,000 live SKUs at REI, the consideration window varies from a sock to a $1,000+ kayak. Fine-tuning a single attribution window across that range is not practical. REI instead focuses on understanding the customer broadly and adjusting creative and targeting by product type, rather than trying to force one attribution window across everything.
The closing exchange was the most revealing. When an audience member asked the panelists how often they feel they actually know where truth is in their data, the answers were uniformly humble.
Zagha: “There’s no true north. I mean, we wouldn’t have jobs.”
Green described telling his team that the budget will be wrong the week after they set it, because something new will have been learned. The discipline is not in arriving at certainty. It is in consistently using everything available today to move forward. Libardi framed it most simply: “I think the true north is actually how much money did we make.”
The panel made one thing clear: the measurement question has no stable answer. Incrementality is the best foundation, but it decays fast. MMM works only when it is calibrated by actual causal evidence. Platform reporting is structurally self-interested. Post-purchase surveys are directional at best and dangerously misleading at worst.
The competitive advantage belongs to the team that layers imperfect tools, tests constantly, and adjusts faster than the data ages.