The (un)Common Logic Approach to Data-Driven Marketing
A few summers ago, a B2B SaaS leader asked me to diagnose why their paid search spend had doubled while qualified pipeline flatlined. On paper, the metrics looked healthy. Click through rate was up, conversion rate held steady, and cost per lead hovered near the forecast. But the sales team reported fewer deals and longer cycles. We pulled six months of CRM data, matched ad touchpoints at the contact level, and ran a simple cohort analysis by first click. The picture changed fast. An algorithmic bid strategy chased low quality, top of funnel keywords that inflated lead counts, while the keywords that usually brought in buying committees were capped by an overly tight target CPA. The average deal size on the “efficient” leads was 68 percent lower, and win rate fell by more than half. The system had been optimized to the wrong outcome.
That story captures the heart of the (un)Common Logic approach. Being data driven is not about staring harder at dashboards. It is about asking better questions, defining the right units of success, and designing feedback loops that reward the behaviors you actually want. It is also about judgment, the kind you earn by shipping campaigns, missing targets, learning where the data lies to you, and building guardrails so it lies less often.
What data-driven marketing really means
Most teams say they are data driven, then default to channel metrics that are easy to fetch and tidy to present. The list is familiar: impressions, clicks, view rates, cost per whatever. These are useful as instruments, like knowing your car’s RPM when you merge onto the highway. But you would not drive by the tachometer alone. True data-driven work uses tactical metrics to serve a commercial narrative. You connect the dots from creative to audience to offer to pipeline to cash flow, then set constraints in the system that reflect this end to end view.
The (un)Common Logic posture adds a twist. We accept that marketing systems contain noise, lag, and bias, and we plan around those realities. We do not outsource strategy to an algorithm, and we do not worship a single model of attribution. We invest in understanding how a given input changes meaningful business output, even when the evidence arrives on a delay. Then, we choose the simplest model that captures what matters, and we pressure test it in the wild.
The trap of convenient metrics
If you have ever set a global target CPA and let it govern every keyword, audience, and creative combination, you have seen convenience outmuscle sense. Targets become ceilings and floors, not levers. The platform will happily find pockets of cheap conversions that look good on reports and perform poorly in revenue. Likewise, strict last click attribution makes brand search look heroic and top of funnel look useless. Both results are predictable artifacts of the measurement frame.
A retail client of mine learned this the hard way during a season when shipping delays spooked buyers. Their remarketing funnel looked efficient on last click, yet new customer growth stalled. When we matched orders to first touch and looked at customer lifetime value by entry channel, we found that first impressions on non brand search and creator content brought in customers who purchased twice within 90 days. Remarketing was closing the sale, not creating it. Spend moved upstream, and growth returned, even though blended CPA rose by 12 percent. Gross margin improved because we filled the pipe with buyers who came back.
The lesson is not to abandon remarketing. The lesson is to treat each metric as a lens with distortion. Your job is to know which lens to use for which decision.
Choosing the right unit of success
Before any bid strategy, creative concept, or segmentation work, define what success means in units that line up with company economics. For a PLG SaaS, trials that activate within seven days and hit a usage threshold might be the right proxy for revenue. For a B2B subscription with high contract value, qualified opportunities that reach stage two in the pipeline often signal real intent. For e-commerce, new buyers above a contribution margin threshold beat pure ROAS.
I once worked with a marketplace where sellers paid a listing fee and took a cut of each sale. If we optimized for sign ups, we could hit heroic CPAs by stuffing the funnel with casual listers who never uploaded an item. When we switched the north star to first fulfilled order within 30 days, bids shifted toward channels and geos that attracted existing side hustlers with inventory. Volume dipped for a month, then rebounded at a higher quality mix, and contribution margin per acquired seller rose by 22 percent. The change did not require fancy modeling. It required courage to choose a metric that mattered, then hold the line while the system recalibrated.
Data quality is not glamorous, and it wins
There is a reason veteran marketers obsess over plumbing. A single broken parameter in a URL can shadowban a whole campaign from your analytics. A misconfigured event can inflate reported conversions and train your bid strategy to chase ghosts. Data engineering may not excite a room like a flashy new concept, but it quietly determines whether your machine learns or misleads.
Treat the tracking plan like a product. Assign an owner, publish a spec, version it, and test it. Audit naming conventions, ensure consistent IDs across systems, and document how each event is fired. When a platform rolls out a new conversion schema or privacy setting, do not accept defaults. Map what the change means for your funnel, test it in a sandbox, and review logs during rollout.
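To make "treat the tracking plan like a product" concrete, here is a minimal sketch of what a versioned spec and an automated check against it could look like. The event names, properties, and owner are hypothetical, and a real plan would live wherever your team already keeps its specs.

```python
# A minimal sketch of a versioned tracking plan check. The plan has an owner and a
# version; every incoming event is validated against it before it reaches reporting
# or bid strategies. Event names and properties here are hypothetical.

TRACKING_PLAN = {
    "version": "2024-03-01",
    "owner": "analytics@example.com",
    "events": {
        "trial_started": {"required": ["account_id", "plan", "utm_source"]},
        "demo_booked": {"required": ["account_id", "contact_id", "campaign_id"]},
        "first_order_fulfilled": {"required": ["account_id", "order_value"]},
    },
}

def validate_event(name: str, properties: dict) -> list[str]:
    """Return a list of problems; an empty list means the event matches the spec."""
    spec = TRACKING_PLAN["events"].get(name)
    if spec is None:
        return [f"unknown event '{name}' (not in plan v{TRACKING_PLAN['version']})"]
    return [f"{name}: missing required property '{field}'"
            for field in spec["required"]
            if properties.get(field) in (None, "")]

# A misconfigured event surfaces immediately instead of training bids on ghosts.
print(validate_event("demo_booked", {"account_id": "a-123", "campaign_id": "c-9"}))
# -> ["demo_booked: missing required property 'contact_id'"]
```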
Teams that do this kind of grunt work often look lucky. Their experiments converge faster because the noise floor is lower. Their budgets recover quicker from a platform bug because they spot the deviation within hours, not weeks. This is the quiet discipline behind an (un)Common Logic mentality.
Modeling for incrementality, not just attribution
Attribution tells you how credit is assigned. Incrementality tells you what moved because you acted. Both matter, but only one pays your salary. When you rely on attribution alone, you can wind up rewarding touches that harvest demand rather than create it. When you introduce incrementality testing, even in scrappy forms, you begin to see which levers change outcomes for new customers, not just who showed up at the finish line.
You do not need a PhD to start. Geo splits, holdouts, time series with covariates, or simple on off tests around seasonal peaks can reveal signal. One CPG brand I worked with allocated 10 percent of markets as rolling holdouts for connected TV. Over two quarters, markets with exposure showed a 5 to 8 percent lift in branded search share and a measurable bump in retail sell through during promo windows. The brand maintained CTV spend even when platform reported ROAS looked underwhelming because they understood where the lift truly showed up.
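If you want a sense of how scrappy the first read can be, here is a minimal difference-in-differences sketch for a geo holdout. The markets and numbers are made up for illustration, not data from the CPG example, and a real read would add confidence intervals and a pre-period balance check.

```python
# A minimal difference-in-differences sketch for a geo holdout test, assuming an
# outcome per market for a pre period and a test period. Markets and numbers are
# illustrative only.

from statistics import mean

# market: (is_exposed, pre_period_outcome, test_period_outcome)
markets = {
    "austin": (True, 1000, 1130),
    "denver": (True, 1200, 1340),
    "tampa":  (True,  900, 1010),
    "omaha":  (False, 1100, 1120),
    "boise":  (False,  850,  870),
    "tucson": (False,  950,  980),
}

def pct_change(rows):
    pre = mean(r[1] for r in rows)
    post = mean(r[2] for r in rows)
    return post / pre - 1

exposed = [v for v in markets.values() if v[0]]
holdout = [v for v in markets.values() if not v[0]]

# Lift is the change in exposed markets over and above the change in holdouts,
# which nets out seasonality and promotions that hit every market at once.
lift = pct_change(exposed) - pct_change(holdout)
print(f"Incremental lift attributable to exposure: {lift:.1%}")
```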
Multi touch attribution still has a place. We use it to allocate investment within a channel or to spot under supported touchpoints that carry weight in the path to purchase. But when budget decisions get serious, we lean on incrementality evidence and modeled reach, then treat attribution as a directional guide inside the sandbox.
Experimentation as an operating system
Too many teams run experiments as sporadic stunts. The calendar dictates tests rather than hypotheses. The control is half hearted, or the sample size collapses under impatience. In an (un)Common Logic practice, experimentation is the operating system. It is routine, it is documented, and it respects math.
A good test plan answers three questions. What decision will we make based on the result, what magnitude of effect do we care about, and how much data do we need to detect that effect with confidence? Sometimes the right call is to run a small pilot that only answers whether something is viable at all. Other times you want to push a mature tactic through a tight A/B split because your margin structure changed and you need to retune bids. Either way, write the decision rule before you launch. You will be kinder to your future self.
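The third question is the one teams skip most often. A back-of-the-envelope sample size check answers it before the budget is committed. The sketch below uses a standard two-proportion formula with assumed baseline and lift numbers; swap in your own before you trust the output.

```python
# A minimal sample size sketch for an A/B split on conversion rate. The baseline
# rate, minimum detectable lift, alpha, and power are assumptions to be replaced
# with your own decision rule.

from statistics import NormalDist
from math import ceil

def sample_size_per_arm(baseline_rate, min_detectable_lift, alpha=0.05, power=0.8):
    """Visitors needed in each arm to detect a relative lift on a conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_power * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Example: a 3 percent baseline conversion rate and a 15 percent relative lift
# you would actually act on requires roughly 24,000 visitors per arm.
print(sample_size_per_arm(0.03, 0.15))
```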
Expect tests to fail, and harvest value from those failures. A DTC apparel brand ran a creative series that reduced CPA by 18 percent on prospecting, but cohort analysis showed lower repeat purchase rates. The brand shelved the campaign on that basis and kept the audience learnings that drove initial efficiency. That kind of tradeoff is only visible when you pick the right evaluation window and refuse to declare victory too soon.
The gritty middle of the funnel
Everyone loves talking top of funnel storytelling and bottom funnel conversions. The middle is where clarity goes to die. It is also where you can win by being specific. Start with the jobs people are trying to get done between awareness and action. Are they comparing vendors, seeking reassurance on risk, or trying to understand fit for their edge cases? Map content and interactions to those jobs, then measure progress with proxies that make sense.
For a cybersecurity client, we found that prospects who engaged with a specific threat simulation tool on the site were 3 times more likely to book a demo. That insight reshaped the nurture program. We moved budget from one size fits all ebooks to targeted traffic for accounts that fit two risk profiles, then put the simulation tool front and center in the journey. Demo volume increased modestly, but qualified pipeline surged, and sales cycle length shrank by 21 days.
When you measure the middle, avoid vanity. Time on page is not a goal. Treat qualitative feedback, sales call notes, and user research as first class data. A pattern in lost deal reasons will beat a thousand heatmaps.
Creative is data too
Marketers sometimes speak as if creative were mystical and data were mechanical. The best teams treat creative choices as hypotheses and treat data as part of the craft. A credible value proposition, a human voice, and a clear ask all travel well across channels, yet the execution details that turn a message into response are specific to context.
When a fintech company targeted small business owners, we found that creative featuring real invoices and cash flow charts outperformed abstract branding by a wide margin in social feeds. The difference was not only click through. Downstream, accounts from those ads connected bank data at higher rates, a crucial activation step. The insight shaped not only ad creative but also onboarding screens and the way sales framed the first call. That is the kind of loop you want, where creative proof points bounce forward into the product and back into marketing.
If your creative process produces only a couple of assets per flight, you will learn slowly. If it produces dozens without a thesis, you will drown in noise. Aim for a middle ground where each asset has a purpose and a prediction attached. Then add a post mortem ritual where you mine not just winners but patterns across winners and losers.
The channel mix and its edges
Channel allocation is a function of reach, intent, cost, and control. Paid search offers high intent and fast feedback, but competition and brand bidding dynamics can warp costs. Social brings reach and storytelling, with more volatile performance and creative dependency. Affiliate and partnerships scale credibility but introduce channel conflict and risk of cannibalization. Email and owned channels generate the cheapest repeat engagement if you respect your list and keep your promises.
Edge cases deserve attention. Branded search looks like the sweetest fruit, yet overpaying for your name when you own the top organic slot and have a loyal base can dilute returns. Meanwhile, stepping into marketplaces or retail media forces you to consider whether the incremental reach offsets any erosion of direct relationships. The (un)Common Logic view is to test the edge cases with guardrails, instrument them tightly, and be ready to move in or out quickly.
I have seen young brands buy outdoor placements that seemed indulgent until we traced a spike in branded search and city level sales in the weeks following installations. I have also seen brands starve affiliate programs because last click rules made them look unprofitable, then regret it when new customer growth slowed. The judgment call depends on how each channel interacts with your funnel and whether you can prove it changes buyer behavior.
Structuring teams and rituals around outcomes
Tools do not fix broken incentives. If your media team is rewarded for cheap CPAs while sales cares about enterprise deals, you will fight each other in every planning meeting. Aligning goals starts with shared definitions. Marketing qualified leads mean nothing unless sales agrees on what qualifies and the CRM enforces it. Report the same metrics to leadership that you use to run the team. Nothing erodes trust faster than a pretty executive dashboard that contradicts sales reality.
Rituals help. A weekly growth review that includes marketing, sales, product, and analytics can surface blind spots while they are small. The best of these meetings are short, rooted in a standard set of charts, and focus on decisions, not theater. Rotate the owner of the narrative. When sales tells the story of what they see on the ground, marketing hears nuances that no dashboard will show.
One client rewired its process by moving a senior analyst into each channel pod as a first class member, not a service desk. Those analysts helped craft tests, defined success metrics upfront, and pushed back when a desired read could not be achieved with the available data. Within two quarters, test velocity increased and false positives dropped because the technical voice was embedded at the source.
What a practical stack looks like
I often get asked which tools to use. The answer depends on your size, constraints, and in house skills. The common thread is to favor interoperability and auditability. If a platform locks your data into a black box, be careful. If your measurement depends entirely on a vendor’s view, diversify.
A scrappy but capable setup for a midmarket team might include a central warehouse with event data piped in from web and app, a reverse ETL tool to power audiences back into ad platforms and CRM, a lightweight BI layer for exploration, and a server side conversion approach to improve signal quality and privacy compliance. For experimentation, a feature flagging tool and a habit of instrumented rollouts often beat overengineered testing suites that few people use.
Do not chase stack perfection. Aim for a setup that captures the key events cleanly, lets you join data sets on stable identifiers, and empowers marketers to pull their own numbers with guardrails. As you grow, you can layer on modeling and automation. Just keep an eye on the cost of complexity.
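As an illustration of what joining on stable identifiers buys you, here is a small pandas sketch that rolls ad touchpoints up to first touch per contact and joins pipeline value on the same contact_id. The table and column names are hypothetical stand-ins for whatever your warehouse exposes.

```python
# A minimal sketch of a first-touch to pipeline join on a stable identifier.
# Tables and column names are hypothetical.

import pandas as pd

touchpoints = pd.DataFrame({
    "contact_id": ["c1", "c1", "c2", "c3"],
    "channel": ["paid_search", "remarketing", "social", "paid_search"],
    "touch_date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-01-08", "2024-01-11"]),
})

opportunities = pd.DataFrame({
    "contact_id": ["c1", "c3"],
    "stage": ["stage_2", "closed_won"],
    "amount": [48000, 62000],
})

# Earliest touch per contact, then pipeline value joined on the stable ID and
# rolled up by entry channel.
first_touch = (touchpoints.sort_values("touch_date")
               .groupby("contact_id", as_index=False).first())
pipeline_by_entry = (first_touch.merge(opportunities, on="contact_id", how="left")
                     .groupby("channel", as_index=False)["amount"].sum())
print(pipeline_by_entry)
```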
A real sequence from zero to signal
To make this less abstract, here is a sequence I have run when stepping into a noisy account at a growth stage company.
1. Clarify the north star metric and the diagnostic metrics that ladder into it. Write the definitions where everyone can see them. Validate them with a few live examples so sales and finance nod.
2. Audit tracking, naming conventions, and conversion events. Fix the obvious leaks. Add a small number of events that capture the middle of the funnel, such as product engagement or high intent content interactions.
3. Reset bidding strategies toward the real objective. If necessary, shift to manual or portfolio bidding for a few weeks while the system relearns. Protect known winners, but do not trap yourself with too narrow targets.
4. Launch two to three high intent experiments with clear decision rules. At the same time, start one incrementality test on an upper funnel channel with a clean holdout.
5. Establish a weekly review that tells a single story from spend to outcome by cohort. Celebrate how learning improves, not just how numbers move.
Within four to six weeks, you should see more stable relationships between spend and the outcomes that matter. Within two to three months, the compounding effect of better signal, smarter creative, and sharper bidding usually becomes visible in pipeline quality and payback period.
Working with GenAI without letting it run you
Creative generation tools have changed how fast teams can draft assets, but velocity without a point of view just produces more average work. Use these tools to explore variations, to transcreate for new markets with a human editor in the loop, and to accelerate production of performance copy that you already know resonates. Do not let them flatten your voice.
On the analysis side, assistants can speed up exploratory data work and help engineer joins or checks you used to avoid because they took too long. Still, keep a human review step before a number enters the shared narrative. The risk is not that a model invents a figure out of thin air, though that happens. The deeper risk is subtle, when a plausible answer fits a familiar story and slips past your skepticism.
The (un)Common Logic stance is practical. Let machines take the toil out of work that is already well specified. Keep humans in charge of what to measure, how to decide, and when to break the rules.
Budgeting with lag and uncertainty in mind
Budget decisions are where logic gets tested by nerves. If you need a 3 month payback to satisfy cash constraints, you have to see past the lag in your funnel. A top of funnel push in January may not show its full impact until March or April. If you judge it by February revenue, you will cut too soon and train your system to favor short term harvests forever.
One approach is to build a leading indicator scorecard that predicts downstream outcomes using a small set of early signals. For a subscription app, that might be a combination of trial quality scores, activation within the first week, and early retention curves. For B2B, it might be demo to opportunity conversion by segment and stage velocity. If your early indicators go green while revenue lags as expected, hold your nerve. If they flash red, course correct quickly rather than waiting for the quarter to render its verdict.
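One lightweight way to hold your nerve is to make the scorecard explicit enough that a machine can read it. The sketch below assumes made-up signal names, targets, and weights for a subscription app; the real version would be tuned to your own activation and retention curves.

```python
# A minimal leading indicator scorecard sketch. Signals, targets, and weights are
# hypothetical; the point is a simple green/red read weeks before revenue lands.

LEADING_INDICATORS = {
    # signal: (target, weight)
    "trial_quality_score": (0.60, 0.3),
    "week_one_activation": (0.35, 0.4),
    "day_14_retention":    (0.25, 0.3),
}

def scorecard(observed: dict) -> tuple[float, str]:
    """Weighted share of targets hit; 'green' means the cohort looks on track."""
    score = sum(weight for signal, (target, weight) in LEADING_INDICATORS.items()
                if observed.get(signal, 0) >= target)
    score = round(score, 2)
    return score, "green" if score >= 0.7 else "red"

# January cohort: quality and activation on target, retention slightly behind.
print(scorecard({"trial_quality_score": 0.64,
                 "week_one_activation": 0.37,
                 "day_14_retention": 0.22}))
# -> (0.7, 'green')
```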
You can also keep a portion of budget in a flexible pool for opportunistic bets or defensive moves. When a competitor stumbles, when a channel’s CPMs drop for seasonal reasons, or when a creative theme catches fire, you want dry powder to lean in. Conversely, maintain kill criteria for tactics that fail incrementality checks even if platform numbers look good.
Culture, trust, and the willingness to change your mind
Data does not settle debates on its own. People do. If your culture punishes being wrong, you will end up with cautious plans and sandbagged forecasts. If your culture treats changed minds as growth, you will iterate faster. The most productive teams I have worked with share three habits. They write down their assumptions before they act, they review decisions with the benefit of hindsight without blame, and they make it easy for anyone to raise a hand when a number smells off.
This human layer is the real engine. The technology keeps improving, the privacy landscape keeps shifting, and channels rise and fall. What endures is the discipline to choose meaningful goals, measure them with humility, and build loops that reward the right behavior. That is where (un)Common Logic earns its name. It is not contrarian for its own sake. It is the uncommon practice of sticking to logic when the convenient path whispers otherwise.
Two brief stories from the field
A national services brand wanted to scale leads across 40 markets. They had squeezed performance from paid search and were cautious about upper funnel spend after a rough test the prior year. We proposed a city level incrementality design for online video with matched market pairs. After eight weeks, exposed markets showed a 9 percent lift in total site sessions and a 12 percent lift in form fills, but the headline surprise came from call logs. Direct calls from non branded sources rose sharply in exposed areas, especially during weekends. The team integrated call tracking into the analytics stack and repriced bids in those time slots. What looked like a soft branding play turned into a tactical engine with clear levers.
A healthcare startup faced strict compliance rules and long sales cycles. They could not cookie users freely or personalize aggressively. The initial instinct was to retreat to conferences and field sales. We took a different tack. We built content that spoke to the operational pains of their buyers, optimized for the few queries that mattered, and ran lightweight LinkedIn campaigns to specific job functions. The goal was not volume. It was to get five to ten serious prospects into conversations each month. Over two quarters, the startup booked enough high quality meetings to fill the reps’ calendars, and win rate held because the content had already done the heavy lifting on objections. Data drove the plan, but empathy for the buyer made it work.
Bringing it together
Data-driven marketing is not a style of dashboard. It is a set of decisions about what to value, how to learn, and where to place your bets. The (un)Common Logic approach asks you to slow down enough to define outcomes that reflect your business, to harden the pipes that deliver reliable signal, and to design tests that separate flattering noise from real lift. It invites creative and analysis to share a table. It rewards patience when signals lag and courage when the evidence asks you to shift spend where you cannot yet take a victory lap.
If you do this long enough, you begin to trust the loop. You see how smarter inputs compound. You catch yourself spending more time on framing the question and less time arguing over whose dashboard is right. And when the numbers move, they move in ways that finance, sales, and the customer all recognize as progress. That is the mark of a strategy grounded in logic that is happily uncommon.