The easiest tracking setup in the world
Wait, is this about some AI tracking solution? No, it is not – keep reading.
About a year and a half ago, I had a LinkedIn post go viral. It was my second viral post and still holds the record for most views and engagement in my LinkedIn history. What kind of post was it? A meme.
I stopped doing meme posts after that one – they didn't really align with my usual content. But this particular meme captured an important truth, which is exactly why I'm writing this post today.
The meme showed a simple scene: when asked who wants to measure and understand their product, everyone's hands shoot up. But when asked who wants to implement the tracking? Not a single hand. This perfectly captures half the story.
Look, I know most companies have some tracking in place to understand their product and marketing performance. The real issue is that few are willing to invest in creating a tracking setup that actually makes a difference. There's a massive gap between basic tracking and having a system that can truly help you grow your business, improve marketing, and understand how people are really using your product.
The easiest tracking setup in the world
The simplest tracking setup you can do is just implementing the standard tracking SDK that comes with most analytics tools. It tracks page loads, and the smarter ones even work with single-page applications – where technically there aren't new page loads, but the SDK still catches when users navigate to new pages and sends those events to the system.
But what does this basic setup actually tell us about how our business or product is performing? Sometimes, it might be enough. Take a blog, for instance. If I just want to know what content people are reading, this simple tracking would do the job perfectly. It lines up exactly with my business goal – people reading what I write – and I can set it up in half an hour.
But let's say I expand my blog by adding email subscriptions as a way to keep readers coming back. Now my business goal has changed, and my tracking setup needs to change with it.
There is no easy business and product anymore
In the early days of digital business, everything was simpler because we were all figuring it out. Take e-commerce – it was mostly small sites with straightforward operations. People would visit a product page, add to cart, checkout, done.
Back then, we weren't too worried about measuring customer retention because marketing wasn't as complex as it is now. Understanding customer lifetime value wasn't as critical. Being digital was new, so everything was naturally less complicated than traditional businesses.
But today's digital business? Totally different story. Look at e-commerce again – now we've added subscriptions, complex discount systems, and we're pushing for account creation and email collection way earlier. We're laser-focused on building customer loyalty because we have to be. Our businesses have grown, competition is fierce, and we've had to add layer upon layer of complexity to keep up.
Digital products are even trickier. Take a software-as-a-service product – the complexity is mind-boggling. Your acquisition process might take forever before someone actually uses the product. If you offer free accounts, you're in product-led growth territory, where free users are both marketing channel and product users. You've got to analyze them from both angles.
Then you need to figure out how long it takes people to start paying. And don't get me started on subscription management and churn. You've got product churn (people stop using your product) and revenue churn (people cancel subscriptions) – they're related but different beasts.
We keep piling on these layers, making everything more complex. You can't just throw a tracking pixel or SDK at it and expect the data to magically tell you how to improve your marketing, grow your product, boost adoption, or increase subscriptions. So here's the real question: do we need more data, or do we need better data?
More complexity, more data?
Over the last 5-10 years, as our businesses and products got more complex, the typical answer was "we need more data." In most setups I work with, both marketing and product teams have dramatically increased what they track.
Marketing teams launched customer data platform initiatives to capture every possible touchpoint between people and the brand across all platforms. Product teams started measuring every core interaction to track user behavior at an incredibly granular level.
The result? Tracking setups with over 200 unique events. Remember, an event is supposed to be when someone does something. So we're tracking 200 different scenarios of people interacting with our product or marketing. And it often goes way beyond 200. When I talk to these companies, they insist all these events are essential for their data teams and analysts.
But here's the thing: when we run workshops and take a big step back to look at things from a business and product perspective, we usually end up needing just 20-30 events. We can streamline everything dramatically using one simple trick (which I'll get to in a moment).
The key insight is this: when your business model and product get more complex, the answer isn't to collect more data points. The answer is to collect better, more focused data. We need to identify and measure the essential points in our product and business model that actually drive growth and success.
I'll explain the difference between this approach and tracking every app interaction in the next section.
From interactions to product and business outcomes
I'll admit it – I'm guilty of this myself. For over five years, I set up product tracking systems that obsessed over how people interact with products. Sure, I learned some tricks along the way, like using one "CTA clicked" event with different properties instead of creating separate events for every CTA in the product. This reduced the number of events, but the data still wasn't really telling us how the product and business were performing.
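As a sketch, the property-based pattern looks like this. The `track` helper, property names, and payload shape are illustrative, modeled loosely on common analytics SDKs – not any specific API:

```python
# Hypothetical track() helper in the style of common analytics SDKs
# (names and payload shape are illustrative, not a real API).
def track(event: str, properties: dict) -> dict:
    payload = {"event": event, "properties": properties}
    # a real SDK would queue and send this payload to the analytics backend
    return payload

# One generic event, distinguished by properties...
track("CTA clicked", {"cta_id": "pricing_header", "page": "/pricing"})
track("CTA clicked", {"cta_id": "signup_footer", "page": "/features"})
# ...instead of a separate event per button ("Pricing CTA clicked",
# "Footer signup CTA clicked", and so on).
```

This keeps the event count down, but as the next paragraph explains, it still only describes interactions, not outcomes.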
Eventually, I had to completely rethink my approach to product and marketing analytics. The realization hit me: tracking how users interact with my applications only tells me... how users interact with my applications. But the questions from product, marketing, and business teams are usually much bigger: Is our product successful? Are our marketing campaigns working? Is our business growing?
Just knowing how someone clicks around an application doesn't tell us if the product is successful. Think about an ATM – if I tracked every button press, I still wouldn't know if people actually accomplished what they came to do. An ATM session is successful when someone gets their money out. That's it. So really, I only need two events: ATM session started (card inserted) and money withdrawn, assuming that's the only use case.
This brings us to use cases – a much better level of analysis than tracking specific buttons or features. When people visit your website or open your app, they have a specific goal in mind. They want to accomplish something.
Take email programs. One use case might be checking for important new messages. This isn't the easiest thing to measure – though we can look at patterns, like if someone checks 3-4 times daily, we're probably serving this need well.
Another use case is replying to emails. Someone reads an email, hits reply, writes something, sends it. That's a distinct use case we can measure. And here's the key: it doesn't matter if they hit return, clicked a button, or used some other method to send it. Those details matter for UX designers, but they can measure that differently.
From a product perspective, I care about whether people are archiving emails, responding to them, or creating new ones. These actions tell me how people are actually using my product. So the first step in moving from tracking interactions (clicks) to measuring product and business outcomes is to focus on use cases. We need to create a use case map for our product to understand what jobs people are trying to get done.
What makes products and marketing successful
Let's talk about success moments – the key moments when our product or marketing delivers real value. For marketing, it can be straightforward, but often isn't.
Take a simple example: if we run a campaign to book demos, and that's its only goal, success is easy to measure. We just count how many people from that campaign booked demos. Done.
But reality is usually messier. We often run broader campaigns where success could mean booking a demo, creating an account, or something else entirely. That's why it's crucial to define what success looks like for each campaign before we launch it. We need clear metrics to measure against.
Consider podcast appearances. We might put our company leaders on different podcasts to boost visibility. Here, we need different success metrics. We might ask podcasts about download numbers to measure reach. Or we could get creative – maybe offer a special discount code for listeners, or create a podcast-exclusive ebook, then track how many people use that code or download that resource. The key is deciding upfront what success looks like.
The same goes for product success, and this is where user research becomes crucial. When we take user research seriously – doing lots of interviews and surveys – we discover why people really use our product and what makes them stick around. We learn what makes us so valuable to their daily lives that they won't switch to alternatives.
Once we understand these situations and outcomes, we can identify or create success moments in our product and map out the use cases that lead to them. Often, we'll end up with just 5-6 core use cases that really matter. These are what we need to measure.
Ultimately, we need to track two things: how many people reach these success moments, and how often they repeat them. This tells us if our product is performing well. Are we effectively guiding people to these valuable moments? Because let's face it – most products aren't intuitive enough for users to find value immediately.
So when measuring our product, we need to track both how well we guide people to their first success moment and how well we encourage them to repeat it. When users keep hitting these success moments, they stick around and happily pay for our product. That's the essential foundation of product measurement.
Were we not talking about the easiest analytics setup?
Let me be clear: the easiest tracking setup isn't about fancy auto-tracking or AI magic that turns messy data into brilliant insights. We're not there yet, and honestly, we might never be.
The secret to the easiest tracking setup is focus. We need to identify the metrics that actually tell us how our product, business, and marketing are performing. Let me show you with an example, using a product since product tracking is usually trickier.
Let's take a task management tool. New ones pop up weekly – it's a crowded market. The wrong approach would be tracking every possible interaction in the tool. That just gives us a pile of data that doesn't tell us whether we're actually making an impact or building the loyal user base we need (especially since people hop between task management tools every couple of months).
Instead, we need to focus on growth levers – specifically, success moments. These moments map to specific growth stages. When we can move people from stage A to B to C, we're more likely to convert them to subscribers.
Here's how I'd build an eight-event tracking setup for a task management product:
First, we need "account created" – our starting point. This is our baseline, our opportunity pool. Every new account is a potential subscription.
At the other end, we need "subscription created," "subscription retained," and "subscription churned." These three events tell us about our revenue health and let us calculate basic MRR (monthly recurring revenue) metrics.
Now for the tricky part: what happens in between? We've used four events to track the start and revenue – so let's choose the next four carefully.
We need to capture how users evolve, which looks different depending on your company's stage. Let's assume this is a new tool. Here's where we can blend qualitative research (like user interviews) with our quantitative data. Your product team might already have this research – if not, it's worth doing.
Here's how it typically plays out: When someone creates an account, they usually start by adding tasks to a list. So we track "task created" with properties for different list types.
We might add "task completed," but it's actually optional. As long as people keep creating tasks, we can assume they're adopting the product. "Task completed" is nice to have but not essential right now.
The next evolution is usually creating a project (basically a specialized list). We'll track "project created" or "list created." This shows someone's moved up a level – they've found task management useful and want to organize their work better.
Then comes the big one: inviting team members. Assuming this is a team-focused tool, we track "team member invited." This is huge – it means someone finds the tool valuable enough to bring in their colleagues.
So that's just three core events (plus one optional) that tell us how people adopt our platform. But what about stickiness? For this, we use these same events but look at them over time.
To define an active user, we might look for someone who, in the last 30 days, has either created three or more tasks, created a project, or invited a team member. This time dimension helps us measure retention – and we can build it using our existing events and the segmentation features most analytics tools offer.
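That active-user rule is simple enough to express directly over a raw event log. Here's a minimal sketch: the event names match the setup above, but the data shape (a list of timestamped events per user) and the function name are assumptions for illustration:

```python
from datetime import datetime, timedelta

def is_active(events, now, window_days=30):
    """Apply the active-user rule over one user's event log.

    events: list of (timestamp, event_name) tuples for a single user.
    Active = in the window, 3+ "task created", or any "project created"
    or "team member invited" event.
    """
    cutoff = now - timedelta(days=window_days)
    recent = [name for ts, name in events if ts >= cutoff]
    return (
        recent.count("task created") >= 3
        or "project created" in recent
        or "team member invited" in recent
    )

now = datetime(2024, 6, 30)
log = [
    (datetime(2024, 6, 5), "task created"),
    (datetime(2024, 6, 6), "task created"),
    (datetime(2024, 6, 20), "project created"),
]
print(is_active(log, now))  # True: a project was created inside the window
```

In practice you would express the same rule in your analytics tool's segment builder rather than in code, but the logic is identical.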
To recap, what does our easy tracking setup look like, and what can we achieve with it?
Let me be clear about what I mean by an "easy" tracking setup – it's about focus. We need to understand exactly what metrics will help us improve the user and product experience to drive growth.
A growing product needs two basic things: new users coming in to build our user base, and existing users sticking around (avoiding product churn). That's the core.
But of course, we also need to make money. So we need to look at both product metrics and revenue metrics. Four key pillars: product account creation, product retention, subscription creation, and subscription retention. Understanding these gives us a clear picture of where to invest our efforts to boost product usage, stickiness, and revenue.
For all this, we just need eight events: account created, task created, task completed (optional), project created, team member invited, subscription created, subscription retained, and subscription churned. Let me show you how to use these to measure product performance.
First analysis? Always create a global customer journey funnel from account creation to subscription. This gives us the big picture of how we're converting people over time. I usually look at this over a long period to understand totals, but it's also helpful to track conversion rates over time in a line chart.
Next, I create user cohorts to track evolution. I start with new users (accounts created in last 30 days). Then activated users – new users who've created at least three tasks, maybe a project, possibly invited a team member.
We can even create different activation levels: initially activated (basic tasks), extensively activated (created a project), and committed (invited teammates). The key is showing progression from new user to actually using and understanding the product. We might need to adjust thresholds – maybe three tasks isn't enough to show real engagement.
The most interesting cohort is active users. We might use the same criteria as activated users (tasks, projects, team invites), but we look at the last 30 days for ALL users, not just new ones. If 30 days is too long, we can switch to weekly analysis.
Then we create risk indicators. An "at-risk" cohort might be users who were active in the past 90 days but inactive for the last 30. "Dormant" users might be those active in the last 300 days but inactive for 60 days – they're likely lost, but some might come back.
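The risk cohorts reduce to simple thresholds on days since a user's last event. A sketch, using the windows from the text (where the at-risk and dormant windows overlap, at-risk wins; the function name and exact cutoffs are illustrative and should be tuned to your product):

```python
from datetime import datetime

def risk_status(last_active, now):
    """Classify a user by days since their most recent event."""
    days_inactive = (now - last_active).days
    if days_inactive < 30:
        return "active"
    if days_inactive <= 90:
        return "at risk"   # was active within 90 days, quiet for 30+
    if days_inactive <= 300:
        return "dormant"   # quiet for a long stretch, may still return
    return "lost"

now = datetime(2024, 6, 30)
print(risk_status(datetime(2024, 6, 20), now))  # active
print(risk_status(datetime(2024, 5, 1), now))   # at risk
```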
The real power comes from combining these cohorts with subscription data. Active free users are subscription candidates. Active subscribers are our business backbone. And at-risk subscribers? They need immediate attention.
With these cohorts set up, we can run deeper analyses. A revenue retention analysis using subscription events shows how well we keep paying customers. A product usage retention analysis (using active user definitions) shows product stickiness. Comparing these reveals fascinating patterns.
For dashboarding, focus on cohort movement. Track new accounts per month to measure acquisition health. Watch conversion rates: new users to activated, activated to active, and active to churned.
These metrics tell powerful stories. If active-to-churned rates spike, we need to investigate – maybe competition is pulling users away. If new user numbers grow but activation rates drop, maybe we need to pause acquisition and fix onboarding.
That's the beauty of this setup – just eight events giving us deep insights into business performance, product growth, and risk factors. Minimal tracking, maximum impact.
What comes after that?
Now, everything beyond these eight events is optional – but this basic setup can serve you well for a long time. It'll consistently tell you how your product and growth are performing. When you do expand, it's usually to understand specific steps in more detail.
For example, you might see you're getting lots of new users but only 8% become activated users. What's happening with the other 92%? This is where qualitative research shines – user interviews can reveal insights that numbers alone can't show.
On the quantitative side, you might want to track your onboarding process. That might mean adding just three more events: "onboarding started," "onboarding steps submitted," and "onboarding finished." This helps you see if people are actually completing the onboarding or dropping off.
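With those three events in place, the headline onboarding metric is just completions over starts. A minimal sketch, where the per-user event mapping is an assumed data shape:

```python
def onboarding_completion_rate(events_by_user):
    """Share of users who finished onboarding among those who started it."""
    started = sum("onboarding started" in ev for ev in events_by_user.values())
    finished = sum("onboarding finished" in ev for ev in events_by_user.values())
    return finished / started if started else 0.0

users = {
    "u1": ["onboarding started", "onboarding steps submitted",
           "onboarding finished"],
    "u2": ["onboarding started"],  # dropped off mid-onboarding
}
print(onboarding_completion_rate(users))  # 0.5
```

The "onboarding steps submitted" event (with a step property) then lets you locate exactly where the drop-off happens.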
The magic happens when you combine measurement with onboarding design. Your onboarding could collect valuable context – like asking users what problems they're trying to solve. This helps predict their likely evolution and lets you measure if they're actually achieving their goals.
Ultimately, you're gathering context to better understand how to move people from new accounts to activated users. Yes, you'll need a few more events, but we're talking about three additional ones – not fifty. The key is careful, purposeful expansion.
I'd love to hear your thoughts in the comments. Could this setup work for you? Skeptical about anything? Questions? Drop them below or reach out to me directly.