Closing the Loop: Measuring Ad Spend to Paid Subscribers in Mobile Apps
An engineering guide to connecting ad clicks to paid subscriptions under SKAN 4, AdAttributionKit, and post-ATT privacy. Event taxonomy, reference architecture, and reconciliation patterns.
Abstract
Most mobile attribution guides stop at the install. Subscription apps do not make money on installs. They make money on trials that convert to paid, and paid users that renew. This post is a walkthrough of the full measurement pipeline that connects an ad click on Meta, Google, or TikTok to a validated paid subscription event in your data warehouse. It covers SKAN 4, AdAttributionKit on iOS 17.4+ (expanded in iOS 18.4), server-side conversion APIs, RevenueCat as a source of truth, and the reconciliation job that makes finance, marketing, and product agree on numbers.
The Measurement Problem, Framed Correctly
If you ship a subscription app, your attribution stack is a distributed system with three independent clocks and three independent sources of truth. Marketing sees ROAS in Meta Ads Manager. Finance sees revenue in RevenueCat. Product sees cohorts in the warehouse. The numbers rarely match.
The job is not to pick the "right" number. The job is to build a reconciled pipeline where each consumer gets the signal they need with known error bars.
Four events matter for a subscription app:
- first_open — fires once per install, client-side
- trial_start — server-validated, encodes product and trial length
- subscribe — first paid charge after trial (or direct paid)
- renewal — n-th billing cycle, tracked separately
A shadow event, churn, closes the loop for LTV modelling.
The Post-ATT Privacy Landscape
App Tracking Transparency opt-in rates still sit around 25 to 30 percent on iOS, varying by vertical and region. Deterministic attribution is gone for the majority of users. What replaces it is a patchwork:
- SKAdNetwork 4: three postback windows at 0-2, 3-7, and 8-35 days after install. Fine-grained 6-bit conversion values only in window 1. Coarse-grained (low/medium/high) in windows 2 and 3. Privacy thresholds null out small campaigns.
- AdAttributionKit, introduced in iOS 17.4 and expanded in iOS 18.4: configurable attribution windows, re-engagement overlap, country codes, and developer postbacks. Coexists with SKAN with no deprecation timeline.
- Android: Google Play Install Referrer remains deterministic for now. Privacy Sandbox for Android is on a multi-year rollout.
You need both SKAN and AAK on iOS. You need Install Referrer on Android. And for the 25 percent who consented to ATT, deterministic attribution through an MMP or direct SDK is still worth capturing.
Event Taxonomy for Subscription Apps
A clean event taxonomy is the one decision that will save you the most pain later. Keep it small, keep it canonical, and make sure every event has an idempotency key.
Hash user identifiers with SHA-256 before sending to ad platforms. Always capture click IDs (fbclid, gclid, ttclid) at install via deferred deep linking and store them on the user record. Without click IDs, server-side conversion APIs degrade to coarse matching.
The deduplication contract with Meta CAPI is strict: SDK and server events must share the same event_id and event_name within a 48-hour window, or you will double-count. Use event_id = hash(transaction_id + event_name).
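Both the identifier hashing and the dedup contract can be sketched in a few lines. This assumes Node's crypto module; the `:` separator inside the hash input is an arbitrary choice here, not part of any platform spec.

```typescript
import { createHash } from "node:crypto";

// SHA-256 hash of a normalized identifier, as ad platforms expect.
// Trim and lowercase before hashing so both sides produce the same digest.
export function hashIdentifier(raw: string): string {
  return createHash("sha256").update(raw.trim().toLowerCase()).digest("hex");
}

// Deterministic event ID shared by the client SDK and the server path.
// Both sides derive it from the same transaction, so Meta CAPI can
// deduplicate the pair within its 48-hour window.
export function eventId(transactionId: string, eventName: string): string {
  return createHash("sha256")
    .update(`${transactionId}:${eventName}`)
    .digest("hex");
}
```

The key property is that the ID is derived, not generated: neither path needs to coordinate with the other at runtime.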
SKAN 4 Conversion Value Schema
You get 6 bits for fine-grained values. That is 64 slots for everything you want to encode about a user in the first 48 hours. Most teams waste this budget.
A schema that works for subscription apps:
- Bits 0-2 (8 values): funnel stage — opened, onboarded, paywall_seen, trial_start, subscribe, renewal, reserved, reserved
- Bits 3-5 (8 values): revenue bucket in USD — 0, <5, 5-10, 10-20, 20-50, 50-100, 100-200, 200+
Never downgrade a conversion value. SKAN enforces a monotonic increase. If you drop from trial_start back to paywall_seen, Apple drops the update silently.
On small campaigns, fine-grained values get nulled by Apple's privacy threshold. Plan your bidding strategy around coarse values as the base case, not the exception.
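The encoder for that schema is a few bit operations. A sketch, with the stage order and bucket boundaries from above (both are choices, not SKAN requirements):

```typescript
// Funnel stages for bits 0-2 (slots 6 and 7 reserved) and revenue
// buckets for bits 3-5, matching the schema above.
const FUNNEL_STAGES = [
  "opened", "onboarded", "paywall_seen",
  "trial_start", "subscribe", "renewal",
] as const;
type FunnelStage = (typeof FUNNEL_STAGES)[number];

// Upper bounds in USD: bucket 0 = $0, buckets 1-6 by bound, bucket 7 = $200+.
const REVENUE_BOUNDS = [0, 5, 10, 20, 50, 100, 200];

function revenueBucket(usd: number): number {
  if (usd <= 0) return 0;
  const i = REVENUE_BOUNDS.findIndex((b, idx) => idx > 0 && usd < b);
  return i === -1 ? 7 : i;
}

export function encodeConversionValue(stage: FunnelStage, usd: number): number {
  const stageBits = FUNNEL_STAGES.indexOf(stage); // bits 0-2
  return stageBits | (revenueBucket(usd) << 3);   // bits 3-5
}

// SKAN drops downgrades silently, so never send a lower value.
export function nextConversionValue(
  current: number, stage: FunnelStage, usd: number,
): number {
  return Math.max(current, encodeConversionValue(stage, usd));
}
```

Clamping with `nextConversionValue` enforces the monotonicity rule in your own code instead of relying on Apple's silent drop.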
Reference Architecture
Three parallel paths carry signal from the device to the ad platforms. The device itself sends SKAN and AAK postbacks. The MMP carries deterministic attribution for consented users. The backend sends server-side conversion events through Meta CAPI, Google Ads API, and TikTok Events API. Each path has different latency, different accuracy, and different privacy trade-offs.
MMP vs Direct SDK Integration
A Mobile Measurement Partner aggregates SKAN postbacks across networks, handles deterministic attribution for consented users, routes cost data, filters fraud, and unifies ROAS. The question is whether you need it.
MMP pricing is typically in the low-cents-per-install range at volume and is negotiated by contract. For a high-volume app, that is real money. Direct SDK integration saves the fee but pushes SKAN aggregation, postback routing, and fraud filtering onto your engineering team. The break-even depends on install volume and how many networks you run.
Implementation: RevenueCat Webhook Fan-Out
RevenueCat sits in the middle of the pipeline as the subscription source of truth. Its webhook is the trigger that fans out to every ad platform. The core of the handler covers three concerns: deduplication, Authorization header verification, and parallel fan-out. RevenueCat webhooks use a shared-secret Bearer token in the Authorization header, not HMAC signatures, so the check is a simple string comparison.
A few details that are easy to miss. The app_data object Meta CAPI expects for mobile events must include advertiser_tracking_enabled, application_tracking_enabled, bundle ID, and app version. Without it, Meta falls back to coarse attribution. Note that Meta discontinued the Offline Conversions API in May 2025; all mobile app events now flow through the main Conversions API. For Google Ads, the Google Ads API supports server-side upload via UploadClickConversionsRequest with a gclid for app conversions. Enhanced conversions with hashed user data is a separate, complementary option that improves match quality.
Use Promise.allSettled, not Promise.all. One flaky ad network should not drop events for the others. Push failures to a dead-letter queue and retry with exponential backoff.
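A minimal sketch of that handler core, assuming Node. The platform senders and dead-letter sink are injected stand-ins; real implementations would wrap Meta CAPI, Google Ads API, and TikTok Events API clients, and the in-memory dedup set would be a durable store.

```typescript
import { timingSafeEqual } from "node:crypto";

type RevenueCatEvent = { id: string; type: string; app_user_id: string };

// Replace with Redis or a warehouse table in production.
const seen = new Set<string>();

export async function handleWebhook(
  authHeader: string | undefined,
  event: RevenueCatEvent,
  senders: Array<(e: RevenueCatEvent) => Promise<void>>,
  deadLetter: (e: RevenueCatEvent, err: unknown) => void,
  secret: string,
): Promise<number> {
  // RevenueCat sends a shared-secret Bearer token, not an HMAC signature.
  const expected = `Bearer ${secret}`;
  if (!authHeader || authHeader.length !== expected.length ||
      !timingSafeEqual(Buffer.from(authHeader), Buffer.from(expected))) {
    return 401;
  }

  // Idempotency: RevenueCat retries on non-2xx, so drop replays by event id.
  if (seen.has(event.id)) return 200;
  seen.add(event.id);

  // allSettled: one flaky network must not drop events for the others.
  const results = await Promise.allSettled(senders.map((s) => s(event)));
  for (const r of results) {
    if (r.status === "rejected") deadLetter(event, r.reason);
  }
  return 200;
}
```

Returning 200 on a replay matters: a non-2xx would just make RevenueCat retry the same event again.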
Implementation: StoreKit 2 Transaction Listener
On iOS, StoreKit 2 delivers transactions as JWS payloads. The verification step is mandatory. Anything that arrives unverified should be treated as spoofed.
The offerType field is how you distinguish an introductory trial from a direct paid purchase. A critical detail: the signed JWS lives on the outer VerificationResult enum as jwsRepresentation, not on the decoded Transaction value. Transaction.jsonRepresentation is plain decoded JSON and is not signed; sending that to your backend gives you no cryptographic guarantee. Read result.jwsRepresentation before unwrapping the enum, then validate the signature with Apple's public key on the backend before trusting the payload. RevenueCat handles this for you if you use it; if you build direct, use App Store Server Notifications v2 for renewal events.
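The backend check can be sketched with Node's crypto module. This version assumes you already hold a trusted public key; a production verifier would instead extract the leaf certificate from the JWS x5c header and validate its chain up to Apple's root CA before using it.

```typescript
import { verify, type KeyObject } from "node:crypto";

// Verify an ES256 compact JWS (header.payload.signature) against a known
// public key and return the decoded payload, or null if anything fails.
export function verifyJws(
  jws: string,
  publicKey: KeyObject,
): Record<string, unknown> | null {
  const parts = jws.split(".");
  if (parts.length !== 3) return null;
  const [header, payload, signature] = parts;
  const ok = verify(
    "sha256",
    Buffer.from(`${header}.${payload}`),
    // JWS ES256 signatures are raw r||s, not DER, hence ieee-p1363.
    { key: publicKey, dsaEncoding: "ieee-p1363" },
    Buffer.from(signature, "base64url"),
  );
  if (!ok) return null;
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```

Only the return value of this function should ever reach attribution logic; a bare `JSON.parse` of the payload gives you the same bytes with no guarantee of origin.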
Predictive LTV and tROAS
Short-window ROAS is the only signal fast enough for algorithmic bidding. Ad platforms need revenue events within 24 to 72 hours to optimize bids. Waiting for actual paid conversions after a 7-day trial is too late.
The fix is predictive LTV. A simple model: base rate of trial-to-paid conversion, multiplied by expected renewals, discounted by time. Feed that number as revenue to ad platforms on trial_start instead of waiting for actual paid.
The risk is a feedback loop. The ad platform optimizes against your prediction. Your prediction drifts. You optimize against drifted data. Recalibrate pLTV against actual revenue on a rolling 30-day window, and alert when the gap exceeds 15 percent.
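A sketch of both pieces. The geometric retention model, 24-month horizon, and 15 percent threshold are illustrative assumptions, not benchmarks.

```typescript
// Minimal pLTV: trial-to-paid rate × sum of discounted expected renewals.
// monthlyPrice should already be net of store fees.
export function predictedLtv(
  trialToPaid: number,      // e.g. 0.4
  monthlyPrice: number,     // net revenue per billing cycle
  monthlyRetention: number, // probability a payer renews each month
  months = 24,              // horizon; further cycles are ignored
): number {
  let ltv = 0;
  for (let m = 0; m < months; m++) {
    ltv += monthlyPrice * Math.pow(monthlyRetention, m);
  }
  return trialToPaid * ltv;
}

// Recalibration guard: flag when predictions drift from rolling actuals.
export function driftExceeded(
  predicted: number,
  actual: number,
  threshold = 0.15,
): boolean {
  if (actual === 0) return predicted !== 0;
  return Math.abs(predicted - actual) / actual > threshold;
}
```

The output of `predictedLtv` is what you send as the revenue parameter on trial_start; `driftExceeded` is the alert condition on the rolling 30-day comparison.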
Metrics That Matter
CPI and CPA are leading indicators. They tell you nothing about the health of the business. CAC, LTV, LTV:CAC ratio, and payback period are the business metrics. A healthy consumer subscription app targets LTV:CAC above 3:1 and payback under 12 months. Trial-to-paid conversion rate for consumer subs typically lands between 30 and 50 percent.
Expect a 20 to 40 percent gap between blended ROAS (what your warehouse says) and platform-reported ROAS (what Meta says). The gap is not a bug. It is the sum of SKAN coarse bucketing, privacy thresholds, and cross-platform overlap. Report it, do not hide it.
What Not to Do
A few patterns that will cost you months if you adopt them:
- Double-counting trial_start because the SDK and the backend both fire without a shared event_id. Always generate the event ID once and pass it through both paths.
- Treating SKAN null conversion values as zero. A null means "privacy threshold not met," not "zero revenue." Bucket them into an explicit unknown category and model them separately.
- Sending gross revenue to ad platforms. Subtract store fees (15 to 30 percent) and expected refunds (2 to 5 percent for consumer subs) before feeding revenue back. Otherwise your tROAS bidding over-spends.
- Forgetting currency conversion. RevenueCat normalizes to USD; ad platforms may report in account currency. Pick one canonical currency in the warehouse.
- Timezone drift. RevenueCat is UTC. Meta and Google Ads report in account timezone. Always store events in UTC and convert only at the presentation layer.
- Acting on unverified App Store Server Notifications. Validate the JWS signature before doing anything with the payload.
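The gross-to-net adjustment above is one line of code; the default rates here are illustrative, so substitute your actual commission tier and observed refund history.

```typescript
// Net revenue to feed back to ad platforms: strip the store commission
// first, then expected refunds. Defaults are illustrative, not advice.
export function netRevenue(
  gross: number,
  storeFeeRate = 0.3,  // 0.15 on small-business tiers
  refundRate = 0.03,
): number {
  return gross * (1 - storeFeeRate) * (1 - refundRate);
}
```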
Reconciliation: Closing the Loop
The nightly reconciliation job is what makes finance trust the numbers. Join the MMP attribution table with the RevenueCat events table on original_transaction_id. Roll up to campaign and cohort level. Apply the rolling 35-day SKAN adjustment window. Report the unattributed bucket explicitly.
A workable warehouse model:
- raw_mmp_attributions — one row per install, attributed source
- raw_revenuecat_events — one row per subscription event
- reconciled_users — join key is app_user_id; brings together install source and subscription lifecycle
- cohort_revenue — daily cohort × source, with rolling LTV actuals
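An in-memory sketch of the nightly rollup over those table shapes. In practice this runs as warehouse SQL, but the join logic is the same; the row types here are simplified stand-ins.

```typescript
// Simplified rows mirroring raw_mmp_attributions and raw_revenuecat_events.
type Attribution = { app_user_id: string; source: string; install_date: string };
type RcEvent = {
  app_user_id: string;
  type: "trial_start" | "subscribe" | "renewal";
  usd: number;
};

// Roll revenue up to (install_date, source), reporting the unattributed
// bucket explicitly instead of silently dropping unmatched users.
export function cohortRevenue(
  attrs: Attribution[],
  events: RcEvent[],
): Map<string, number> {
  const byUser = new Map(attrs.map((a) => [a.app_user_id, a]));
  const out = new Map<string, number>();
  for (const e of events) {
    const a = byUser.get(e.app_user_id);
    const key = a ? `${a.install_date}|${a.source}` : "unattributed";
    out.set(key, (out.get(key) ?? 0) + e.usd);
  }
  return out;
}
```

The explicit "unattributed" key is the point: it is the line finance will ask about, so it should appear in the output rather than vanish in the join.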
Finance reads cohort_revenue. Marketing reads the MMP dashboard. Product reads reconciled_users. They will still disagree on edge cases. That is fine. Document the known deltas and move on.
Conclusion
The measurement pipeline for a subscription app is a distributed system problem disguised as a marketing problem. You have three clocks, three sources of truth, and three consumers with different tolerances for latency and accuracy. The job is not to find the "real" number. The job is to build a reconciled pipeline where each consumer gets signal with known error bars, and where the SKAN schema, the webhook fan-out, and the warehouse model agree on what an event means.
Start with event taxonomy. Get deduplication right. Then layer on SKAN, CAPI, and reconciliation. The teams that skip taxonomy and jump straight to dashboards spend the next year debugging numbers that do not match.
References
- Apple: Receiving postbacks in multiple conversion windows - Official SKAN 4 postback window mechanics
- Apple: AdAttributionKit and SKAdNetwork interoperability - How SKAN and AAK coexist on iOS 17.4+
- Apple: App Store Server Notifications v2 - Server-side subscription lifecycle events
- Apple: Meet StoreKit 2 (WWDC) - JWS-based Transaction API
- Google: Play Install Referrer API - Android install attribution source
- Google Ads: Upload app conversions - Enhanced conversions for apps
- Meta: Conversions API for App Events - Server-side mobile app event schema
- Meta: App Events API for mobile - Deduplication contract with SDK events
- TikTok: Events API for app - Server-side attribution for TikTok Ads
- RevenueCat: StoreKit 2 overview - JWS validation and SDK 5.0 transition
- RevenueCat: Webhooks documentation - Subscription event fan-out source
- AppsFlyer: SKAdNetwork solution guide - MMP-side SKAN aggregation
- Adjust: How SKAdNetwork 4 works - Postback windows and conversion value mechanics
- Singular: WWDC 2025 AdAttributionKit recap - iOS 18.4 AAK updates
- Apple: User Privacy and Data Use (ATT) - App Tracking Transparency requirements