Three Brands, Dual Pipelines, £8.5M+ in Attributed Revenue: A 3.5-Year EdTech Growth Engagement
This is the longest and most complex engagement of my career. Three and a half years operating as Growth Director across three related EdTech brands — managing £2.9M+ in advertising spend, generating £8.5M+ in attributed revenue, building both sides of a two-sided marketplace, navigating simultaneous AI disruption and regulatory tightening, and ultimately training my own replacements and handing over the entire system.
I'm going to write about this engagement in depth because it contains almost every pattern I've ever needed to deploy: zero-to-one infrastructure builds, dual-pipeline architecture, cross-brand portfolio management, attribution in environments where no measurement existed, CRO under pressure, and the specific discipline of building systems designed for handover.
The Starting Position
Three EdTech brands under common ownership, each serving a distinct segment of the UK private tutoring market:
Brand A — The Online Marketplace. High-volume, two-sided marketplace connecting students with tutors across subjects and levels. The demand side (students/parents) needed high-volume acquisition at manageable CPAs. The supply side (tutors) needed steady recruitment of qualified applicants. This was the volume engine of the portfolio.
Brand B — The Premium Agency. Traditional tutoring agency model focused on university-level, A-level, GCSE, and dissertation support. Higher ticket prices, longer conversion consideration cycles, and a more relationship-driven sales process. This was the value engine of the portfolio — lower volume but consistently higher ROAS.
Brand C — The Admissions Consultancy. The highest-value, lowest-volume brand — premium admissions consulting at the top end of the market. Longest sales cycles, highest customer lifetime value, and the most complex qualification logic.
The Zero-Attribution Problem
When I arrived in July 2022, no revenue attribution existed across any of the three brands. Marketing spend was allocated based on lead volume and executive judgment. The connection between pounds spent on advertising and pounds received in revenue had never been measured.
This is worth emphasising because it's the single most common infrastructure gap I encounter in companies of this size. They're spending real money on marketing — in this case, significant six-figure annual budgets — but they can't tell you which channels produce revenue and which produce noise. They know how many leads they generated. They don't know which leads became paying customers, through which channel, at what cost relative to the revenue they generated.
The absence of attribution doesn't just mean you're flying blind — it means you're optimising for the wrong things. When you can only measure lead volume, you optimise for lead volume. When you can measure revenue per channel, you optimise for revenue. These produce fundamentally different strategic decisions.
The Dual Disruption
Two forces were simultaneously reshaping the EdTech market during this engagement:
AI Disruption. ChatGPT launched in November 2022 — five months into my engagement. Within months, AI tutoring tools began commoditising the personal tutoring market. Students who previously paid £40-80/hour for private tutoring now had free access to AI-powered academic assistance that was, in many contexts, good enough. This didn't kill the tutoring market, but it fundamentally changed the value proposition of human tutoring and the competitive landscape for student acquisition.
Regulatory Tightening. Evolving UK regulations around agency versus marketplace classification created significant compliance constraints. The distinction between operating as an agency (with employer responsibilities toward tutors) and operating as a marketplace (facilitating connections between independent tutors and students) had real implications for employment law, tax treatment, insurance requirements, and marketing claims.
These disruptions didn't arrive sequentially — they overlapped. Strategy had to adapt to both simultaneously, and the marketing infrastructure I was building had to be flexible enough to accommodate strategic pivots without requiring a structural rebuild.
The Attribution Build
The most important deliverable of the entire engagement was attribution infrastructure. Everything else — campaign strategy, CRO, team training, reporting — depended on having trustworthy data connecting spend to revenue.
Defining the Attribution Model
Attribution in a tutoring marketplace is more complex than in a straightforward SaaS business. In SaaS, the attribution chain is relatively clean: campaign → lead → demo → trial → paid subscription. In a tutoring marketplace, the chain includes:
1. Marketing touchpoint — ad click, organic search, referral
2. Lead capture — form submission, phone call, live chat
3. MQL transition — meets qualification criteria (genuine interest, correct geography, appropriate subject match)
4. SQL transition — sales-verified, budget-confirmed, ready to be matched
5. Tutor matching — connected with an appropriate tutor
6. Trial lesson — first paid session
7. Ongoing student — recurring revenue relationship
8. Revenue attribution — total revenue generated by this customer over their lifetime
Most marketing attribution stops at step 2 or 3. I built attribution through to step 8.
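As a sketch, the eight stages above map naturally to an ordered lifecycle. The stage names below are illustrative labels, not the actual CRM property values used in the engagement:

```python
from enum import IntEnum

class LifecycleStage(IntEnum):
    # Ordered stages of the tutoring-marketplace attribution chain.
    TOUCHPOINT = 1   # ad click, organic search, referral
    LEAD = 2         # form submission, phone call, live chat
    MQL = 3          # meets documented qualification criteria
    SQL = 4          # sales-verified and budget-confirmed
    MATCHED = 5      # connected with an appropriate tutor
    TRIAL = 6        # first paid session
    ONGOING = 7      # recurring revenue relationship
    ATTRIBUTED = 8   # lifetime revenue credited to source campaign

def furthest_stage(events):
    """Return the deepest stage a contact has reached, or None."""
    return max(events, default=None)

history = [LifecycleStage.TOUCHPOINT, LifecycleStage.LEAD, LifecycleStage.MQL]
assert furthest_stage(history) is LifecycleStage.MQL
```

Making the ordering explicit is what lets "stops at step 2 or 3" be a measurable statement rather than an impression.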
The Technical Implementation
The attribution system was built on several interconnected layers:
UTM Architecture and Campaign Taxonomy. Every campaign, every ad group, every individual ad across Google, Meta, and LinkedIn followed a strict naming convention that encoded the brand, channel, campaign type, target audience, and creative variant. This taxonomy wasn't just for reporting convenience — it was the backbone of the attribution system, enabling granular analysis of revenue per campaign, per audience, per creative approach.
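A strict naming convention is only useful if it is machine-parseable. A minimal sketch of the idea — the double-underscore separator and field names here are illustrative, not the actual taxonomy used in the engagement:

```python
# Campaign names encode brand | channel | campaign type | audience | creative
# variant, so every ad platform export can be joined back to the taxonomy.
FIELDS = ("brand", "channel", "campaign_type", "audience", "creative")

def parse_campaign_name(name: str) -> dict:
    """Split a convention-following campaign name into labelled fields."""
    parts = name.split("__")
    if len(parts) != len(FIELDS):
        raise ValueError(f"Malformed campaign name: {name!r}")
    return dict(zip(FIELDS, parts))

row = parse_campaign_name("brandA__google__search__gcse-maths-london__v2")
```

The validation step matters as much as the parsing: a name that fails the convention is caught at launch, not discovered months later as an unattributable row in the revenue report.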
CRM Lifecycle Tracking. Within HubSpot, I configured lifecycle stages with explicit predicate logic for each transition. A lead didn't become an MQL because someone subjectively decided it was qualified — it became an MQL when it met specific, documented criteria. This removed ambiguity from the pipeline and made every stage transition a testable event.
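The shape of that predicate logic, sketched in code — the field names and the serviced-region and subject lists are illustrative assumptions, not the engagement's actual HubSpot properties:

```python
from dataclasses import dataclass

# Illustrative qualification lists (assumptions, for the sketch only).
SERVICED_REGIONS = {"london", "manchester", "birmingham"}
OFFERED_SUBJECTS = {"maths", "english", "sciences"}

@dataclass
class Lead:
    expressed_interest: bool
    region: str
    subject: str

def is_mql(lead: Lead) -> bool:
    """MQL = documented criteria met, not someone's subjective judgment."""
    return (
        lead.expressed_interest
        and lead.region.lower() in SERVICED_REGIONS
        and lead.subject.lower() in OFFERED_SUBJECTS
    )

assert is_mql(Lead(True, "London", "Maths"))
assert not is_mql(Lead(True, "Paris", "Maths"))
```

Because the predicate is explicit, every stage transition becomes a testable event: two people looking at the same lead will always agree on whether it is an MQL.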
Revenue Connection. This was the hardest part. Connecting a student's ongoing monthly tuition payments back to the marketing campaign that originally acquired them required integration between the marketing CRM, the booking/payment system, and the financial reporting layer. I built this connection using a combination of custom properties, workflow automation, and cross-system data mapping.
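The core of that connection is a join across systems: payment records keyed by contact, credited back to the campaign captured at acquisition. A sketch under assumed table shapes (the keys and figures are hypothetical):

```python
from collections import defaultdict

# CRM side: contact id -> originating campaign (captured via UTM at lead time).
contacts = {
    "c1": "brandA__google__search__gcse-maths-london__v2",
    "c2": "brandB__meta__remarketing__parents__v1",
}

# Booking/payment side: (contact id, payment amount in £).
payments = [("c1", 160.0), ("c1", 160.0), ("c2", 320.0)]

revenue_by_campaign = defaultdict(float)
for contact_id, amount in payments:
    campaign = contacts.get(contact_id)
    if campaign:  # untracked contacts (e.g. phone leads) drop out here
        revenue_by_campaign[campaign] += amount
```

The `if campaign` branch is also where the "tracked floor" comes from: revenue from contacts with no captured source is real, but it cannot be credited to a channel.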
Cross-Brand Reporting. All three brands fed into a unified Looker Studio dashboard infrastructure that gave leadership portfolio-level visibility for the first time. Each brand had its own detailed reporting (spend, MQLs, SQLs, revenue, ROAS, CPA trends), and the portfolio view aggregated these into a single dashboard showing relative performance, budget allocation efficiency, and portfolio-level ROI.
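The portfolio roll-up is simple once per-brand figures exist — brand-level ROAS alongside an aggregate. The figures below are illustrative, not the engagement's actual numbers:

```python
# Per-brand spend and attributed revenue (illustrative figures in £).
brands = {
    "Brand A": {"spend": 100_000, "revenue": 250_000},
    "Brand B": {"spend": 60_000, "revenue": 190_000},
    "Brand C": {"spend": 40_000, "revenue": 130_000},
}

for name, b in brands.items():
    b["roas"] = b["revenue"] / b["spend"]  # return on ad spend per brand

portfolio_roas = (
    sum(b["revenue"] for b in brands.values())
    / sum(b["spend"] for b in brands.values())
)
```

Brand-level ROAS tells you where the next pound is best spent; the portfolio aggregate is the single number leadership reviews.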
What Attribution Revealed
The attribution data transformed how the business made decisions. Some examples:
Channel reallocation. Before attribution, budget was allocated roughly equally across brands and channels. Attribution revealed that Brand B (the premium agency) consistently delivered higher ROAS than Brand A (the marketplace), despite receiving less budget. This led to strategic reallocation that improved portfolio-level ROI.
Creative strategy. Attribution data showed that different creative approaches converted differently by brand. The marketplace brand performed better with urgency-driven messaging ("Find a tutor this week"), while the agency brand performed better with quality-driven messaging ("University-level expertise"). These insights couldn't have been discovered without end-funnel attribution, because lead volume didn't predict revenue — a campaign could generate high lead volume but low revenue conversion, or vice versa.
Seasonal patterns. The data revealed seasonal revenue patterns that didn't match seasonal lead patterns. Lead volume peaked in September (back-to-school) and January (New Year resolutions), but revenue per lead peaked during exam season (April-May) when students were most motivated to commit to ongoing tutoring. This led to counter-intuitive budgeting: spending more during traditionally "quiet" periods when conversion rates were higher.
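The divergence between lead volume and revenue per lead is easy to show with toy monthly figures (illustrative numbers, not the engagement's actual data):

```python
# month -> (lead volume, average revenue per lead in £)
monthly = {
    "Jan": (2600, 65),
    "May": (1400, 120),   # exam season: fewer leads, far higher intent
    "Sep": (3000, 70),    # back-to-school: peak volume
}

revenue = {m: leads * rpl for m, (leads, rpl) in monthly.items()}
best_volume_month = max(monthly, key=lambda m: monthly[m][0])
best_rpl_month = max(monthly, key=lambda m: monthly[m][1])
```

A lead-volume dashboard would tell you to concentrate budget in September; a revenue-per-lead view makes the case for spending into the "quiet" exam-season months instead.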
The Dual Pipeline Architecture
Student Acquisition (Demand Side)
The student acquisition pipeline was the primary revenue driver, handling the majority of marketing spend across all three brands. Over the 3.5-year engagement, I managed approximately £2.7M in student-side PPC spend.
Google Ads was the primary channel, with campaigns structured by brand, subject, level, and geography. The campaign architecture was deliberately granular — rather than running broad campaigns and letting Google's algorithms sort out targeting, I built tight ad groups around specific search intent clusters. A parent searching for "GCSE maths tutor in London" saw an ad specifically about GCSE maths tutoring in London, landing on a page specifically about GCSE maths tutoring in London. This granularity drove higher quality scores, lower CPCs, and more relevant lead-to-customer matching.
Meta (Facebook/Instagram) was used for remarketing, lookalike audiences, and upper-funnel awareness campaigns. The integration between Meta and the CRM allowed me to build custom audiences based on lifecycle stage — retargeting MQLs who hadn't converted, creating lookalike audiences from paying customers, and running awareness campaigns to audiences that matched the demographic profile of high-LTV students.
The results on the student side:
- 82K+ MQLs generated across the portfolio
- ~14K SQLs
- MQL volume scaled +140% (from ~11K to ~27K annually)
- MQL CPA reduced 37% (from ~£40 to ~£25) over a period in which volume scaled +71%
That CPA reduction while scaling volume is the metric I'm most proud of on the student side. It's easy to reduce CPA by being more selective — just tighten your targeting and accept lower volume. It's easy to scale volume by relaxing your criteria — just broaden your targeting and accept higher CPA. Doing both simultaneously requires genuine infrastructure improvement: better landing page conversion rates, more effective ad creative, tighter audience targeting, and optimised bidding strategies working together.
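The arithmetic behind the claim: spend can rise while CPA falls, provided MQL volume grows faster than spend. The spend figures below are illustrative back-calculations from the reported CPA and volume numbers, not reported figures:

```python
# CPA = spend / MQLs. Illustrative before/after figures in £.
before_spend, before_mqls = 440_000, 11_000
after_spend, after_mqls = 675_000, 27_000

cpa_before = before_spend / before_mqls        # £40 per MQL
cpa_after = after_spend / after_mqls           # £25 per MQL
volume_growth = after_mqls / before_mqls - 1   # roughly +145%
cpa_change = cpa_after / cpa_before - 1        # −37.5%
```

Spend grows ~53% here while volume grows ~145%, which is exactly the shape that infrastructure improvement produces: each marginal pound buys more qualified leads than the last.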
Tutor Recruitment (Supply Side)
The supply side of a two-sided marketplace is often overlooked in marketing, but it's existential. Without enough qualified tutors, the marketplace can't fulfil student demand, and enrolment bottlenecks kill growth.
I built the tutor recruitment pipeline from scratch. This required a completely different approach from student acquisition:
Different channels. Tutor recruitment performed best through different channels than student acquisition. LinkedIn and targeted job-board advertising reached qualified tutors more effectively than Google search.
Different qualification logic. Tutor leads needed to be qualified on subject expertise, degree level, DBS check status, availability, and teaching experience — a completely different set of criteria from student leads.
Different CRM objects. Tutor applications needed their own pipeline in the CRM with their own lifecycle stages, their own qualification predicates, and their own automation sequences.
The results on the tutor side:
- ~£200K spend on tutor recruitment over the engagement
- Thousands of applications generated
- Sub-£10 CPA — dramatically lower than student acquisition, but with higher qualification requirements
The supply-side pipeline is a good example of why dual-pipeline architecture matters. If you try to run tutor and student acquisition through the same CRM pipeline with the same qualification logic, you get unusable data. Students and tutors have fundamentally different conversion criteria, different lifecycle stages, different value calculations, and different retention dynamics. They need separate pipelines with separate reporting, unified at the portfolio level.
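The structural point can be sketched as two record types with entirely different qualification predicates, unified only at the reporting layer. Field names and thresholds are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class StudentLead:
    subject: str
    level: str              # e.g. "GCSE", "A-level"
    budget_confirmed: bool

@dataclass
class TutorApplicant:
    subjects: list
    degree_level: str
    dbs_checked: bool       # DBS check status, per the criteria above
    available_hours: int

def qualify_student(s: StudentLead) -> bool:
    return s.budget_confirmed

def qualify_tutor(t: TutorApplicant) -> bool:
    # Illustrative threshold: a qualified tutor needs a verified DBS check,
    # a stated degree level, and minimum weekly availability.
    return t.dbs_checked and t.degree_level != "" and t.available_hours >= 5
```

Forcing both record types through one predicate would mean either students answering DBS questions or tutors confirming tuition budgets — which is how unusable pipeline data gets made.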
The CRO Programme
With keywords in the education space commanding significant per-click costs, conversion rate optimisation wasn't a nice-to-have — it was a direct lever on unit economics. A 1% improvement in landing page conversion rate at these volumes translated to meaningful CPA reduction.
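The unit-economics lever in numbers: at a fixed cost per click, CPA is inversely proportional to landing page conversion rate. The figures are illustrative:

```python
cpc = 2.00          # £ per click (illustrative)
cvr_before = 0.05   # 5% of clicks become leads
cvr_after = 0.06    # one percentage point better

cpa_before = cpc / cvr_before   # £40.00 per lead
cpa_after = cpc / cvr_after     # ≈ £33.33 per lead
```

A one-percentage-point CVR gain here cuts CPA by roughly 17% with no change to the media buying at all — which is why CRO is a direct lever on unit economics rather than a cosmetic exercise.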
I ran a systematic CRO programme using VWO (Visual Website Optimizer) for A/B testing, with a structured cadence of hypothesis, test, analysis, and implementation.
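The analysis step of that cadence rests on the standard two-proportion z-test. VWO computes this for you; the sketch below shows the underlying reasoning with illustrative figures:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test on conversion counts."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 5.0% control vs 5.8% variant on 10,000 visitors each (illustrative).
z, p = ab_significance(500, 10_000, 580, 10_000)
```

The discipline this enforces is the important part: a test only graduates to implementation when the lift clears a significance threshold, not when it merely looks better for a week.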
What We Tested
Landing page structure. I tested multiple structural approaches: long-form pages with comprehensive information versus short-form pages focused on a single action, multi-step forms versus single-step forms, video testimonials versus written testimonials.
Trust elements. In the tutoring market, trust is the primary conversion barrier — parents are entrusting their children's education to strangers. I tested the placement and prominence of trust signals: Trustpilot reviews, DBS check badges, university degree verification badges, professional association logos, and parent testimonial quotes.
Mobile UX. A significant portion of traffic was mobile (parents browsing on phones during commutes, students searching between classes). I tested mobile-specific page layouts, form designs, and call-to-action placements.
Form design. The form itself was a critical conversion point. I tested field count (fewer fields = higher submission rate but lower lead quality), field ordering (putting the easiest fields first reduces abandonment), and progressive profiling (collecting additional information after the initial submission).
CRO Results
The CRO programme contributed directly to the 37% CPA reduction over the engagement. While I can't attribute the entire CPA reduction to CRO alone (bidding strategy improvements, audience refinement, and creative optimisation all contributed), the landing page conversion rate improvements were a significant factor.
Navigating AI Disruption
ChatGPT's launch in November 2022 created an immediate strategic challenge. The arrival of AI tutoring tools didn't make human tutoring obsolete, but it changed the value proposition. The marketing strategy had to adapt.
The Strategic Response
The response was nuanced — neither denying AI's impact nor treating it as an existential threat. The positioning evolved to emphasise what human tutoring provides that AI cannot:
Personalised accountability. AI can answer questions, but it can't monitor a student's progress over months, identify patterns in understanding, or hold them accountable to a study plan. Human tutors provide ongoing relationship and accountability that AI tools don't attempt.
Assessment and feedback. AI can evaluate answers, but it can't assess a student's essay-writing technique, their mathematical reasoning process, or their exam technique. Human tutors provide qualitative assessment that requires pedagogical expertise.
Motivation and support. For many students, the tutoring relationship is as much about motivation and emotional support as it is about subject knowledge. AI can't provide the human connection that keeps a struggling student engaged.
The Marketing Implication
This strategic repositioning required marketing infrastructure changes:
Messaging updates. Ad copy, landing page content, and nurture sequences were updated to emphasise the human, relational aspects of tutoring. The value proposition shifted from "get help with your homework" (which AI now provides for free) to "get a dedicated mentor who understands your learning style and holds you accountable to your goals" (which AI cannot).
Content strategy. I produced content addressing the AI question directly — articles and resources that helped parents understand when AI tutoring tools are sufficient and when human tutoring is worth the investment. This honest positioning built trust and attracted prospects who had already considered AI alternatives and wanted something more.
Targeting refinements. The audience segments most likely to value human tutoring over AI tools were identified through attribution data: families seeking exam-specific preparation, students with specific learning needs, and parents who had tried AI tools and found them insufficient. Campaign targeting was refined to prioritise these segments.
The Handover
The final and most important phase of this engagement was the handover. From day one, I designed every system with the assumption that I would eventually leave and someone else would operate it.
Training Replacements
I trained and onboarded two PPC managers as permanent in-house replacements. This wasn't a cursory handover — it was a structured training programme:
System documentation. Every campaign structure, every bidding strategy, every automation workflow was documented. Not just what it did, but why it was designed that way and what to monitor.
Graduated ownership. I transferred responsibility gradually — starting with campaign monitoring, then moving to bid adjustments, then creative testing, then strategic decisions. Each stage was accompanied by hands-on training and review.
Playbook creation. I created operational playbooks covering common scenarios: how to respond to CPA increases, how to launch campaigns for new subjects, how to handle seasonal budget adjustments, how to interpret attribution data.
Why Handover Matters
The build-to-handover model is central to how I work. I don't build systems that require my continued presence to operate. I don't create dependencies on my personal knowledge. I build infrastructure that can be inherited, understood, and operated by competent in-house teams.
This is commercially important for the client: they get a working system and the capability to operate it, not a consultant they can't afford to lose. And it's strategically important for my practice: I work on a project basis, and the cleanest way to end a project is to hand over a machine that runs without me.
The Portfolio-Level View
Looking at the aggregate numbers across the full 3.5-year engagement:
| Metric | Value |
|---|---|
| Total ad spend managed | £2.9M+ (£2.7M student + ~£200K tutor) |
| Total attributed revenue | £8.5M+ (tracked floor — phone leads untracked, one brand had incomplete revenue tracking) |
| Total MQLs generated | 82K+ (student side) |
| Total SQLs generated | ~14K |
| MQL volume growth | +140% (11K → 27K annually) |
| MQL CPA reduction | −37% (£40 → £25) |
| Peak brand ROAS | 3.11× (Brand B, 2024) |
| Portfolio ROAS | ~2.9× aggregate |
| Engagement duration | 3.5 years (Jul 2022 – Feb 2026) |
The £8.5M Floor
I describe the £8.5M attributed revenue figure as a "tracked floor" rather than a total because there are two known gaps in the attribution:
Phone leads. A percentage of students enquired by phone rather than through tracked digital forms. These leads entered the pipeline but without campaign attribution, meaning the revenue they generated couldn't be credited to a specific marketing channel.
Incomplete revenue tracking on one brand. One of the three brands had limitations in its revenue tracking infrastructure that meant a portion of downstream revenue wasn't fully attributable.
The true revenue impact of the marketing activity is likely higher than £8.5M, but I report the trackable figure because it's the one I can verify.
What This Engagement Teaches
Attribution Is the Foundation
If I could distil this engagement to a single lesson, it would be: build attribution before you build anything else. The value of every other marketing activity — campaign strategy, creative testing, CRO, audience development — is dramatically increased when you can measure its actual revenue impact, not just its lead volume impact.
Dual Pipelines Require Dual Infrastructure
Two-sided marketplaces can't run both sides of their pipeline through a single CRM workflow with a single set of lifecycle stages. Supply and demand have different qualification criteria, different channel strategies, different cost structures, and different value calculations. They need separate infrastructure with unified reporting.
Build for Handover
If you're bringing in an external operator — whether a contractor, fractional hire, or agency — the most valuable thing they can do is build infrastructure that outlasts them. The engagement that ends with "we can't function without you" is a failure. The engagement that ends with "we have a working system, trained operators, and documentation" is a success.
Disruption Requires Adaptable Infrastructure
The AI disruption that hit the EdTech market during this engagement could not have been predicted when I started. The attribution and campaign infrastructure I'd built was flexible enough to accommodate a fundamental repositioning of the value proposition without requiring a structural rebuild. This is the argument for building infrastructure properly from the start rather than hacking together quick solutions — properly built systems can adapt to strategic pivots; hacked-together systems break.