Richmond UNDERDOG ACADEMY
Friday, May 8, 2026
CRM Architecture Audit

ActiveCampaign Architecture Audit

Underdog Academy  |  Data Layer Review & Remediation Plan

Before we get into findings, a note on scope. ActiveCampaign is your email and SMS engine. It is not your sales CRM. Sales pipeline lives in a separate tool. So the audit measures the architecture against what an email and SMS engine needs, not against full-CRM standards. Some of what follows would matter more if AC were carrying sales-pipeline weight. We’re focused on what actually matters for the marketing engine.

Four specific projects on your roadmap are blocked by the same underlying gap. They look unrelated on the surface, but they all need the same data layer to exist underneath the email and SMS engine before they can run. Fix the data layer once, and all four projects unblock at the same time.

  • Book buyer post-purchase nurture. Roughly 300 to 400 books sold per month, with no structured post-purchase sequence today. Even a 5% lift to masterclass registration from a proper nurture means 15 to 20 additional warm registrants per month at zero ad cost. Blocked because lifecycle stage and offer state aren’t reliably tracked in the account.
  • Post-event lead lifecycle and no-show recovery. Roughly 75 to 85% of masterclass registrants are no-shows currently sitting in a dead zone after the 17-day cart close. Even a 1 to 2% recovery × 12 events per year is real volume. Blocked because there’s no clean disposition field with a controlled vocabulary, and event-bound no-show tags like TCNoShowSEP28 (953 contacts) and TCSEP24Noshowup (469 contacts) make reliable segmentation impossible.
  • Show rate optimization. Show rate is the core revenue lever. Optimizing it requires reliable behavioral segmentation by source, lifecycle stage, and engagement state. The substrate data those segments would read from is not in place, so show rate optimization stays guess-driven.
  • Analytics extraction from the data you already have. Your account contains raw signal on opens, clicks, page visits, form submissions, purchases, attendance, UTM parameters. The architecture prevents querying it usefully. You can pull a fraction of the analytics your data should be producing. Channel mix, cohort comparison, time-to-conversion, source-aware behavior. All theoretically answerable from what’s in the account today, none of them practically answerable because the substrate is missing.

The headline finding: 47% of your tags sit below the useful threshold of ten contacts. Your list count has grown from 9 in 2019 to 1,048 today, with 207 created in the first four months of 2026 alone. Zero of the 17 standard fields a system this size needs are present in the schema. The platform is doing real work. The structure underneath it is six years of accumulation that nobody has stopped to refactor.

  • Tags: 1,235 (47% have fewer than ten contacts)
  • Lists: 1,048 (up from 9 in 2019)
  • Custom Fields: 145 (60% are TEXT type)
  • Standard Fields Present: 0 of 17 (the substrate gap)

About This Audit

What it is, why it matters, how to read it

What is this document?

This is an inventory of every primitive in your ActiveCampaign account. Tags, lists, segments, custom fields, deal pipelines, and automations. Each one measured against a standard data architecture framework. It identifies what is already in place, what is missing, and what to fix to unlock the four projects above.

Why does this matter?

Most CRM frustration is structural, not a platform defect. The system you have today is what built up over six years of campaigns, integrations, consultants, and one-off fixes. Those decisions accumulated into a structure that’s now in the way of the things you want to do next.

This audit makes that structure visible. Once you can see it, you can fix it deliberately and in the right order.

How to read this

The executive summary above already named the four projects this work unblocks. The rest of the document is the evidence behind that claim and the path forward. Each section has a job:

  • Inventory Snapshot. The numbers. What you actually have in the account.
  • Health Snapshot. A scorecard of each primitive, color-coded green, yellow, or red.
  • Structural Patterns. The four specific things that are getting in the way, with examples drawn from your data.
  • The Path Forward. A four-phase remediation plan, ordered Substrate, Correctness, Hygiene, Polish.
  • Next Steps. What happens after this document.

Health colors used throughout: Healthy means working as it should. Warning means functional but accumulating drift. Critical means actively in the way of work you want to do.

1. Inventory Snapshot

What is actually in your account today

A scoreboard of the primitives the architecture is built from. The headline numbers are large because the system has been running for six years. The interpretive note under each one is the part that matters. The count is fine. The structure underneath the count is the question.

  • Tags: 1,235 (23% have zero contacts)
  • Tag Utilization: 47% of tags below the useful threshold
  • Lists: 1,048 (18% have zero subscribers)
  • 2026 List Growth: 207 created in 4 months
  • Segments: 3,885 (22% are orphaned)
  • Custom Fields: 145 (60% are TEXT type)
  • Deal Pipelines: 16 (most last touched 2021–23)
  • Standard Fields: 0 of 17 (the substrate gap)
  • Automations: 800 (543 paused, mostly 2020–21)
  • Active Automations: 236 (real operational coverage)
  • Engagement Tagging: 4.3M+ entries in the core automation
  • UTM Source Field: free text (dozens of variants, no governance)

How to read the snapshot

The three numbers that matter most for unblocking work: 0 of 17 standard fields present (the substrate gap), 1,235 tags with no naming convention (the findability problem), and 1,048 lists, 207 in 2026 alone (the runaway growth). Everything else in the audit is downstream of these.

2. Health Snapshot

Each primitive, scored against the standard

A pill-coded scorecard. Each row is one piece of the architecture, the metric used to judge it, what your account currently shows, and the resulting status. The red sits in the structural rows: tag-family duplication, list growth, source attribution, and substrate fields. The green sits in the engagement tagging system that handles open and click behavior.

Primitive | Metric | Current | Status
Tags | % of tags with more than 10 contacts | 46.3% | Warning
Tags | Suspected duplicate tag families (3 or more versions) | 30+ | Critical
Custom Fields | TEXT fields storing typed data | 26.9% | Warning
Custom Fields | Standard lifecycle and source fields present | 0 of 17 | Critical
Pipelines | Pipelines updated since 2024 | 2 of 16 | Warning
Pipelines | Pipelines functioning as state machines | 1 to 2 of 16 | Warning
Lists | Lists with 10 or more active subscribers | 55.4% | Warning
Lists | List creation cadence (governance) | 347 in 2025, 207 in 4 mo of 2026 | Critical
Lists | List versus Segment role separation | Not respected | Critical
Segments | Active reference rate | ~30% | Critical
Source / Attribution | First-touch and last-touch split present | Absent | Critical
Source / Attribution | UTM Source as a controlled vocabulary | Free text, 50+ values | Critical
Lead Score | Active scoring (templates exist, paused) | None | Warning
Automations | Active rate | 29.5% | Warning
Engagement Tagging | Open and click state recomputed | Active, 4.3M+ entries | Healthy
Tags | Average tags per contact (volume) | ~8.8 | Healthy

3. The Four Structural Patterns

What is getting in the way, what it is costing you, and what fixing it changes

Four patterns account for most of the day-to-day friction. Ranked by what hurts every day, not by what’s easiest to point at. For each pattern: what it is, an example pulled from your account, what it is currently costing you, and what changes when it is fixed.

Pattern 1: Nothing in your tag system is findable

The pattern

There’s no naming convention. No prefix system. No governance. The same idea exists under multiple inconsistent names. When you or anyone on the team needs to find the right tag for a campaign send or a segment query, the search bar can’t help you. You get a list of plausible-looking tags and have to manually figure out which one is current. So the default for every new campaign becomes “just make a new tag to be safe.” That’s how the sprawl perpetuates itself.

What it looks like in your account

Of 1,235 tags, 23% have zero contacts (created and never used). 47% have fewer than ten contacts. Tags like Engaged, OPENS:Engaged, OPENS:Recent activity, and CLICKS:Engaged all encode overlapping ideas under different conventions. Buyer tags for the same offer exist under eight or more variants. Tags from 2019 sit alongside tags from 2026. Some have double-space typos (Purchased UA - TCMAR4 - $2997) that make duplicates invisible to a human scanning the tag list. The search-as-you-type filter doesn’t help when the same concept is spelled four different ways.
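
The double-space and casing variants above are exactly the kind of duplicates a normalization pass can surface mechanically. A minimal sketch of that pass; the tag names below are illustrative, modeled on the patterns in this section rather than pulled from the account:

```python
import re
from collections import defaultdict

def normalize(tag: str) -> str:
    """Collapse runs of whitespace and fold case so near-duplicate names group."""
    return re.sub(r"\s+", " ", tag).strip().lower()

# Illustrative names modeled on the patterns described above.
tags = [
    "Purchased UA - TCMAR4 - $2997",
    "Purchased UA -  TCMAR4 - $2997",  # double-space typo variant
    "OPENS:Engaged",
    "opens:engaged",
    "status-book-buyer",
]

families = defaultdict(list)
for t in tags:
    families[normalize(t)].append(t)

# Any family with more than one member is a suspected duplicate set.
duplicates = {k: v for k, v in families.items() if len(v) > 1}
```

Run over the full 1,235-tag export, a pass like this is how the "30+ suspected duplicate families" figure in the health snapshot gets confirmed tag by tag.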

What this is currently costing you

Every campaign send starts with “which tag do I use” and too often ends with a new tag created to be safe. Every uncertainty produces another tag.

You personally can’t find tags you know exist. Your team can’t either. Every send is built on uncertainty about whether the audience is right.

Reports built on tag queries quietly disagree depending on which tags the analyst remembered to include. The same question asked twice gets two different numbers.

The cognitive load of operating in a 1,235-tag UI without a system is real. Every campaign send is slowed by it. Multiply that across the volume of sends per year.

The sprawl is accelerating. Without a convention, every new tag creates more confusion for the next one. The longer it stays uncleaned, the larger the cleanup gets.

What changes when fixed

One convention. Every tag prefixed by category (status-, event-, behavior-, cohort-, tier-, exclusion-). The search bar becomes useful again. You and the team find what you need without guessing. New tags get created under the convention or they don’t get created. The sprawl growth rate goes from exponential to flat. The 1,235 number drops toward 300 to 400 maintained tags through deprecation and consolidation, all findable.
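
A convention like that is enforceable by machine, not just by discipline. A minimal conformance check, assuming lowercase kebab-case after the category prefix (the casing rule is our assumption; the prefixes are the ones named above):

```python
import re

# Category prefixes from the proposed convention.
PREFIXES = ("status-", "event-", "behavior-", "cohort-", "tier-", "exclusion-")

def conforms(tag: str) -> bool:
    """True if the tag carries a known prefix and sticks to lowercase kebab-case."""
    return tag.startswith(PREFIXES) and re.fullmatch(r"[a-z0-9-]+", tag) is not None
```

New tags get checked before creation; legacy names fail the check and get queued for consolidation, which is how the growth rate goes flat instead of relying on memory.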

Pattern 2: One buyer tag per event, instead of one buyer tag per offer

The pattern

The biggest single contributor to Pattern 1. Every Tiny Challenge masterclass spawns four to eight separate buyer tags: one per price point, per replay-versus-live variant, per event. Across 50+ masterclasses a year over multiple years, this one pattern accounts for several hundred of the 1,235 tags in the account.

What it looks like in your account

The TC March 4 masterclass alone generated tags including Purchased UA - TCMAR4 - $1197, Purchased UA - TCMAR4 - $2997 (with a double space typo), Purchased UA - TCMAR4REPLAY - $1197, and similar variants for the $4997 and $395 price points. The same family pattern repeats for TCJAN, TCFEB, TCMAY, TCAUG, TCSEP, TCOCT, and so on. There are at least 16 distinct family patterns just for UA purchases. Asking the question “all UA buyers ever” today requires checking dozens of tags at once.

What this is currently costing you

“All UA buyers ever” is a question your team cannot reliably answer in real time. The number changes depending on which tags the person remembered to include. Reports don’t reconcile across people.

Buyer suppression leaks. Your UA buyers keep receiving cold-pitch UA emails because the suppression segment can’t OR together every historical buyer-tag variant. Some of those emails land in the inboxes of people who paid you four-figure amounts a year ago. Every time it happens, it’s a trust hit on exactly the customers you most want to ascend to higher-ticket offers later.

Every new masterclass adds four to eight new buyer tags. Every “all UA buyers” segment in the account silently goes out of date the moment those tags are created. Your team has to either remember to update every segment by hand, or accept that segments quietly decay between campaigns.

Nobody bills you for “the segment was wrong this time.” But every send to a stale or incomplete buyer audience is a small revenue leak that doesn’t show up anywhere. The cumulative cost is invisible operational drag.

What changes when fixed

One stable buyer tag per program (status-ua-customer, status-p1-customer, status-book-buyer) plus a custom field that records the offer, event code, and price tier. “All UA buyers ever” becomes a single tag check. Ascension automations can target reliably. Customer-only communication suppression actually works. The per-event tag explosion stops at the source.
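
The consolidation itself is scriptable: each legacy tag name already encodes the program, event, variant, and price, so a one-time parse can populate the new custom fields. A sketch, assuming the legacy names follow the Purchased <program> - <event>[REPLAY] - $<price> shape shown above (the output field names are illustrative):

```python
import re

# Legacy shape: "Purchased UA - TCMAR4 - $2997" (tolerates the double-space typo).
LEGACY = re.compile(
    r"Purchased\s+(?P<program>\w+)\s+-\s+"
    r"(?P<event>[A-Z]+\d+)(?P<replay>REPLAY)?\s+-\s+\$(?P<price>[\d,]+)"
)

def parse_legacy_buyer_tag(tag: str):
    """Return the structured fields a legacy buyer tag encodes, or None."""
    m = LEGACY.search(tag)
    if m is None:
        return None  # non-conforming name: flag for manual review
    return {
        "buyer_tag": f"status-{m.group('program').lower()}-customer",
        "event_code": m.group("event"),
        "price_tier": int(m.group("price").replace(",", "")),
        "replay": m.group("replay") is not None,
    }
```

Running this over the export turns dozens of Purchased UA - TC… variants into one status-ua-customer tag plus event-code and price-tier fields, with the unparseable stragglers flagged rather than silently dropped.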

Pattern 3: Lists used as send audiences instead of opt-in lists

The pattern

ActiveCampaign distinguishes two primitives. A List is a record of someone explicitly opting in. A Segment is a saved query (“everyone who matches these conditions right now”). In your account, that distinction has collapsed. New lists are created for every campaign send. The result is runaway list growth: 9 in 2019, 29 in 2020, 18 in 2021, 63 in 2022, 136 in 2023, 239 in 2024, 347 in 2025, and 207 already in the first four months of 2026.

What it looks like in your account

Examples pulled from the current list set: Bootcamp May 2026 - Non Clients, TC QnA Mar 10 2026 Invite, Bootcamp May 2026 Invite Existing Clients, Bootcamp May 2026 - Invite List Clean, Correction List for Bootcamp, TC Registrant Aug - Oct 2024 Exclude UA. Every one of these is a campaign send audience or a saved query. None is a real opt-in event. Of your 1,048 lists, 189 (18%) have zero active subscribers and 467 (44.6%) have fewer than ten.
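
A first-pass triage of which lists are really campaign audiences can be done on names alone. A sketch; the hint words are our assumption, drawn from the example names above, and borderline names still need a human look:

```python
import re

# Names embedding months, years, "invite", "exclude", etc. are almost certainly
# campaign send audiences or saved queries, not genuine opt-in lists.
CAMPAIGN_HINTS = re.compile(
    r"invite|exclude|correction|clean|registrant|\b(19|20)\d{2}\b|"
    r"\b(jan|feb|mar|apr|may|jun|jul|aug|sep|oct|nov|dec)\b",
    re.IGNORECASE,
)

def probable_campaign_audience(list_name: str) -> bool:
    """Heuristic triage: True means 'candidate for List-to-Segment migration'."""
    return CAMPAIGN_HINTS.search(list_name) is not None
```

Applied to the 1,048-list export, a heuristic like this produces the migration worklist for Phase 2 rather than requiring someone to eyeball every dropdown entry.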

What this is currently costing you

Subscribers who unsubscribe from one campaign list keep receiving sends from operationally identical campaign lists. They think they unsubscribed from your marketing. They didn’t, because they only unsubscribed from one of the dozens of lists they happen to be on. This produces complaints and degrades sender reputation across the whole account.

ActiveCampaign per-contact billing on some pricing tiers can multiply effective contact count when contacts are spread across many lists. With 1,048 lists and significant overlap, you may be paying meaningfully more than you would under a clean architecture.

1,048 entries in the list dropdown is unusable. Your team can’t navigate the UI efficiently. “Which list do I send this to” requires institutional knowledge that nobody has documented.

Compliance posture is fragile. If a regulator asks “show me when this person opted in to receive marketing email,” the answer requires manually checking 30+ lists. The system cannot generate it cleanly.

Every new campaign creates a new list and a new maintenance burden. The growth curve is exponential. At the current rate you are on pace for 600+ new lists in 2026 alone. Each one needs sourcing, syncing, and eventually retiring.

What changes when fixed

Lists drop from 1,048 to roughly 30. Each remaining list represents a real subscription decision (Marketing Subscribers, Customer-Only Communications, NTS Subscribers, lead-magnet-specific opt-in lists). Campaign send audiences move to Segments where they belong. ActiveCampaign per-contact billing exposure goes down because contacts are no longer fanned across dozens of lists. Unsubscribe state stops leaking sideways across operationally distinct campaigns. The growth curve flattens.

Pattern 4: No source-of-truth fields for the email and SMS engine

The pattern

The account is missing the small set of fields an email and SMS engine needs to fire behavioral sequences accurately and to extract analytics from the signal it’s already collecting. Where is this contact in the journey today? When did they hit each stage? Where did they originally come from? Where did they last come from? What was the last sales-call outcome? Have they consented to marketing email and when? None of these are tracked as a single source of truth. Every sequence and every report has to reconstruct the answer indirectly from tag soup.

What it looks like in your account

Specifically missing: a lifecycle stage field (subscriber, lead, MQL, SQL, customer, advocate). Lifecycle date fields (first opt-in, first engagement, first purchase, last purchase, last engagement). A first-touch versus last-touch source split with a controlled channel vocabulary. A controlled disposition field. Email marketing consent fields with date, source, and jurisdiction.

UTM Source is captured but stored as free text. Distinct values likely run into the dozens (Facebook, facebook, FB, fb, Meta, meta, etc.). No first-touch immutability automation, so every visit overwrites the source data. Every contact’s source field reflects their most recent visit, not where they originally came from. The historical signal is gone.
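
Both gaps come down to two small pieces of logic: a lookup that folds raw UTM variants into a controlled vocabulary, and a write rule that sets first-touch exactly once. A sketch; the canonical channel names are illustrative, not a vocabulary the account currently defines:

```python
# Every raw UTM variant folds into one canonical channel value.
CHANNEL_MAP = {
    "facebook": "paid-meta", "fb": "paid-meta", "meta": "paid-meta",
    "google": "paid-google",
    "podcast": "podcast",
}

def canonical_channel(raw: str) -> str:
    return CHANNEL_MAP.get(raw.strip().lower(), "unmapped")

def record_touch(contact: dict, raw_utm_source: str) -> dict:
    """Last-touch always updates; first-touch is written once, never overwritten."""
    channel = canonical_channel(raw_utm_source)
    contact.setdefault("first_touch_channel", channel)  # immutable after first write
    contact["last_touch_channel"] = channel
    return contact

# Two visits from different sources: first-touch survives the second visit.
contact = {}
record_touch(contact, "Facebook")
record_touch(contact, "google")
```

The setdefault line is the entire first-touch immutability automation; it is the absence of that one rule that has been erasing historical source data on every visit.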

What this is currently costing you

Behavioral sequences fire to the wrong people. The 17-day cart close goes to people who already bought because there’s no clean buyer flag the sequence can suppress against. No-show recovery goes to people who actually showed up because no-show is captured as event-bound tags rather than a unified disposition field. Subscribers receive emails that don’t match their actual state. They learn to ignore your sends.

You can’t extract the analytics your account is sitting on. The raw signal is in there. Opens, clicks, attendances, purchases, UTM parameters going back years. The architecture prevents you from turning that signal into answers. “Of UA buyers, what fraction first arrived through paid Meta versus organic versus podcast appearances” is a fundamental question with no practical answer. “What’s our time-from-first-opt-in to first purchase” is unknowable. “How did Q1 2026 buyer behavior compare to Q1 2025” can’t be computed.

Source-aware messaging is impossible. Every paid Meta lead, every organic search lead, every Russell Brunson podcast referral, and every Foundr podcast referral gets the same sequence. They don’t behave the same way. The CRM should let you differentiate; it can’t, because the source data underneath every segment query is unreliable.

In a regulator inquiry, “did this person consent to marketing email on this date with this language” is unanswerable. The system tracks “they’re subscribed.” That’s a thin audit trail.

What changes when fixed

Behavioral sequences fire on accurate state. Buyer suppression works. Recovery sequences land on the people who actually need recovering.

The signal in your account becomes extractable. Cohort comparison, time-to-conversion, channel-by-channel buyer behavior, lifecycle aging, source-aware retargeting audiences. All of it queryable from the same data that’s been collecting in the account for years.

Source-aware messaging becomes practical. The paid Meta lead gets a different sequence from the Russell Brunson podcast referral. The email engine differentiates by acquisition channel because it knows the channel.

All four of the projects in the executive summary unblock here. This is the substrate phase.

For completeness: the deal pipelines

The pipeline situation, briefly

Your account has 16 deal pipelines. Most haven’t been touched since 2021 to 2023. Several encode non-state data as stages (calendar months in the 10-Day Challenge pipeline, past meeting dates in P1 COFFEE, coach assignments in RD Events). It’s a mess, but you don’t actively use deal pipelines for sales (sales lives in a separate tool), so this is dead infrastructure rather than active dysfunction. Worth mentioning in the audit for completeness. Not where the leverage is.

4. The Path Forward

Four phases, ordered by what unblocks what

The remediation work is staged into four phases. The order matters. Phase 1 has to be substantially complete before Phase 2 can start, because the substrate fields installed in Phase 1 are what Phase 2 migrates the existing data into. Phase 3 cleanup is independent of Phase 2 once Phase 1 is in place. Phase 4 is durable polish.

Phase 1 is the unblocking phase. It is what releases the four projects in the executive summary. Phases 2, 3, and 4 are the durable cleanup that prevents the system from re-accumulating the same debt.

Phase 1: Substrate

What this phase does

Installs the standard fields that should already exist underneath the email and SMS engine. This is the substrate phase. Without it, every fix downstream is patched on top of the same gaps.

What gets installed

Lifecycle stage field. A controlled dropdown (subscriber, lead, MQL, SQL, customer, advocate) so “where is this contact in the journey” has a single answer.

Lifecycle date fields. First opt-in, first engagement, first purchase, last purchase, last engagement. The fields cohort analysis and time-to-conversion reports depend on.

Source attribution fields. First-touch channel, last-touch channel, first-touch campaign, last-touch campaign, first-touch date, last-touch date. Channel as a controlled dropdown, not free text.

Disposition field. A fixed vocabulary (Take the Money, Send Resources, Not Now, Not the One, Won’t Pay, Need Spouse, Logistics, Unqualified, No Show, Reschedule). The thing recovery sequences need to read from.

Consent fields. Email marketing consent, given date, source, jurisdiction. Audit trail beyond “the platform handled the unsubscribe.”
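
Because every one of these fields is either a date or a controlled dropdown, writes can be validated at the edge instead of trusted. A sketch of the vocabularies as data; the stage and disposition values come from the plan above, while the channel values are illustrative placeholders for what gets agreed in Phase 1:

```python
VOCAB = {
    "lifecycle_stage": ["subscriber", "lead", "mql", "sql", "customer", "advocate"],
    "disposition": ["Take the Money", "Send Resources", "Not Now", "Not the One",
                    "Won't Pay", "Need Spouse", "Logistics", "Unqualified",
                    "No Show", "Reschedule"],
    # Illustrative channel vocabulary; the real list gets agreed in Phase 1.
    "first_touch_channel": ["paid-meta", "paid-google", "organic", "podcast",
                            "referral", "email"],
}

def valid_write(field: str, value: str) -> bool:
    """Reject any value outside the controlled vocabulary for that field."""
    return value in VOCAB.get(field, [])
```

This is the difference between a substrate and more tag soup: a value like “noshow” gets rejected at write time instead of becoming a fifth spelling that segments have to remember.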

Outcome: the four projects in the executive summary become buildable.

Phase 2: Correctness

What this phase does

Migrates the four structural patterns onto the new substrate. The substrate from Phase 1 gives the patterns somewhere to land. Without Phase 1 first, this work has nowhere to migrate to.

What gets migrated

Tag-naming convention rollout. One prefix system across the surviving tag set (status-, event-, behavior-, cohort-, tier-, exclusion-). The search bar becomes useful. New tags get created under the convention.

Buyer-tag consolidation. The dozens of Purchased UA - TC<date> - $<price> tag families collapse into one buyer tag per program plus structured custom fields. Old tags get archived for 90 days, then removed.

List to Segment migration. Campaign-send lists rebuilt as Segments. Real opt-in lists kept. Roughly 300 to 400 lists move to Segments. Roughly 500 to 600 dead lists get archived. The remaining ~30 lists are the actual subscription decisions.

Type-discipline migration. The 39 TEXT fields that are storing dates, numbers, or fixed-list values get rebuilt with the right field types. Backfilled where the old text values parse cleanly. Flagged for review where they do not.
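
The backfill step is mechanical for values that parse. A sketch of the parse-or-flag rule for the date-bearing TEXT fields; the accepted formats are assumptions, and the real set comes from sampling the fields:

```python
from datetime import datetime

# Formats to try, most common first; anything else is flagged for manual review.
FORMATS = ("%m/%d/%Y", "%Y-%m-%d", "%B %d, %Y")

def migrate_date(raw: str):
    """Return an ISO date string, or None to flag the value for review."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None
```

The same parse-or-flag shape applies to the numeric and fixed-list fields: clean values migrate automatically, and only the residue needs human eyes.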

Phase 3: Hygiene

What this phase does

Cleans up the accumulated sprawl that is not blocking anything but is making the account harder to navigate and audit.

What gets cleaned up

Tag retirement. The 284 zero-contact tags get removed after confirming no automation references them. The buyer-tag families archived in Phase 2 get permanently deleted after the 90-day window closes.

List archival. The 189 zero-subscriber lists move to archived state. The campaign-send lists migrated to Segments in Phase 2 get evaluated for permanent deletion after a quarter.

Segment cleanup. The 858 orphan segments and the 1,705 segments not updated since 2023 get audited and removed where confirmed unreferenced.

Field orphan retirement. Once Phase 2 type migrations are done, custom fields with very low non-null rates get retired.

Paused automation cleanup. The 543 paused automations from 2020 and 2021 get audited and removed where they no longer hold useful logic.

Phase 4: Polish

What this phase does

Locks in the structure with naming, organization, and documentation so the account does not re-accumulate debt over the next 12 to 24 months.

What gets locked in

Naming convention enforcement. A documented tag taxonomy (status-, behavior-, optin-, event-, interest-, cohort-, tier-, exclusion-) applied across the surviving tag set. Inconsistently named tags consolidated.

Field grouping. Custom fields organized into Lifecycle, Source, Sales, Engagement, Commerce, Compliance, and Operational groups so the field list is navigable instead of a wall.

Engagement-state taxonomy refinement. The current OPENS and CLICKS engagement tags consolidated into a unified engagement-state set with a sunset workflow that prunes deeply unengaged contacts before they damage deliverability.
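
What makes a sunset workflow enforceable is that the consolidated state can be a pure function of last-engagement age. A sketch, with illustrative thresholds; 30 and 180 days are our assumptions, not numbers from the audit:

```python
from datetime import date, timedelta

ENGAGED_WINDOW = timedelta(days=30)   # illustrative threshold
SUNSET_WINDOW = timedelta(days=180)   # illustrative threshold

def engagement_state(last_engaged: date, today: date) -> str:
    """Collapse the OPENS:/CLICKS: tag soup into one recomputable state value."""
    age = today - last_engaged
    if age <= ENGAGED_WINDOW:
        return "engaged"
    if age <= SUNSET_WINDOW:
        return "cooling"
    return "sunset-candidate"  # queued for the deliverability-protecting prune
```

One recomputed field replaces the overlapping OPENS: and CLICKS: tag families, and the sunset queue falls out of it for free.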

Lead-scoring review (optional). If useful for email-trigger automation later, the existing paused lead-scoring templates can be reviewed and activated. A lower priority, since the sales pipeline lives outside this tool.

Documentation handoff. A living architecture document for your team. Full primitive map, naming conventions, dropdown vocabularies, integration touchpoints, governance practices. Quarterly review cadence so the structure stays clean.

5. Next Steps

What happens after this document

The working session

The next step is a working session to walk through the remediation plan in detail and decide which phase to start with. Phase 1 is the unblocking phase. Once it is done, the four projects in the executive summary can begin in parallel.

What we will decide together

Whether Phase 1 runs as a focused first installment with the four downstream projects queued behind it, or whether Phase 1 and Phase 2 run together as a more substantial initial pass. Either way is workable. The deciding factor is appetite and operational pace, not the audit itself.

What the audit does not change

This document is the audit. It does not modify your ActiveCampaign account. Nothing has been edited, archived, or migrated. Execution happens deliberately, in the working session, with your team in the loop.

The headline, one more time

Four projects on your roadmap are blocked on the same data layer underneath your email and SMS engine. The data layer is fixable in a defined first phase. Once that lands, the four projects can run, the structural cleanup follows on its own track, and the system stops being in the way of the next thing you want to do.