The biggest mistake in crypto SEO retainers is starting implementation in week 1. Founders push for it because they want progress they can see; agencies oblige because it makes the relationship feel productive. The result is two months of work that has to be partially redone in months 3–4 because the foundation underneath was wrong.
We ship discovery first, audit second, implementation third. The 90-day framework below is what we run on every Crypto SEO retainer. It is opinionated, the timing is firm, and it is the same on every engagement, so clients comparing results across engagements know we ran the same play.
Week 1: discovery
The first week is uncomfortable for founders because nothing visible ships. The deliverable is a 25–40 page discovery document that captures everything we’ll need for the rest of the engagement.
ICP and use cases. Who is the buyer, what are the use cases that matter, what’s the buyer LTV, what’s the typical sales cycle. We don’t accept “everyone” as an answer; the document forces specificity.
Query universe. 50–150 commercial queries selected from three sources: keyword research tools (Ahrefs + Semrush cross-validated), GSC historical data (if available), and prompt-style queries that AI tools answer in your category. Each query is tagged with intent (commercial, informational, transactional, comparative), volume estimate, AI-citation potential estimate, and competitive density.
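As a sketch, each entry in the query universe can be captured as a simple record. Field names and values here are illustrative, not a fixed schema from the framework:

```python
from dataclasses import dataclass

# Hypothetical record shape for one entry in the discovery query universe.
@dataclass
class Query:
    text: str
    intent: str                 # "commercial" | "informational" | "transactional" | "comparative"
    monthly_volume: int         # cross-validated Ahrefs/Semrush estimate
    ai_citation_potential: str  # "high" | "medium" | "low"
    competitive_density: float  # 0.0 (open SERP) to 1.0 (saturated)

universe = [
    Query("crypto custody provider", "commercial", 1900, "high", 0.7),
    Query("what is mica regulation", "informational", 5400, "high", 0.4),
]

# One possible first cut: commercial intent with high citation potential.
priority = [q for q in universe
            if q.intent == "commercial" and q.ai_citation_potential == "high"]
```

Tagging every query up front is what makes the later prioritisation calls (which pages to restructure, which topics to write first) mechanical rather than debatable.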
Competitive teardown. 3–5 competitors mapped on: site structure, content depth, schema implementation, link profile, AI-citation rate on shared queries, and approximate organic traffic. We use this to identify gaps where we can win and patterns where the SERP is already saturated.
Regulatory perimeter. For MiCA-bound, FCA-bound, or otherwise regulated clients, we document what claims can and cannot appear in marketing copy, jurisdiction-specific copy variants required, and the compliance review workflow. This document gates every editorial decision for the duration of the retainer.
Baseline metrics. GSC, GA4 or equivalent, Ahrefs/Semrush historical, plus prompt-monitoring snapshot across 5 AI platforms on the discovery query universe. Baseline is what every future report compares against.
The discovery document gets reviewed with the client at end of week 1. Any disagreements get resolved before week 2 begins. Implementation hasn’t started.
Weeks 2–4: technical and AEO audit
Inna leads this phase. The deliverable is a 35–60 page audit report plus a prioritised punch-list of fixes.
JavaScript rendering audit. We crawl the site as Googlebot, GPTBot, ClaudeBot and PerplexityBot in parallel. Compare what each rendering bot sees against the user-facing site. Identify pages that fail to render for any bot and the failure mode (timeout, hidden content, JS dependency, etc.).
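A minimal sketch of the comparison step, assuming the per-bot fetches have already happened upstream (the fetching itself typically needs a headless browser configured per user-agent; the coverage heuristic below is our illustration, not a standard metric):

```python
# Given the text each crawler user-agent actually received for one URL,
# flag bots whose view covers less of the user-facing render than `threshold`.
def rendering_gaps(user_view: str, bot_views: dict, threshold: float = 0.8) -> dict:
    """Return {bot: coverage} for bots seeing under `threshold` of the user view."""
    user_tokens = set(user_view.split())
    gaps = {}
    for bot, view in bot_views.items():
        coverage = len(user_tokens & set(view.split())) / max(len(user_tokens), 1)
        if coverage < threshold:
            gaps[bot] = round(coverage, 2)
    return gaps

views = {
    "Googlebot": "pricing features custody audits team",
    "GPTBot": "pricing",  # JS-dependent content never rendered for this bot
}
print(rendering_gaps("pricing features custody audits team", views))  # {'GPTBot': 0.2}
```

A page that passes for Googlebot but fails for GPTBot or PerplexityBot is invisible to AI answers even when it ranks, which is why the audit checks all four bots rather than Googlebot alone.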
Crawl budget analysis. GSC crawl stats over 90 days, identifying pages that consume crawl budget without ranking value, and pages that should be crawled more frequently than they are. Output: a prioritised list of canonicals, redirects, robots.txt edits, and noindex tags.
Schema graph design. Site-wide schema architecture as a JSON-LD graph: top-level Organization or ProfessionalService, Person markup for named experts, Article schema for content, FAQPage on relevant blocks, BreadcrumbList per page, Service per offering. Document includes the current schema state, the target state, and the gap.
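A compressed example of what a target-state JSON-LD graph looks like, assembled here in Python for illustration (entity names and URLs are placeholders; real graphs are generated per page):

```python
import json

# Sketch of a site-wide schema graph: entities declared once with @id,
# then referenced from other nodes instead of being duplicated.
graph = {
    "@context": "https://schema.org",
    "@graph": [
        {"@type": "Organization", "@id": "https://example.com/#org",
         "name": "Example Exchange", "url": "https://example.com/"},
        {"@type": "Person", "@id": "https://example.com/#jane",
         "name": "Jane Doe", "worksFor": {"@id": "https://example.com/#org"}},
        {"@type": "Article", "headline": "What is MiCA?",
         "author": {"@id": "https://example.com/#jane"},
         "publisher": {"@id": "https://example.com/#org"}},
        {"@type": "BreadcrumbList", "itemListElement": [
            {"@type": "ListItem", "position": 1, "name": "Home",
             "item": "https://example.com/"}]},
    ],
}
print(json.dumps(graph, indent=2))
```

The `@id` cross-references are the point of the graph approach: the named expert and the organisation exist once, and every article links back to them rather than restating them.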
On-page restructure plan. 8–15 priority pages selected based on discovery query universe and current ranking proximity. Each page gets a documented restructure plan: H1 disambiguator, Quick Facts table content, Q-format H2s with proposed direct-answer first sentences, named-expert byline assignment.
llms.txt and AI-crawler robots. We design or rebuild the llms.txt and the AI-crawler-specific entries in robots.txt. AI-crawler access is permissive by default for our clients; the only exclusions are private documentation and admin pages.
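As an illustration, the permissive-by-default robots.txt section and an llms.txt excerpt might look like this. Paths, names, and copy are placeholders; the bot product tokens (GPTBot, ClaudeBot, PerplexityBot) are the publicly documented ones:

```
# robots.txt — AI-crawler section (permissive by default)
User-agent: GPTBot
User-agent: ClaudeBot
User-agent: PerplexityBot
Disallow: /admin/
Disallow: /internal-docs/
Allow: /

# --- llms.txt (served at /llms.txt, plain markdown) — excerpt ---
# Example Exchange
> Regulated crypto custody for EU institutions.
## Key pages
- [Custody service](https://example.com/custody): MiCA-scoped custody offering
```

Grouping the three user-agents over one rule block keeps the AI-crawler policy in a single place, which matters when the policy has to survive compliance review.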
Core Web Vitals. LCP, INP, CLS measured on representative pages, with specific fix recommendations if any are out of bounds. CWV is rarely the bottleneck on crypto sites we audit (most are tolerable), but it gets checked for completeness.
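The out-of-bounds check itself is trivial once the field data is in. The thresholds below are Google's documented "good" boundaries; the measured values are placeholders:

```python
# CWV check against Google's documented "good" thresholds:
# LCP <= 2.5 s, INP <= 200 ms, CLS <= 0.1.
GOOD = {"lcp_s": 2.5, "inp_ms": 200, "cls": 0.1}
measured = {"lcp_s": 2.1, "inp_ms": 340, "cls": 0.05}  # placeholder page data

out_of_bounds = [metric for metric in GOOD if measured[metric] > GOOD[metric]]
print(out_of_bounds)  # a page failing INP only -> ['inp_ms']
```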
The audit report is delivered end of week 4. The punch-list is grouped into “ship in weeks 5–8”, “ship in weeks 9–12”, and “addressed in months 4–6”.
Weeks 3–5: donor profile audit (parallel)
Daniil runs this phase in parallel with the technical audit because the deliverables are independent.
Existing link profile classification. Pull the existing inbound links from Ahrefs, Majestic, and Google Search Console links report. Cross-validate the lists. Classify each link by: relevance, donor quality, anchor type, AI-fetchability, and risk score.
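The cross-validation step can be sketched as a set operation. The two-source confirmation rule shown here is one reasonable convention, stated as an assumption rather than a requirement of the framework:

```python
# Inbound-link exports from the three sources (URLs are invented examples).
ahrefs   = {"https://a.example/post", "https://b.example/guide", "https://c.example/news"}
majestic = {"https://a.example/post", "https://b.example/guide"}
gsc      = {"https://a.example/post", "https://d.example/forum"}

sources = [ahrefs, majestic, gsc]
all_links = set().union(*sources)

# Links seen by 2+ sources go straight to classification;
# single-source links get manually re-verified first.
confirmed = {link for link in all_links if sum(link in s for s in sources) >= 2}
to_verify = all_links - confirmed
print(sorted(confirmed))
```

No single tool sees the whole profile, which is why the three exports get merged before any link is classified or disavowed.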
Toxic-link disavow. Following the conservative 2024 Google disavow guidance, identify the 8–15% of links that look like they could trigger a manual action (paid sponsored without disclosure, link-network footprints, hacked-site placements, foreign-language link farms). Build a disavow file. Submit through GSC after client sign-off.
Net-new donor allowlist. Starting from our master vetted list of ~340 publications, filter to the 100–250 most relevant for the client’s niche and jurisdictional focus. Each donor on the shortlist has its DR, traffic, indexation status, and AI-fetchability verified within the last 30 days.
Anchor distribution audit. Existing inbound anchor distribution mapped against the recommended 35–45% branded, 20–25% naked URL, 15–25% long-tail, 6–9% partial-match, 2–4% exact-match for crypto SERPs. Gap analysis identifies the rebalancing pattern for new placements.
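A sketch of the gap analysis, using the midpoint of each recommended band as a point target (an assumption for illustration; the framework specifies ranges, not point targets):

```python
# Recommended anchor mix for crypto SERPs, taken as band midpoints.
recommended = {"branded": 0.40, "naked_url": 0.225, "long_tail": 0.20,
               "partial_match": 0.075, "exact_match": 0.03}
# Placeholder current distribution from the existing inbound profile.
current = {"branded": 0.55, "naked_url": 0.10, "long_tail": 0.25,
           "partial_match": 0.06, "exact_match": 0.04}

# Positive gap = anchor type under-represented; new placements lean there.
gap = {k: round(recommended[k] - current[k], 3) for k in recommended}
rebalance = sorted(gap, key=gap.get, reverse=True)
print(rebalance[0])  # most under-represented anchor type first
```

The output orders the rebalancing pattern: new placements are weighted toward the most under-represented anchor types until the profile sits inside the recommended bands.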
Outreach asset prep. Write the templated outreach emails, the editor-pitch one-pagers, the credibility-pack for guest-post pitches. Each donor category has a different pitch angle; the asset prep takes 10–15 hours of senior time.
End of week 5: donor allowlist signed off, disavow submitted, outreach assets ready. First placement can ship as early as week 6.
Weeks 5–8: implementation phase 1
This is where visible work starts. Three workstreams in parallel.
Priority page restructure. 4–6 pages restructured under the GS Playbook. Each page gets H1 disambiguator, Quick Facts, Q-format H2s with direct-answer first sentences, named-expert byline, FAQPage schema if applicable. Average 8–14 hours per page including research, drafting, editorial pass, schema implementation, and compliance review.
Schema deployment. The schema graph designed in weeks 2–4 ships across all priority pages. Validation has to pass 100% on Schema.org Validator and Google Rich Results Test before deployment. We typically iterate 2–3 times before hitting full pass.
First donor placements. 1–2 placements ship in weeks 6–7, ramping to 3–4 in week 8. The donor cycle (pitch, brief, draft, edit, publish) is 5–9 days, so the first placements that started outreach in week 6 land around week 8.
End of week 8 status: technical foundation in place, 4–6 priority pages restructured, schema validated and deployed, first donor placements live, weekly position tracker running.
Weeks 8–12: production cadence ramp
The retainer transitions from project-style implementation to ongoing production cadence. Four streams running.
Content production. First content cohort ships — typically 4–6 long-form pieces published in weeks 9–12. Topics from the discovery query universe, prioritised by citation potential.
Continued page restructure. Another 3–6 priority pages restructured by week 12. By end of month 3, 8–12 priority pages are on the new playbook.
Donor cadence at full rhythm. 6–12 placements per month from month 3 onwards. Mix per the donor profile audit.
Tracking and reporting. Weekly position tracker reports start at the end of week 8. The first monthly ROI report ships at end of month 3, comparing baseline (from discovery) to current state across 8–12 leading indicators.
What does the month-3 ROI report contain?
The first proper ROI report at end of month 3 covers eight standard sections.
Position tracker delta. Baseline-to-current movement on the discovery query universe, segmented by intent (commercial vs informational) and current rank bucket (top 3, top 10, top 30, top 100, not in top 100).
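The bucketing logic behind the rank segmentation is simple enough to sketch directly (query names and ranks below are invented):

```python
from typing import Optional

# Map a tracked rank into the report's rank buckets.
def bucket(rank: Optional[int]) -> str:
    if rank is None or rank > 100:
        return "not in top 100"
    if rank <= 3:
        return "top 3"
    if rank <= 10:
        return "top 10"
    if rank <= 30:
        return "top 30"
    return "top 100"

baseline = {"crypto custody provider": 24, "mica compliance guide": None}
current  = {"crypto custody provider": 8,  "mica compliance guide": 67}

# Baseline-to-current bucket movement per query.
delta = {q: (bucket(baseline[q]), bucket(current[q])) for q in baseline}
print(delta["crypto custody provider"])  # ('top 30', 'top 10')
```

Reporting movement between buckets, rather than raw average position, keeps the delta readable when queries enter or leave the top 100 entirely.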
Indexed page count + freshness. Baseline-to-current change in indexed pages, plus average freshness (how recently each indexed page was crawled).
Schema validation status. Confirmation that all deployed schema still validates 100%, with the change in validation issue count from baseline.
Donor placements log. All placements made in the period, with donor URL, DR, organic traffic, anchor used, indexation status, AI-fetch verification, and brief content summary.
AI citation rate. Citation count across the prompt-monitor universe, broken down by platform (ChatGPT, Perplexity, Gemini, Claude, Google AI Overviews) and by query type.
Branded search trend. GSC branded-query growth across the three brand-query buckets, week-over-week.
Self-reported source. Coded summary of “how did you find us” responses on lead-intake forms during the period (if the client has volume).
Outlook for months 4–9. Quarterly content plan, donor placement target, scheduled audit refreshes, regulatory review cycle.
The report is delivered as a 15–25 page document plus a 60-minute review call with the named lead and the client’s primary contact.
What is the framework not?
The framework is opinionated and applied uniformly. It is also not the right fit for everything.
It is not the right fit for clients who need pipeline inside 60 days. The framework is built for compounding outcomes over 6–12 months. If you need fast results, a paid acquisition push (with the limitations covered in our advertising piece) is the right starting point, not SEO.
It is not the right fit for clients without a clear product or regulatory perimeter. We need both to write content that compounds. If the product is still in flux or the regulatory situation is unclear, the discovery week will reveal it and we’ll recommend you wait.
It is not the right fit for clients allergic to the discovery week. If the founder pushes back on spending week 1 on documentation rather than implementation, the engagement will go badly. We raise this on the discovery call precisely to surface that preference before contracts are signed.
For clients where the framework fits, the 90-day output is what gives us confidence that months 4–12 will compound. We have run it 14 times in 2024–2025; the engagement durability past 12 months correlates with how cleanly the first 90 days followed the framework. The discovery call is the place to figure out fit. Free, 30 minutes, named lead, and we will tell you whether the framework is the right play for your situation or whether something else is.