TL;DR
Operator Hypotheses tests whether early-stage execution patterns are predictable from public signals. I pick companies at inflection points, predict their next fork, and check back 12 months later to see if I was right.
The Core Idea
Most founders fail the same way.
Not because markets shift or competitors outflank them, but because they make predictable execution mistakes at predictable times. The services trap. Channel conflict. Premature scaling. Hiring too early or too late.
Here’s what VCs see vs. what operators see:
A VC analyzing a company at Month 12:
$2M ARR with 20x YoY growth
Strong market positioning in growing category
Experienced founder team
→ Looks like a strong Series A candidate
An operator analyzing the same company:
Services revenue jumped from 10% to 25% in six months
Engineering team maintaining three custom code branches
Two senior engineers left in the last quarter
Job posting for “Professional Services Manager”
→ This company is 12-18 months from a down round
Most VCs analyze market size and competitive dynamics. Operators see the execution forks that determine whether companies capture the value their positioning promises.
About
Most startup analysis is retrospective: you only understand what mattered after the company succeeds or fails. I’m an operator who’s run and seen similar plays in enterprise software multiple times. I’ve dealt with channel conflicts, seen founders make premature VP hires, and watched the same patterns play out across dozens of companies.
This series flips that: I make falsifiable predictions about execution forks before they happen, then check back 12 months later to document whether I was right or wrong. Every prediction is falsifiable. Every outcome is published - right or wrong.
Why Execution Patterns Are Predictable
Execution forks repeat because startup physics are invariant. Software has 80%+ gross margins; services have 20-35%. Enterprise sales cycles last 6-12 months. Seed runway lasts 18-24 months. Those constants force the same decision windows again and again.
The constraints are universal, not company-specific. When you’re at $2M ARR burning $150K/month, turning down a $300K customization deal feels impossible. The rational choice (protect long-term positioning) conflicts with the urgent pressure (hit this quarter). Most founders choose urgency.
Add human psychology and information cascades (one company in your cohort accepts customization, another sees it work short-term, suddenly everyone’s doing it) and patterns become forecastable.
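The margin math above is worth making explicit. A quick sketch using the constants already cited (roughly 80% software gross margin, 30% for services; both are illustrative midpoints, not any specific company’s numbers):

```python
def blended_gross_margin(services_mix: float,
                         software_margin: float = 0.80,
                         services_margin: float = 0.30) -> float:
    """Revenue-weighted gross margin as services' share of revenue grows."""
    return services_mix * services_margin + (1 - services_mix) * software_margin

# Services creeping from 10% to 25% of revenue drags the blended margin:
for mix in (0.10, 0.25, 0.40):
    print(f"{mix:.0%} services -> {blended_gross_margin(mix):.1%} blended margin")
```

Every point of services mix trades high-margin software revenue for low-margin services revenue, which is why the shift from 10% to 25% in six months is a signal, not noise.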
The question this series tests: Can you predict which path a specific company will take before they reach the fork?
How It Works
1. Selection: Finding Companies at Inflection Points
I primarily use the Colossus thesis to find companies worth tracking. The “Colossus thesis,” coined by investors at Colossus, argues that AI profits accrue not to model builders but to the operators who remove the next bottleneck those models create. In other words: AI won’t make builders rich; wealth flows to whoever fixes the constraint AI just broke.
The pattern: When AI breaks one bottleneck, value flows downstream to whoever solves the next constraint.
Examples:
AI collapsed drug discovery timelines (4 years → 4 months) → clinical trials became the bottleneck → companies automating trial operations capture value
AI automated code generation → code review and testing became the bottleneck → companies building AI-native testing infrastructure capture value
This is my primary selection lens, though I may occasionally analyze companies that fit other frameworks (inflection points, non-consensus insights, structural market shifts).
What matters: The company must be positioned at a predictable constraint where execution determines whether they capture the value.
2. Pattern Recognition: What Qualifies as a “Pattern”
Not every observation is a pattern. For something to qualify, it must meet four criteria:
1. Repeatability: The fork appears across multiple companies (not a one-off event)
2. Predictable timing: It occurs at a specific stage (e.g., Month 9-18 post-seed, $1-3M ARR)
3. Observable from public signals: No insider access required - hiring announcements, partnership press releases, funding news
4. Testable within 12 months: Long enough to see execution forks play out (most appear Month 9-18), short enough to maintain falsifiability and publish results while they’re relevant
If it doesn’t meet all four, it’s an anecdote, not a pattern.
Pattern examples:
The services trap (Month 9-16): Major customer requests customization; accepting it tanks margins and exit multiples
Channel conflict (Month 18-24): Early partnerships turn hostile when you build direct sales capability
The premature VP hire (Month 10-14): Hiring a VP of Sales before a repeatable playbook exists creates misaligned incentives
Each analysis identifies when a company is approaching one of these forks and predicts which path they’ll take.
3. Public Signals Only
Everything in this series comes from publicly available information:
Company blogs, press releases, funding announcements
LinkedIn hiring patterns and job postings
Public financial filings (S-1s when available)
Industry benchmark reports
This is both constraint and value:
I can’t see inside the company - no board decks, unit economics, or pipeline data. But this forces stronger pattern recognition. If execution forks are predictable from external signals alone, the patterns are truly structural, not just insider information repackaged.
More importantly: anyone can verify my analysis. I can’t claim access I don’t have.
4. Falsifiable Predictions & The Accuracy Scoreboard
I make specific, testable predictions with dates.
Not: “They’ll face challenges scaling.”
But: “By Month 15 (January 2026), they’ll receive a major customer request for custom integration. If services revenue exceeds 20% by Month 18 (April 2026), Series A will value them at 6-10x ARR instead of 12-18x.”
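To see why that multiple gap matters, run the arithmetic on the example’s own numbers ($2M ARR; the multiple bands are the ones stated in the prediction, used purely for illustration):

```python
def implied_valuation(arr_musd: float, multiple_band: tuple) -> tuple:
    """Valuation range ($M) implied by an ARR multiple band."""
    low, high = multiple_band
    return (arr_musd * low, arr_musd * high)

ARR = 2.0  # $2M ARR, as in the example prediction

clean = implied_valuation(ARR, (12, 18))           # software-margin story
services_heavy = implied_valuation(ARR, (6, 10))   # services revenue >20%

print(f"Clean software story:  ${clean[0]:.0f}M-${clean[1]:.0f}M")
print(f"Services-heavy story:  ${services_heavy[0]:.0f}M-${services_heavy[1]:.0f}M")
```

Same ARR, same growth, and the headline valuation range roughly halves. That is the concrete cost of the services trap.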
Every prediction includes:
Specific timeline (Month X, Date Y)
Observable signals (gross margins, revenue mix, hiring announcements)
Clear success/failure criteria
Check-back date
Then I actually check back. If I was wrong, I document why.
After 3-4 patterns, I’ll publish an Accuracy Scoreboard tracking every prediction.
Most analysts hide their misses. I’ll document mine. If patterns are real, accuracy should be high (70%+). If I’m consistently wrong, either the patterns don’t exist or I’m identifying them incorrectly. The data will show it.
The Framework: Five Parts
Each pattern analysis follows this structure to move from thesis → execution → prediction:
1. The Selection (Why This Company) - Explains why I picked this company (usually the Colossus thesis) and establishes the investment thesis: why this company could capture significant value.
2. The Pattern (What They’re About to Face) - Describes the execution fork analytically with data from comparable companies. When does it appear? What triggers it? Why do most companies take Path A?
3. The Moment (What It Feels Like) - Narrative version of the fork. The request that sounds reasonable, the pressure from sales, the “obvious” choice that’s actually a trap.
4. The Fork (Path A vs. Path B) - Specific predictions with timelines. Path A (what most do) leads to outcome X. Path B (what works) leads to outcome Y. Both backed by data.
5. The Test (How to Tell) - Observable signals, key metrics, milestones that indicate which path they’re on. When we check back and how we know if the prediction was right or wrong.
What This Isn’t
Not investment advice or consulting. I’m not a VC. I have no relationship with these companies. Don’t invest based on anything here - this is pattern research, not recommendations.
Not comprehensive due diligence. I can only see what’s publicly available. I can’t verify unit economics, pipeline health, team dynamics, or dozens of other factors that matter for actual investment decisions.
Not content for content’s sake. I publish when I find a company worth tracking - ideally at least monthly. Quality over cadence. Every claim is sourced, every prediction is falsifiable.
What Success Looks Like
Success means three things:
Founders recognize patterns early and make better decisions (“I read your services trap analysis. When the customization request came at Month 14, we said no.”)
Investors use the framework in diligence (“What pattern is this company approaching? Which path are they on? What signals would tell us?”)
My predictions hold up 70%+ over time. That’s the threshold where pattern recognition becomes useful rather than luck; the accuracy scoreboard will make it obvious whether I clear it.
Do This Next 👉 Browse the Archive to see this framework applied to real companies
Subscribe to track whether these predictions hold and what that reveals about how startups really win or lose.
This is Pattern #0 of the Operator Hypotheses series. Methodology will evolve as I learn what works.
