● Private / 2025 / Founder & engineer

Pentagon

GTM automation engine for an AI customer-support SaaS.

Next.js 15 · React 19 · TypeScript · Drizzle ORM · Supabase / Postgres · Inngest · Anthropic · OpenAI · Resend · Apify · Hunter.io · Upstash Redis · Sentry · BigQuery · Scalar (OpenAPI)
Sources scraped: ProductHunt + Crunchbase
Lead score range: 0–100
Rendering: RSC + Server Actions
Drift check: Custom ESLint gate

The brief

Echo is an AI customer-support copilot. Pentagon is the system that finds customers for Echo. Built so I could stop writing cold emails by hand.

Pipeline

The whole thing runs nightly on Inngest cron with manual triggers for one-offs. Each stage is a separately retryable Inngest step — failures don’t reset the pipeline, they resume it.

flowchart TB
Cron["Inngest cron<br/>nightly"] --> Scrape["Scrape<br/>ProductHunt + Crunchbase"]
Scrape --> Enrich["Enrich<br/>Hunter.io"]
Enrich --> Score["Score<br/>GPT-4o with 12-criterion rubric"]
Score --> Filter{"Score ≥ 70?"}
Filter -->|yes| Generate["Generate<br/>personalised cold email"]
Filter -->|no| Archive[("Archive")]
Generate --> Send["Send via Resend"]
Send --> Track["Track<br/>opens / replies"]
Track --> BQ[("BigQuery")]
Track --> Sentry["Sentry observability"]

What it does

  • Scrapes ProductHunt and Crunchbase through Apify for fresh launches and funded companies
  • Enriches with Hunter.io for verified emails and contact metadata
  • Scores each lead 0–100 with GPT-4o over a structured rubric (signals, tech stack, ICP fit)
  • Writes personalised cold emails with a per-lead context prompt
  • Sends through Resend with a sequenced follow-up cadence
  • Tracks opens, replies, and hand-offs in a Supabase analytics layer
  • Runs nightly on Inngest cron, with manual triggers for one-offs
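The follow-up cadence is mostly date arithmetic. A sketch with a hypothetical 0/3/7-day schedule (the real intervals and helper names aren't public):

```typescript
// Hypothetical cadence: first touch on day 0, follow-ups on day 3 and day 7.
const CADENCE_DAYS = [0, 3, 7];

export function followUpDates(firstTouch: Date): Date[] {
  return CADENCE_DAYS.map((days) => {
    const d = new Date(firstTouch);
    d.setUTCDate(d.getUTCDate() + days);
    return d;
  });
}

// Only schedule the next step if the lead hasn't replied
// and the cadence isn't exhausted.
export function nextSendDate(
  firstTouch: Date,
  sentCount: number,
  replied: boolean,
): Date | null {
  if (replied || sentCount >= CADENCE_DAYS.length) return null;
  return followUpDates(firstTouch)[sentCount];
}
```

Keeping this as a pure function makes the scheduling logic trivially testable, independent of whatever queue or cron actually fires the sends.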

The rubric is the model

Lead scoring is mostly a rubric problem, not a model problem. GPT-4o with a 12-criterion structured schema agreed with my own manual scoring ~30 percentage points more often than a bare "rate this lead 0–100" prompt did.

// app/lib/lead-scoring.ts
import OpenAI from 'openai';
import { zodResponseFormat } from 'openai/helpers/zod';
import { z } from 'zod';

// Lead, serializeLead, and SCORING_RUBRIC are app-local and elided here.
const openai = new OpenAI();

const LeadScoreSchema = z.object({
  total: z.number().min(0).max(100),
  criteria: z.object({
    icpFit:           z.number().min(0).max(10),
    companyStage:     z.number().min(0).max(10),
    fundingSignal:    z.number().min(0).max(10),
    techStackOverlap: z.number().min(0).max(10),
    engineeringMaturity: z.number().min(0).max(10),
    // ...7 more criteria
  }),
  reasoning: z.string(),
});

export async function scoreLead(lead: Lead) {
  const result = await openai.chat.completions.parse({
    model: 'gpt-4o',
    response_format: zodResponseFormat(LeadScoreSchema, 'lead_score'),
    messages: [
      { role: 'system', content: SCORING_RUBRIC },
      { role: 'user', content: serializeLead(lead) },
    ],
  });
  return result.choices[0].message.parsed;
}

The drift-check, the part I’m most proud of

Pentagon has a custom CI gate I wrote called drift:check. It’s an ESLint config that uses no-restricted-globals and no-restricted-syntax to ban pattern drift in app code: direct process.env access, raw fetch outside a typed client, bypassing the Supabase SSR helpers. If a PR drifts from the chosen patterns, CI fails and names the exact rule.

This is the single most valuable thing I’ve added to a TypeScript codebase. It’s how a small codebase stays opinionated as it grows.
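A sketch of what such a gate can look like in ESLint’s flat-config format; the selectors, file globs, and messages here are illustrative, not the actual Pentagon config:

```javascript
// eslint.config.js — illustrative drift gate, not the real drift:check
export default [
  {
    files: ['app/**/*.ts', 'app/**/*.tsx'],
    rules: {
      // Force all env access through a validated config module
      'no-restricted-syntax': [
        'error',
        {
          selector:
            "MemberExpression[object.name='process'][property.name='env']",
          message:
            'Read env through the typed env module, not process.env directly.',
        },
      ],
      // Force all HTTP calls through the typed client wrapper
      'no-restricted-globals': [
        'error',
        {
          name: 'fetch',
          message: 'Use the typed API client instead of raw fetch.',
        },
      ],
    },
  },
];
```

A `drift:check` package.json script could then be as simple as running ESLint with `--max-warnings 0`, so any violation fails the PR with the offending rule in the output.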

What I learned

  • Cold email open rates collapse if your prompt mentions “AI.” I literally A/B tested this.
  • Inngest’s step.run() is a contract, not a convenience. Every step is replayable; design for that and debugging cron jobs becomes tractable. Ignore it and you’ll re-send emails on every retry.
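To make that contract concrete, here is a toy replay model (a stub, not the Inngest SDK): on a retry the function body re-runs from the top, but any step that already completed returns its memoized result instead of executing its side effect again.

```typescript
// Toy model of Inngest-style step replay. A retried function re-runs
// top to bottom, but completed steps return memoized results.
type StepRunner = { run<T>(id: string, fn: () => Promise<T>): Promise<T> };

function makeStepRunner(memo: Map<string, unknown>): StepRunner {
  return {
    async run<T>(id: string, fn: () => Promise<T>): Promise<T> {
      if (memo.has(id)) return memo.get(id) as T; // replay: skip the work
      const result = await fn();
      memo.set(id, result);
      return result;
    },
  };
}

// Every side effect lives inside a step with a stable id, so a retry
// resumes the pipeline instead of resetting it.
async function sendPipeline(step: StepRunner, outbox: string[]) {
  const leads = await step.run('fetch-leads', async () => ['acme', 'globex']);
  for (const lead of leads) {
    await step.run(`send-${lead}`, async () => {
      outbox.push(lead); // the dangerous side effect: sending an email
      return true;
    });
  }
}
```

Run sendPipeline twice against the same memo and the outbox still holds exactly one entry per lead; that idempotency across retries is the property the real step.run gives you.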

Private repo. Pipeline diagrams and the drift-check config are reviewable under NDA.

Want this for your business?

Let's discuss your AI build.

I do strategy calls, architecture audits, and full pilot builds. Same depth you just read about — for your product.