You know the grind. Open Twitter. Scroll the timeline. Find a tweet worth replying to. Context-switch to figure out what to say. Type something. Second-guess it. Delete half. Post it. Repeat 30 times a day, every day, until the algorithm notices you exist.

The "reply guy" strategy is still one of the highest-ROI growth tactics on X. But the execution is soul-crushing. Most of it is scanning — reading 100 tweets to find the 5 where you can actually add value. That's the part that kills your momentum and eats an hour before you've even written anything.

What if you could do all of that from inside your AI assistant? No tab switching. No doom-scrolling. Just: "Find the best tweets to reply to right now" — and get scored opportunities with draft replies, without leaving your editor.

That's exactly what MCP lets you build. And in this guide, we'll wire it up to ShipPost's API so your AI assistant becomes a full-blown Twitter growth copilot.


What Is MCP (And Why Should You Care)?

MCP stands for Model Context Protocol. It's an open standard created by Anthropic that lets AI assistants — Claude, Cursor, Windsurf, etc. — use external tools in a structured way.

Think of it as a USB port for AI. Before MCP, connecting your AI to external services meant fragile prompt hacking, custom plugins per platform, or copy-pasting between apps like a caveman. MCP standardizes all of that into a clean client-server architecture: the AI client discovers what tools a server offers, and the server executes tool calls on the client's behalf.

The result: your AI assistant can discover what tools are available, understand their inputs, call them, and use the results — all natively. No copy-pasting. No browser tabs. Just conversation.
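Under the hood, that discovery step is plain JSON-RPC 2.0. Here's a sketch of the handshake as TypeScript object literals (field names follow the MCP spec; the tool shown is the one we build later in this guide):

```typescript
// What the client sends to discover tools (JSON-RPC 2.0, per the MCP spec)
const listRequest = { jsonrpc: "2.0", id: 1, method: "tools/list" };

// The shape of the server's answer: every tool advertises a name,
// a description, and a JSON Schema describing its inputs
const listResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    tools: [
      {
        name: "find_opportunities",
        description: "Scan my timeline and find the best tweets to reply to.",
        inputSchema: {
          type: "object",
          properties: { niche: { type: "string" } },
        },
      },
    ],
  },
};
```

You never write these messages by hand — the MCP SDK generates and parses them for you. But knowing the shape helps when you're debugging a client that won't see your tools.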

Ship Tip: MCP is especially powerful for repetitive workflows with clear inputs/outputs. Twitter growth (scan → score → draft → post) is a textbook fit.


How an MCP Server Works

An MCP server is a lightweight process that exposes tools to AI clients. When Claude connects to your server, it sees a menu of available tools — each with a description and input schema. You ask for something in plain English, the AI decides which tool to call, and the server executes it.

There are two transport modes: STDIO, where the client spawns your server as a local process and exchanges messages over stdin/stdout, and HTTP, where the server runs remotely and clients connect to it over the network.

For this guide, we're building a STDIO server that runs locally and calls the ShipPost API behind the scenes. Your machine runs the thin MCP layer; ShipPost handles the heavy lifting (timeline scanning, AI scoring, voice-matched drafting).


What We're Building

Our MCP server will expose three core growth tools:

Tool                  What It Does
find_opportunities    Scans your timeline, scores tweets by reply-worthiness, suggests angles
draft_reply           Drafts a reply to a specific tweet, matched to your voice profile
draft_tweet           Generates original tweet variations on any topic, in any style

By the end, you'll be able to say "Find tweets about AI SaaS worth replying to" in Claude, and get back scored opportunities with draft replies — all in one conversation.

Ship Tip: ShipPost's API also supports draft_thread, analyze_account, and get_performance endpoints. We'll cover the core three here, then show you how to add the rest yourself — the pattern is identical.


Step 1: Project Setup

First, scaffold a new Node.js project and install the MCP SDK:

mkdir shippost-mcp && cd shippost-mcp
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node

You'll also need Node.js 18 or newer (the server relies on the built-in fetch) and a ShipPost API key from shippost.ai.

Ship Tip: API rate limits scale with your plan — Free gets 50 requests/day, Creator gets 500, Pro gets 2,000. The free tier is plenty for testing.
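Because Step 6 builds with npx tsc into dist/, you'll also want a tsconfig.json at the project root. Here's a minimal one that matches the paths used in this guide:

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "dist",
    "strict": true
  },
  "include": ["src"]
}
```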

Now create your entry file. The entire server skeleton is just a few lines:

// src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";                   // Schema validation for tool inputs

const SHIPPOST_BASE_URL = "https://shippost.ai/api/mcp";
const API_KEY = process.env.SHIPPOST_API_KEY || "";

// Initialize the MCP server with a name and version
const server = new McpServer({
  name: "shippost",
  version: "1.0.0",
});

Three imports, two constants, one server instance. That's your foundation.


Step 2: Write the API Helper

Every tool in our server calls the ShipPost API with Bearer token auth. Rather than repeat that boilerplate in each tool, let's write it once:

async function shippostAPI(endpoint: string, body: Record<string, unknown>): Promise<string> {
  if (!API_KEY) {
    return JSON.stringify({
      error: "No SHIPPOST_API_KEY set.",
      fix: "Add SHIPPOST_API_KEY to your MCP server env config.",
    });
  }

  const response = await fetch(`${SHIPPOST_BASE_URL}${endpoint}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,   // Bearer token auth
    },
    body: JSON.stringify(body),
  });

  if (!response.ok) {
    const detail = await response.text();
    return JSON.stringify({ error: `API error ${response.status}`, detail: detail.slice(0, 300) });
  }
  return await response.text();
}

This handles three scenarios: a missing API key (with a pointer to the fix), HTTP errors (with truncated detail so the AI can explain what went wrong), and success. Clean and reusable.

What you should see: If you were to call this function directly with a valid key, you'd get back a JSON string from ShipPost. If the key is missing, you'd get the helpful error object instead. We'll test the full flow in Step 6.


Step 3: Build the find_opportunities Tool

This is the star of the show. Instead of scrolling Twitter for 30 minutes, you ask your AI assistant to find what's worth replying to — and it comes back with scored, ranked opportunities.

Here's the tool registration:

server.tool(
  "find_opportunities",                     // Tool name (what the AI calls)
  "Scan my timeline and find the best "     // Description (helps the AI
  + "tweets to reply to right now. Returns " // decide WHEN to use this tool)
  + "scored opportunities with suggested angles.",
  {
    // Input schema — Zod validates these automatically
    niche: z.string().optional()
      .describe('Focus area, e.g. "AI SaaS", "indie hacking"'),
    max_results: z.number().min(1).max(25).default(10)
      .describe("How many opportunities to return"),
  },
  async ({ niche, max_results }) => {
    const result = await shippostAPI("/find-opportunities", {
      niche: niche || "",
      max_results: max_results ?? 10,
    });
    return { content: [{ type: "text", text: result }] };
  }
);

Let's break down the anatomy of a tool registration, because you'll repeat this pattern for every tool:

  1. Name — A slug the AI uses internally. Keep it short and descriptive.
  2. Description — This is surprisingly important. The AI reads this to decide when to invoke the tool. Be specific about what it returns, not just what it does.
  3. Input schema — Zod schemas that the MCP SDK validates automatically. Optional fields get sensible defaults. The .describe() calls help the AI fill in parameters from natural language.
  4. Handler — The async function that actually runs. Calls your API helper, wraps the result in the MCP response format.

Ship Tip: The description is your tool's sales pitch to the AI. Vague descriptions like "does Twitter stuff" mean the AI won't know when to call it. Be precise: what does it scan? What does it return? How should results be used?

What ShipPost does behind the scenes: When you call find_opportunities, the API pulls your home timeline via Twitter's API, filters out replies/retweets/stale content, then runs each tweet through a scoring pipeline that weighs relevance to your niche, engagement potential, and how well-suited the tweet is for a reply. You get back tweets above the threshold, each with a score, suggested reply angle, and topic tags.
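To make that pipeline concrete, here's a toy version of the scoring idea in TypeScript. The weights, field names, and formula below are invented for this sketch; ShipPost's actual algorithm is server-side and more sophisticated:

```typescript
// Toy reply-worthiness score: each signal is normalized to 0..1,
// then combined with hand-picked weights. Illustrative only, not
// ShipPost's real scoring model.
interface TweetSignals {
  relevance: number;           // 0..1, topical match to your niche
  engagementPotential: number; // 0..1, predicted reach of a reply
  replyFit: number;            // 0..1, how answerable the tweet is
}

function scoreOpportunity(s: TweetSignals): number {
  const raw =
    0.4 * s.relevance + 0.35 * s.engagementPotential + 0.25 * s.replyFit;
  return Math.round(raw * 100); // 0..100, higher = more worth replying to
}
```

The point of a threshold on a combined score is that a tweet has to be good on several axes at once — a viral tweet in the wrong niche, or a perfect-niche tweet with no room to add value, both fall below the cutoff.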


Step 4: Build the draft_reply Tool

Once you've found a tweet worth replying to, this tool drafts a reply that actually sounds like you — not like generic AI slop.

server.tool(
  "draft_reply",
  "Draft a reply to a specific tweet in my voice. "
  + "Returns a draft for review — does NOT post automatically.",
  {
    tweet_id: z.string().describe("The tweet ID to reply to"),
    tweet_text: z.string().describe("Full text of the target tweet"),
    tweet_author: z.string().describe("Author's username (no @ sign)"),
  },
  async ({ tweet_id, tweet_text, tweet_author }) => {
    const result = await shippostAPI("/draft-reply", {
      tweet_id, tweet_text, tweet_author,
    });
    return { content: [{ type: "text", text: result }] };
  }
);

See the pattern? Same four-part structure as find_opportunities — name, description, schema, handler. The only things that change are the inputs and the endpoint.

The response includes the draft text, a confidence note on voice matching, and the reply angle it chose. You review it, tweak if needed, then post. ShipPost has guardrails baked into the API — daily caps, minimum gaps between replies to the same author, per-thread limits — so you can be aggressive without looking like a bot.
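As a mental model for those guardrails, here's a small illustrative check. The specific limits and field names are made up for this sketch; ShipPost enforces its own thresholds server-side:

```typescript
// Illustrative anti-spam guard. The cap (30/day) and gap (4 hours per
// author) are invented numbers, not ShipPost's real limits.
interface ReplyLog {
  author: string;
  sentAt: number; // epoch milliseconds
}

function canReply(log: ReplyLog[], author: string, now: number): boolean {
  const DAY = 24 * 60 * 60 * 1000;
  const today = log.filter((r) => now - r.sentAt < DAY);

  if (today.length >= 30) return false; // daily cap reached

  const lastToAuthor = today
    .filter((r) => r.author === author)
    .sort((a, b) => b.sentAt - a.sentAt)[0];
  if (lastToAuthor && now - lastToAuthor.sentAt < 4 * 60 * 60 * 1000) {
    return false; // too soon to reply to the same author again
  }
  return true;
}
```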

Ship Tip: The voice profile is what makes this work. ShipPost learns your writing style from examples you provide during onboarding. Without it, you'd get replies that sound like every other AI-generated comment. With it, your drafts have your actual cadence and vocabulary.


Step 5: Build the draft_tweet Tool

Not everything is replies. Sometimes you want to post original bangers. This tool generates variations on any topic, in any style you pick:

server.tool(
  "draft_tweet",
  "Generate original tweet variations on a topic in my voice. "
  + "Returns multiple drafts to choose from.",
  {
    topic: z.string().describe("What the tweet should be about"),
    style: z.enum([
      "informative", "engaging", "controversial",
      "storytelling", "promotional"
    ]).default("engaging").describe("Tone and style"),
  },
  async ({ topic, style }) => {
    const result = await shippostAPI("/draft-tweet", {
      topic, style: style ?? "engaging",
    });
    return { content: [{ type: "text", text: result }] };
  }
);

You get back 3 variations. Pick the one that hits, edit if you want, ship it. The controversial style is particularly fun — it generates takes that are spicy enough to spark engagement without torching your reputation.

What you should see: Three tweet drafts, each with a slightly different angle on the same topic. One might be a hot take, another a personal anecdote, another a data-driven observation. Your voice, three flavors.


Step 6: Connect the Transport and Run It

Last piece of code. Wire the server to the STDIO transport so MCP clients can spawn it:

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("ShipPost MCP server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error:", error);
  process.exit(1);
});

Build and verify it runs:

npx tsc && node dist/index.js

What you should see: The message ShipPost MCP server running on stdio printed to stderr, and the process waiting for input. That means it's working. Kill it with Ctrl+C — the MCP client will manage the process lifecycle from here.

Ship Tip: We log to stderr intentionally. STDIO transport uses stdout for the MCP protocol, so any console.log calls would corrupt the message stream. Always use console.error for debug output in STDIO servers.
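A tiny helper makes the stderr convention hard to violate. Here's a sketch (the prefix is just a suggestion):

```typescript
// All diagnostics go through one function that writes to stderr,
// so stdout stays clean for MCP protocol frames.
function formatDebug(...parts: unknown[]): string {
  return ["[shippost-mcp]", ...parts.map(String)].join(" ");
}

function debug(...parts: unknown[]): void {
  console.error(formatDebug(...parts));
}
```

Sprinkle debug() calls inside your tool handlers and they'll show up in the MCP client's server logs without corrupting the stream.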


Step 7: Configure Your MCP Client

Now tell your AI assistant where to find the server. For Claude Desktop, add this to your config file (~/Library/Application Support/Claude/claude_desktop_config.json on Mac):

{
  "mcpServers": {
    "shippost": {
      "command": "node",
      "args": ["/absolute/path/to/shippost-mcp/dist/index.js"],
      "env": {
        "SHIPPOST_API_KEY": "sp_your_api_key_here"
      }
    }
  }
}

Replace the path with wherever you built the project, and drop in your actual API key. The key gets passed as an environment variable to the server process — it never touches your prompts or conversation history.

Restart Claude Desktop. You should see a hammer icon in the chat input area — that means MCP tools are loaded.

Ship Tip: For Cursor, the config lives at ~/.cursor/mcp.json with the same format. For Claude Code (CLI), use ~/.claude.json or pass --mcp-config.


The Workflow in Action

With the server running, here's what your daily Twitter growth routine looks like:

You: "Find the best tweets to reply to. Focus on AI tools and indie hacking."

Claude calls find_opportunities, gets back 10 scored tweets, and presents them ranked by opportunity score. Each one shows the tweet text, author, score, and a suggested reply angle.

You: "Draft replies to the top 3."

Claude calls draft_reply three times, passing in each tweet's details. You get back three voice-matched drafts, ready to review.

You: "The second one is too long. Make it punchier."

Claude refines it conversationally — no tool call needed, just editing.

You: "Now write me an original tweet about why most SaaS founders underestimate distribution."

Claude calls draft_tweet, and you get three variations to pick from.

What used to be 45 minutes of scrolling and context-switching becomes a 5-minute conversation. You're still making the judgment calls — which tweets to reply to, which drafts sound right, what to actually post. The AI handles the grunt work.


Adding More Tools

We built three tools, but ShipPost's API has more endpoints ready to wire up: draft_thread, analyze_account, and get_performance. The pattern is identical every time, so each one is a single additional server.tool() registration.

You could even build combo tools that chain multiple API calls. Imagine a "morning growth routine" that finds opportunities, drafts replies to the top 5, and pulls your weekly performance stats — all in one tool invocation.
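Here's a sketch of that combo idea as a pure helper that chains two endpoints, with the API function injected so it's testable without a live server. The response field names (tweets, id, text, author) are assumptions about ShipPost's payload shape, not documented fields:

```typescript
type Api = (endpoint: string, body: Record<string, unknown>) => Promise<string>;

// Hypothetical "morning routine": find opportunities, then draft a
// reply to each of the top N. Parsed field names are assumed.
async function morningRoutine(
  api: Api,
  niche: string,
  topN: number
): Promise<string[]> {
  const found = JSON.parse(
    await api("/find-opportunities", { niche, max_results: topN })
  );
  const drafts: string[] = [];
  for (const t of found.tweets ?? []) {
    drafts.push(
      await api("/draft-reply", {
        tweet_id: t.id,
        tweet_text: t.text,
        tweet_author: t.author,
      })
    );
  }
  return drafts;
}
```

Register a function like this as one more server.tool() (passing shippostAPI as the api argument) and the AI can run your whole routine from a single sentence.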

Ship Tip: Start with the three core tools. Get comfortable with the workflow. Then add performance tracking so you can see which reply strategies are actually working — and feed that data back into your approach.


Ship It

MCP turns your AI assistant from a chatbot into a tool-using agent. For Twitter growth, that combination is powerful: the AI handles scanning and drafting (the parts that eat your time), and you handle judgment and taste (the parts that actually matter).

ShipPost's voice profile system means drafts sound like you, not like ChatGPT doing a growth marketer impression. The scoring pipeline means you're replying to the right tweets, not just the loudest ones. And the built-in guardrails — daily caps, reply gaps, per-author limits — mean you can push the pace without tripping spam filters.

Three files, seven steps, and your AI assistant has Twitter superpowers.

Get your API key at shippost.ai, build the server, and start shipping replies faster than you ever scrolled.


More MCP Server Guides

Building MCP servers for other workflows? Check out our companion guides: