Shippost: Human-in-the-loop AI-powered social media content pipeline
shippost is a CLI tool that processes meeting transcripts, notes, and other written content into social media post ideas using local LLMs (Ollama) or cloud-based AI (Anthropic Claude). Keep your content pipeline local and private with Ollama, or leverage Claude's powerful language models for enhanced quality.
CLI command: ship
- ✅ Flexible LLM Providers — Choose between Ollama (local, privacy-first) or Anthropic Claude (cloud-based, high-quality)
- ✅ Content Strategies — 64 proven post formats for maximum variety and engagement
- ✅ Customizable Style — Define your brand voice and posting style
- ✅ Community Examples — Learn from real style.md examples shared by other users
- ✅ X Post Analysis — Auto-generate style guides from your X (Twitter) posts
- ✅ JSONL Output — Generated posts stored in an append-only format for easy tracking
- ✅ Multiple File Processing — Batch process all transcripts in one command
- ✅ Post Review System — Review posts interactively and mark as keep/reject
- ✅ Typefully Integration — Stage posts directly to Typefully drafts
- ✅ Reply Guy Mode — Find tweets to reply to and post replies via X API
- Node.js >= 18.0.0
- LLM Provider — Choose one:
  - Ollama (local) — Install Ollama and ensure it's running (`ollama serve`)
  - Anthropic Claude (cloud) — Get an API key from the Anthropic Console
# Clone the repository
git clone <repo-url>
cd shippost
# Install dependencies
npm install
# Build the project
npm run build
# Link globally (optional)
npm link

# 1. Initialize a new ship project
ship init
# 2. Add your transcripts to the input/ directory
cp ~/meeting-notes.txt input/
# Or copy from Granola (see "Getting Transcripts from Granola" section below)
pbpaste > input/meeting-notes.txt
# 3. Customize your prompts
# Edit prompts/style.md for your brand voice
# Edit prompts/work.md for generation instructions
# 4. Generate posts
ship work
# 5. Review and stage posts to Typefully
ship posts # View generated posts
ship review   # Review posts interactively and stage to Typefully

Initialize a new ship project in the current directory. Creates:
| Path | Description |
|---|---|
| `.shippostrc.json` | Project configuration file |
| `input/` | Directory for transcripts, notes, and source content |
| `prompts/analysis.md` | Style analysis prompt for X posts (advanced) |
| `prompts/banger-eval.md` | Viral potential scoring criteria (advanced) |
| `prompts/content-analysis.md` | Content strategy selection prompt (advanced) |
| `prompts/reply.md` | Reply opportunity analysis prompt (advanced) |
| `prompts/style.md` | Your posting style, brand voice, and tone guidelines |
| `prompts/system.md` | System prompt for post generation (advanced) |
| `prompts/work.md` | Instructions for how posts should be generated |
| `strategies.json` | User-editable content strategies (64 default strategies) |
Process all files in input/ and generate new post ideas. Posts are appended to posts.jsonl.
Options:
- `-m, --model <model>` — Override the Ollama model from config
- `-v, --verbose` — Show detailed processing information
- `-f, --force` — Force reprocessing of all files (bypass tracking)
- `-c, --count <number>` — Number of posts to generate per file (default: 8)
- `-s, --strategy <id>` — Use a specific content strategy by ID
- `--strategies <ids>` — Use multiple strategies (comma-separated)
- `--list-strategies` — List all available content strategies
- `--category <category>` — Filter strategies by category (use with `--list-strategies`)
- `--no-strategies` — Disable strategy-based generation (use legacy mode)
# Use default settings (auto-selects 8 diverse strategies)
ship work
# List all available content strategies
ship work --list-strategies
# List strategies in a specific category
ship work --list-strategies --category educational
# Use a specific strategy for all posts
ship work --strategy bold-observation
# Use multiple specific strategies
ship work --strategies "personal-story,how-to-guide,contrarian-take"
# Generate more posts per file
ship work --count 12
# Use legacy mode (no strategies)
ship work --no-strategies
# Use a specific model with verbose output
ship work --model llama3.1 --verbose
# Force reprocessing with custom strategy
ship work --force --strategy thread-lesson
# Combine options
ship work --model llama2 --count 10 --verbose

What it does:
- Validates environment (checks for LLM provider and required files)
- Loads your style guide and generation instructions
- Scans `input/` for `.txt` and `.md` files
- Skips files that have already been processed (unless `--force` is used)
- Processes each file through your configured LLM (Ollama or Claude)
- Parses generated posts and saves to `posts.jsonl`
- Tracks processed files in `.ship-state.json` to prevent duplicates
- Displays summary with file counts and any errors
File Tracking: ship automatically tracks which files have been processed to prevent generating duplicate posts. Files are considered "processed" until they are modified. This means:
- Running `ship work` multiple times will only process new or modified files
- Use `--force` to ignore tracking and reprocess all files
- The tracking state is stored in `.ship-state.json` (not committed to git)
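The skip logic can be pictured as a modification-time check against recorded state. The sketch below is illustrative only; the actual `.ship-state.json` format used by ship may differ:

```python
import os

def needs_processing(path, state):
    """Process a file if it was never seen, or changed since the recorded run."""
    recorded_mtime = state.get(path)
    return recorded_mtime is None or os.path.getmtime(path) > recorded_mtime

# Demo with a scratch input file
with open("demo-note.txt", "w") as f:
    f.write("meeting notes")

state = {}
print(needs_processing("demo-note.txt", state))  # unseen, so True
state["demo-note.txt"] = os.path.getmtime("demo-note.txt")
print(needs_processing("demo-note.txt", state))  # unchanged, so False
```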
Generate a personalized style guide by analyzing your X (Twitter) posts. Uses X API v2 (free tier) to fetch your recent tweets and your configured LLM to analyze your writing patterns.
Options:
- `--count <n>` — Number of tweets to fetch (default: 33, max: 100)
- `--overwrite` — Overwrite existing style-from-analysis.md without prompting
- `--setup` — Reconfigure X API credentials
# First time setup (will prompt for X API credentials)
# Analyzes 33 tweets by default
ship analyze-x
# Fetch more tweets for deeper analysis
ship analyze-x --count 100
# Overwrite existing style guide
ship analyze-x --overwrite
# Reconfigure X API credentials
ship analyze-x --setup

What it does:
- Configures X API OAuth 2.0 authentication (first time only)
- Opens browser for you to authorize the app
- Fetches your recent tweets (default: 33)
- Analyzes writing patterns with your configured LLM (Ollama or Claude)
- Generates and saves a personalized style guide to `prompts/style-from-analysis.md`
Note: The analysis is saved to style-from-analysis.md (not style.md) so you can review it first and merge insights into your main style guide as desired.
Requirements:
- Free X Developer account (sign up here)
- X API app with OAuth 2.0 enabled
- Redirect URI set to: `http://127.0.0.1:3000/callback`
- Required scopes: `tweet.read`, `users.read`, `offline.access`
First-time setup:
- Visit X Developer Portal
- Create a new app (or use existing)
- Enable OAuth 2.0 in app settings
- Set redirect URI to `http://127.0.0.1:3000/callback`
- Copy your Client ID
- Run `ship analyze-x` and paste the Client ID when prompted
Rate limits:
- X API Free tier: 100 reads/month
- Can analyze once per month with free tier
- Upgrade to Basic ($200/month) for 10,000 reads if needed
View recently generated posts in a human-readable format with filtering options.
Options:
- `-n, --count <number>` — Number of posts to show (default: 10)
- `--strategy <name>` — Filter by strategy name or ID
- `--min-score <score>` — Show posts with banger score >= N
- `--source <text>` — Filter by source file name
- `--eval` — Evaluate posts that are missing banger scores
# View last 10 posts
ship posts
# View last 20 posts
ship posts -n 20
# Filter by strategy
ship posts --strategy "personal-story"
# Show only high-quality posts
ship posts --min-score 70
# Show posts from specific source
ship posts --source "meeting-2024"
# Evaluate posts missing banger scores
ship posts --eval

Find tweets from accounts you follow and generate contextual replies. Posts replies directly via X API.
Options:
- `--count <n>` — Number of tweets to analyze from timeline (default: 10)
Review actions:
- `Enter` — Post the suggested reply
- `e` — Edit the reply before posting
- `n` — Skip this tweet
- `q` — Quit reply session
# Find reply opportunities (default: 10 tweets)
ship reply
# Analyze more tweets
ship reply --count 20

What it does:
- Authenticates with X API (reuses credentials from `ship analyze-x`)
- Fetches recent tweets from your home timeline
- Uses LLM to identify 3-5 best reply opportunities
- For each opportunity, generates a contextual reply following your style guide
- Shows you the tweet + suggested reply
- You choose: post, edit, skip, or quit
- Posts approved replies directly via X API
X API Basic tier features:
If you have an X API Basic subscription ($200/month), you can enable additional features by adding apiTier to your config:
// .shippostrc.json
{
"x": {
"clientId": "your-client-id",
"apiTier": "basic"
}
}

With basic tier enabled:
- Tweets are sorted by author follower count (highest first)
- Shows follower counts, likes, replies, and retweets for each tweet
- Helps prioritize replying to high-influence accounts
Reply Style:
Replies follow the "Reply Style" section in prompts/style.md:
- Never promotional
- Add value (insight, wit, helpful info)
- Match your voice/tone
- Keep replies concise (1-2 sentences)
X API Tiers:
- Free tier (default): Basic timeline fetch, limited to ~15 requests per 15 minutes
- Basic tier ($200/month): Fetches follower counts, sorts by influence & recency
Configure your tier in .shippostrc.json:
{
"x": {
"clientId": "your-client-id",
"apiTier": "basic"
}
}

Rate Limits: The free tier has strict limits. If you hit 429 errors:
- Wait 15 minutes and try again
- Use `--count 10` or less to reduce API calls
- Consider upgrading to Basic tier for more quota
Requirements:
- Same X API setup as `ship analyze-x`
- App must have "Read and Write" permissions (not just "Read")
- Required scopes: `tweet.read`, `tweet.write`, `users.read`, `offline.access`
If you get 403 errors when posting:
- Go to X Developer Portal
- Change app permissions to "Read and write"
- Delete `.shippost-tokens.json` to force re-auth
- Run `ship reply` again
Interactively review posts one-by-one and decide their fate. Posts are shown sorted by banger score (highest first).
Options:
- `--min-score <score>` — Only review posts with score >= N
Review actions:
- `s` — Stage to Typefully (creates draft)
- `Enter` — Keep for later (status: keep)
- `n` — Reject (status: rejected)
- `q` — Quit review session
# Review all new posts
ship review
# Only review high-quality posts
ship review --min-score 70

What it does:
- Loads all posts with status `new` or `keep`
- Sorts by banger score (highest first)
- Shows each post with score and strategy
- Prompts for action: stage, keep, or reject
- If staging, creates a Typefully draft and saves the draft ID
- Updates post status immediately after each decision
- Continues until all posts reviewed or you quit
Post statuses:
- `new` — Freshly generated, not yet reviewed
- `keep` — Marked as good, saved for future use
- `staged` — Sent to Typefully as a draft
- `rejected` — Marked as low quality, filtered out
- `published` — Reserved for future use
Check your X API rate limit status and account information.
ship x-status

What it displays:
- Connected account username
- Current API tier (free or basic)
- Rate limit status with visual progress bars
- Time until rate limits reset
- Monthly limits based on your tier
Comprehensive X stats dashboard showing your account metrics. Requires X API Basic tier ($200/month).
ship stats

What it displays:
- Account overview (followers, following, tweet count)
- Posting activity (24h, 7d, 30d)
- Impressions with daily trends (sparklines)
- 90-day goal progress toward 5M impressions
- Engagement metrics (likes, replies, retweets, quotes, bookmarks)
- Best posting times based on your engagement data
- Top performing post of the week
Note: Stats are cached for 1 hour to conserve API rate limits. The dashboard uses visual formatting with progress bars and sparklines.
Sync your local prompts with the latest package defaults.
Options:
- `--force` — Update all prompts without prompting
# Compare and selectively update prompts
ship sync-prompts
# Force update all prompts to defaults
ship sync-prompts --force

What it does:
- Compares your local `prompts/` files with package templates
- Shows which prompts are up to date, outdated, or missing
- Displays a diff for changed files
- Prompts you to update each changed file individually
- Creates any missing prompt files
When to use:
- After upgrading shippost to get new prompt improvements
- To restore a prompt you accidentally deleted
- To see what's changed in the default prompts
Configuration is stored in .shippostrc.json:
Using Ollama (default):
{
"llm": {
"provider": "ollama"
},
"ollama": {
"host": "http://127.0.0.1:11434",
"model": "llama3.1",
"timeout": 60000
},
"generation": {
"postsPerTranscript": 8,
"temperature": 0.7,
"strategies": {
"enabled": true,
"autoSelect": true,
"diversityWeight": 0.7,
"preferThreadFriendly": false
}
}
}

Using Anthropic Claude:
{
"llm": {
"provider": "anthropic"
},
"anthropic": {
"model": "claude-sonnet-4-5-20250514",
"maxTokens": 4096
},
"generation": {
"postsPerTranscript": 8,
"temperature": 0.7,
"strategies": {
"enabled": true,
"autoSelect": true,
"diversityWeight": 0.7,
"preferThreadFriendly": false
}
},
"typefully": {
"socialSetId": "1"
}
}

Set your API keys in a .env file:
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
TYPEFULLY_API_KEY=your-typefully-api-key-here

See ANTHROPIC_SETUP.md for detailed instructions on using Claude.
| Option | Default | Description |
|---|---|---|
| `llm.provider` | `ollama` | LLM provider: `ollama` or `anthropic` |
| `ollama.host` | `http://127.0.0.1:11434` | Ollama server URL (when using Ollama) |
| `ollama.model` | `llama3.1` | Ollama model to use (when using Ollama) |
| `ollama.timeout` | `60000` | Request timeout in milliseconds (when using Ollama) |
| `anthropic.model` | `claude-3-5-sonnet-20241022` | Claude model to use (when using Anthropic) |
| `anthropic.maxTokens` | `4096` | Maximum tokens in response (when using Anthropic) |
| `generation.postsPerTranscript` | `8` | Number of posts to generate per input file |
| `generation.temperature` | `0.7` | LLM temperature (0.0-1.0, higher = more creative) |
| `generation.strategies.enabled` | `true` | Enable strategy-based post generation |
| `generation.strategies.autoSelect` | `true` | Auto-select strategies based on content analysis |
| `generation.strategies.diversityWeight` | `0.7` | Strategy diversity (0.0-1.0, higher = more diverse categories) |
| `generation.strategies.preferThreadFriendly` | `false` | Prioritize thread-friendly strategies |
| `x.clientId` | — | X API OAuth 2.0 Client ID |
| `x.apiTier` | `free` | X API tier: `free` or `basic` (affects reply command features) |
| `typefully.socialSetId` | `"1"` | Typefully Social Set ID (for multi-account setups) |
After running ship init, your project will look like:
your-project/
├── input/ # Your source content
│ ├── meeting-2024-01.txt
│ └── notes.md
├── prompts/
│ ├── style.md # Brand voice & tone
│ ├── work.md # Generation instructions
│ ├── system.md # System prompt (advanced)
│ ├── analysis.md # Style analysis prompt (advanced)
│ ├── content-analysis.md # Content strategy selection (advanced)
│ ├── banger-eval.md # Viral potential scoring (advanced)
│ └── reply.md # Reply opportunity analysis (advanced)
├── strategies.json # Content strategies (CUSTOMIZABLE!)
├── posts.jsonl # Generated posts (created after first run)
└── .shippostrc.json # Configuration
Generated posts are stored in posts.jsonl as newline-delimited JSON:
{
"id":"uuid",
"sourceFile":"input/meeting.txt",
"content":"Your generated post...",
"metadata":{
"model":"llama3.1",
"temperature":0.7,
"strategy":{
"id":"personal-story",
"name":"Personal Story or Experience",
"category":"personal"
},
"bangerScore":75,
"bangerEvaluation":{
"score":75,
"breakdown":{
"hook":18,
"emotional":16,
"value":12,
"format":13,
"relevance":8,
"engagement":8,
"authenticity":0
},
"reasoning":"Strong opening hook with curiosity gap..."
}
},
"timestamp":"2024-01-15T10:30:00.000Z",
"status":"new"
}

Each post includes:
- id — Unique identifier
- sourceFile — Input file the post was generated from
- content — The post text
- metadata.model — Ollama model used
- metadata.temperature — Generation temperature
- metadata.strategy — Content strategy used (id, name, category)
- metadata.bangerScore — Viral potential score (1-99)
- metadata.bangerEvaluation — Detailed scoring breakdown
- metadata.typefullyDraftId — Typefully draft ID (if staged)
- timestamp — When the post was generated
- status — `new`, `keep`, `staged`, `rejected`, or `published`
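Because posts.jsonl is plain newline-delimited JSON, it is easy to script against outside of ship. A minimal Python sketch using the field names from the schema above (the two sample records are illustrative, not real output):

```python
import json

# Two illustrative records in the posts.jsonl shape (values are made up)
sample = [
    {"id": "a1", "content": "First draft post", "status": "new",
     "metadata": {"strategy": {"id": "personal-story"}, "bangerScore": 75}},
    {"id": "b2", "content": "Second draft post", "status": "rejected",
     "metadata": {"strategy": {"id": "how-to-guide"}, "bangerScore": 40}},
]
with open("posts.jsonl", "w") as f:
    for record in sample:
        f.write(json.dumps(record) + "\n")

def load_posts(path="posts.jsonl"):
    """Parse newline-delimited JSON, skipping blank lines."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

# Unreviewed posts, highest banger score first
fresh = sorted(
    (p for p in load_posts() if p["status"] == "new"),
    key=lambda p: p["metadata"]["bangerScore"],
    reverse=True,
)
print([p["id"] for p in fresh])  # → ['a1']
```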
Each generated post is automatically evaluated for its viral potential ("banger" score) on a scale of 1-99:
| Score Range | Potential |
|---|---|
| 1-29 | Low - unlikely to gain traction |
| 30-49 | Below average - limited reach |
| 50-69 | Average - decent engagement |
| 70-84 | High - strong engagement likely |
| 85-99 | Exceptional - viral potential |
The score is based on 7 key factors:
- Hook Strength (20 pts) - Scroll-stopping opening, curiosity gaps
- Emotional Resonance (20 pts) - Triggers awe, humor, surprise, FOMO
- Value & Shareability (15 pts) - Actionable value, social currency
- Format & Structure (15 pts) - Readability, pacing, visual appeal
- Relevance & Timing (10 pts) - Taps into current conversations
- Engagement Potential (10 pts) - Invites discussion, thought-provoking
- Authenticity & Voice (10 pts) - Human, relatable, genuine
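The seven factor scores add up to the overall score. For instance, the breakdown in the sample posts.jsonl record shown earlier totals exactly its bangerScore of 75:

```python
# Breakdown values from the sample record in the posts.jsonl section
breakdown = {
    "hook": 18,         # out of 20
    "emotional": 16,    # out of 20
    "value": 12,        # out of 15
    "format": 13,       # out of 15
    "relevance": 8,     # out of 10
    "engagement": 8,    # out of 10
    "authenticity": 0,  # out of 10
}
total = sum(breakdown.values())
print(total)  # → 75
```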
Use banger scores to prioritize which posts to publish first - start with your highest-scoring content!
ship includes 64 proven content strategies inspired by Typefully's successful post formats. Strategies are fully customizable via strategies.json - add your own, modify existing ones, or remove strategies you don't need. Each strategy provides a unique angle or format for presenting your ideas, ensuring maximum variety and engagement across your content.
Content strategies are tested frameworks for structuring social media posts. Instead of generating generic posts, ship applies specific strategies like:
- Personal Story - Share an experience, failure, or transformation
- How-To Guide - Provide step-by-step instructions
- Bold Observation - Make a provocative statement that captures attention
- Before & After - Show transformation or progress
- Resource Thread - Curate a list of valuable tools or links
- Behind-the-Scenes - Show your process or work-in-progress
1. Content Analysis
When you run ship work, the system analyzes your transcript to identify characteristics:
- Does it contain personal stories?
- Does it include actionable advice?
- Are there strong opinions?
- Is it about a specific project?
2. Strategy Selection Based on the analysis, ship intelligently selects applicable strategies:
- Filters out strategies that don't fit your content (e.g., won't use "Personal Story" if there are no personal anecdotes)
- Ensures diversity across 7 categories (personal, educational, provocative, engagement, curation, behind-the-scenes, reflective)
- Uses weighted random selection to avoid over-representing any single category
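The weighted random selection above can be pictured with a short sketch. This illustrates the idea only, not ship's actual implementation; the strategy ids and categories below are examples:

```python
import random

def pick_diverse(strategies, k, diversity_weight=0.7, seed=None):
    """Down-weight strategies whose category was already picked, so a
    higher diversity_weight spreads selections across categories."""
    rng = random.Random(seed)
    pool = list(strategies)
    chosen, seen = [], set()
    while pool and len(chosen) < k:
        weights = [1.0 - diversity_weight if s["category"] in seen else 1.0
                   for s in pool]
        pick = rng.choices(pool, weights=weights, k=1)[0]
        pool.remove(pick)
        chosen.append(pick)
        seen.add(pick["category"])
    return chosen

demo = [
    {"id": "personal-story", "category": "personal"},
    {"id": "failure-story", "category": "personal"},
    {"id": "how-to-guide", "category": "educational"},
    {"id": "contrarian-take", "category": "provocative"},
]
# With maximum diversity, three picks land in three distinct categories
picked = pick_diverse(demo, k=3, diversity_weight=1.0, seed=7)
```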
3. Post Generation Each post is generated using one specific strategy:
- The strategy's prompt is injected between your work instructions and the transcript
- The LLM generates a focused post following that strategy's format
- Strategy metadata is saved with each post for tracking
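Conceptually, the per-post prompt is just those pieces in that order. A hypothetical sketch (ship's real assembly may add formatting around each section, and the strings here are placeholders):

```python
def build_generation_prompt(work_instructions, strategy_prompt, transcript):
    """The strategy prompt sits between the work instructions and the transcript."""
    return "\n\n".join([work_instructions, strategy_prompt, transcript])

prompt = build_generation_prompt(
    "Write one social media post. Match the style guide.",   # from prompts/work.md
    "Share a key lesson or insight from this week.",         # selected strategy prompt
    "Transcript: today we shipped the new onboarding flow.", # input/ file contents
)
```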
| Category | Description | Example Strategies |
|---|---|---|
| Personal | Stories, experiences, transformations | Personal Story, Failure Story, Transformation |
| Educational | How-tos, frameworks, actionable tips | How-To Guide, Step-by-Step Framework, Quick Tip |
| Provocative | Bold statements, contrarian takes | Bold Observation, Contrarian Take, Myth Busting |
| Engagement | Questions, polls, thought experiments | Open Question, This or That, Thought Experiment |
| Curation | Lists, recommendations, resources | Resource List, Tool Recommendation, Thread of Links |
| Behind-the-Scenes | Process, work-in-progress, building | Building in Public, Process Share, WIP Update |
| Reflective | Lessons learned, retrospectives | Lesson Learned, Retrospective, Before & After |
Auto-Select Mode (Default)
# Automatically selects 8 diverse strategies per transcript
ship work

List Available Strategies
# See all 64 strategies
ship work --list-strategies
# Filter by category
ship work --list-strategies --category educational

Manual Strategy Selection
# Use one specific strategy
ship work --strategy personal-story
# Use multiple strategies
ship work --strategies "how-to-guide,bold-observation,resource-list"
# Use 5 strategies from educational category
ship work --strategies "how-to-guide,step-by-step,framework,quick-tip,common-mistakes"

Control Post Count
# Generate 12 posts instead of 8
ship work --count 12
# Generate just 3 posts with specific strategies
ship work --count 3 --strategies "personal-story,how-to-guide,bold-observation"

Disable Strategies (Legacy Mode)
# Use original batch generation (no strategies)
ship work --no-strategies

Strategies are defined in strategies.json in your project root. This file is fully editable - modify existing strategies, add new ones, or remove strategies you don't use.
Strategy Structure:
{
"id": "my-custom-strategy",
"name": "My Custom Strategy Name",
"prompt": "The prompt that will be sent to the LLM to guide post generation...",
"category": "personal",
"threadFriendly": false,
"applicability": {
"requiresPersonalNarrative": true,
"worksWithAnyContent": false
}
}

Fields:
- id - Unique identifier (used with the `--strategy` flag)
- name - Human-readable name shown in `--list-strategies`
- prompt - Instructions for the LLM on how to format this post type
- category - One of: `personal`, `educational`, `provocative`, `engagement`, `curation`, `behind-the-scenes`, `reflective`
- threadFriendly - `true` if this works well in threads, `false` for standalone posts
- applicability - Rules for when this strategy applies:
  - `requiresPersonalNarrative` — Needs personal stories
  - `requiresActionableKnowledge` — Needs how-to/tips content
  - `requiresResources` — Needs tool/book mentions
  - `requiresProject` — Needs project context
  - `requiresStrongOpinion` — Needs strong viewpoints
  - `worksWithAnyContent` — Always applicable (fallback strategies)
Adding a Custom Strategy:
# 1. Edit strategies.json
vim strategies.json
# 2. Add your strategy to the array
[
...existing strategies...,
{
"id": "weekly-reflection",
"name": "Weekly Reflection Post",
"prompt": "Share a key lesson or insight from this week. What did you learn? What surprised you? Keep it personal and relatable.",
"category": "reflective",
"threadFriendly": false,
"applicability": {
"worksWithAnyContent": true
}
}
]
# 3. Test your new strategy
ship work --strategy weekly-reflection

Removing Strategies:
Simply delete the strategy object from the array in strategies.json. The system will continue to work with any number of strategies (even just one!).
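If you would rather script the deletion than hand-edit JSON, a small helper can drop a strategy by id. This is a sketch; the two entries written below are placeholders standing in for your real strategies.json:

```python
import json

# Write a tiny illustrative strategies.json (two placeholder entries)
with open("strategies.json", "w") as f:
    json.dump([
        {"id": "personal-story", "name": "Personal Story", "category": "personal"},
        {"id": "open-question", "name": "Open Question", "category": "engagement"},
    ], f, indent=2)

def remove_strategy(path, strategy_id):
    """Drop one strategy by id and rewrite the file in place."""
    with open(path) as f:
        strategies = json.load(f)
    kept = [s for s in strategies if s["id"] != strategy_id]
    with open(path, "w") as f:
        json.dump(kept, f, indent=2)
    return kept

kept = remove_strategy("strategies.json", "open-question")
print([s["id"] for s in kept])  # → ['personal-story']
```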
Modifying Prompts:
Edit the prompt field to change how posts are generated. For example, you might want to add more specific instructions, change the tone, or adjust the format.
Fine-tune strategy behavior in .shippostrc.json:
{
"generation": {
"postsPerTranscript": 8,
"strategies": {
"enabled": true,
"autoSelect": true,
"diversityWeight": 0.7,
"preferThreadFriendly": false
}
}
}

- enabled — Turn strategy system on/off
- autoSelect — Automatically select strategies based on content (vs. random)
- diversityWeight — How much to prioritize category diversity (0.0 = no preference, 1.0 = maximum diversity)
- preferThreadFriendly — Favor strategies that work well in threads
Variety — Never run out of angles. 64 default strategies ensure fresh approaches, and you can add unlimited custom strategies.
Quality — Proven formats that have driven engagement on social media.
Control — Full control over which strategies to use, or let the system auto-select intelligently.
Tracking — Strategy metadata in posts.jsonl lets you analyze which formats perform best.
Efficiency — One transcript becomes 8+ diverse posts without manual rewriting.
Each post includes strategy metadata you can use for analysis:
# See which strategies generated your posts
cat posts.jsonl | jq '{strategy: .metadata.strategy.name, score: .metadata.bangerScore, content: .content[:50]}'
# Group by strategy category
cat posts.jsonl | jq -r '.metadata.strategy.category' | sort | uniq -c
# Find your best-performing strategy
cat posts.jsonl | jq -r 'select(.metadata.bangerScore > 70) | .metadata.strategy.name' | sort | uniq -c | sort -rn
# View posts by status
cat posts.jsonl | jq -r '.status' | sort | uniq -c

Granola is an AI meeting transcription tool. Here's how to get your meeting transcripts into ship:
- Open your meeting note in Granola
- Click the transcription button (3 vertical bars) at the bottom of the note
- Click the copy button in the top right corner
- Save to a file:
pbpaste > input/meeting-2024-01-15.txt
The Granola Transcriber Chrome extension provides one-click extraction:
- Install the Granola Transcriber extension
- Open your Granola note in Chrome
- Click the extension to extract and copy the transcript
- Save to your `input/` directory
For power users with Raycast:
- Install the Granola Raycast extension
- Select multiple notes for bulk export
- Use folder-aware filtering to organize transcripts
- Export directly to your ship `input/` directory
- Naming convention: Use descriptive filenames like `YYYY-MM-DD-topic.txt` for easier tracking
- Batch processing: Export multiple meetings at once, then run `ship work` to process them all
- Integrations: Granola also supports direct export to Notion, Hubspot, and Slack (no API/Zapier yet)
- Clean transcripts: Remove excessive filler words in Granola before exporting for better post quality
Here's a typical workflow for using ship:
# 1. Set up a new project
mkdir my-content-pipeline
cd my-content-pipeline
ship init
# 2. Customize your style
# Edit prompts/style.md to define your:
# - Voice and tone (casual, professional, humorous)
# - Brand guidelines
# - Format preferences (thread length, emoji usage)
# - Target audience
# 3. Add source content
cp ~/Downloads/meeting-notes-*.txt input/
echo "Today I learned..." > input/quick-thoughts.md
# 4. Generate posts
ship work
# 5. Review generated posts
ship posts
# 6. Review and stage to Typefully
ship review --min-score 70
# 7. Process more content later
cp ~/new-transcript.txt input/
ship work   # Appends new posts to posts.jsonl

All prompts used by ship are stored as editable files in the prompts/ directory. This allows you to customize the AI's behavior without touching any code.
Core prompts (edit these for best results):
- `prompts/style.md` - Your posting style, voice, and brand guidelines
- `prompts/work.md` - Instructions for how posts should be generated from transcripts
Advanced prompts (optional, for power users):
- `prompts/system.md` - System prompt wrapper for post generation
- `prompts/analysis.md` - Prompt used to analyze your X posts and generate style guides
- `prompts/content-analysis.md` - Criteria for analyzing transcript content and selecting strategies
- `prompts/banger-eval.md` - Scoring criteria for evaluating viral potential
- `prompts/reply.md` - Reply opportunity analysis for the `ship reply` command
- ✅ No code changes needed - Customize behavior by editing markdown files
- ✅ Version controlled - Track prompt changes with git
- ✅ Easy experimentation - Try different prompting strategies quickly
- ✅ Project-specific - Each project can have its own unique prompts
Edit style.md when:
- You want to refine your brand voice
- You're not getting posts in the right tone
- You want to add/remove example posts
Edit work.md when:
- Posts need a different structure
- You want more/fewer posts per transcript
- You want to change quality criteria
Edit banger-eval.md (advanced) when:
- You want to adjust viral potential scoring criteria
- You need different scoring weights for the 7 factors
- You want to add or remove evaluation criteria
- You're optimizing for a specific platform beyond X/Twitter
Edit reply.md when:
- You want the LLM to find more/fewer reply opportunities
- You want to change the criteria for what makes a good reply opportunity
- You want to adjust the reply generation style or rules
Edit system.md or analysis.md (advanced) when:
- You want to change the core prompting strategy
- You're experimenting with prompt engineering
- You need very specific AI behavior
Learn from real-world examples! The community-examples/style/ directory contains style.md files contributed by the ship community. Browse these to:
- See how others define their voice and tone
- Discover different writing styles (casual, professional, humorous, etc.)
- Learn formatting approaches (emoji usage, hashtag strategies, thread preferences)
- Find inspiration for your own style guide
# Browse available examples
ls community-examples/style/
# Read an example
cat community-examples/style/example-technical-founder.md
# Copy as starting point for your style
cp community-examples/style/example-technical-founder.md prompts/style.md
# Then customize it with your own voice!

Have a style.md you're proud of? Share it with the community!
- Copy your `prompts/style.md` to `community-examples/style/your-name.md`
- Remove any private/sensitive information
- Add a comment at the top with context (target audience, niche, what makes it unique)
- Submit a PR
See community-examples/style/README.md for full contribution guidelines.
Why contribute?
- Help others learn from your experience
- Get feedback from the community
- Build a library of proven styles
- Showcase different use cases (dev tools, B2B SaaS, content creators, etc.)
Content Quality
- Use well-structured transcripts with clear sections and key points
- Remove excessive filler words and tangents for better results
- Longer transcripts (500+ words) tend to generate better insights
Prompts
- Be specific in `prompts/style.md` about what you want
- Include 2-3 example posts that represent your ideal style
- Update `prompts/work.md` if posts aren't matching expectations
Models
When using Ollama:
- `llama3.1` (default) — Good balance of quality and speed
- `llama2` — Faster, good for quick iterations
- `mixtral` — More creative outputs
- Experiment with different models using the `--model` flag
When using Anthropic Claude:
- `claude-sonnet-4-5-20250514` — Best balance of intelligence and speed (recommended)
- `claude-3-5-sonnet-20241022` — Previous generation, still excellent
- `claude-3-5-haiku-20241022` — Fastest model for quick tasks
- `claude-3-opus-20240229` — Most capable for complex reasoning
- See ANTHROPIC_SETUP.md for pricing details
Output Management
- `posts.jsonl` is append-only — never deletes old posts
- Use `jq` to filter and manipulate posts: `cat posts.jsonl | jq`
- Consider archiving old posts periodically
Performance
- Process files in batches to avoid overloading Ollama
- Use `--verbose` to debug slow or failing generations
- Adjust `temperature` in config for creativity vs consistency
Content Strategies
- Let auto-selection work its magic for most transcripts (it's intelligent!)
- Use `--list-strategies` to explore available formats
- Try manual strategy selection when you know exactly what format you want
- Analyze which strategies perform best using banger scores and engagement data
- Use `--count 12` for longer transcripts to get more variety
- Experiment with the `diversityWeight` config (higher = more category diversity)
- Review strategy metadata to identify patterns in your best-performing posts
✗ Ollama is not available. Please ensure Ollama is running.
Solution:
- Install Ollama from https://ollama.ai
- Start the server: `ollama serve`
- Verify it's running: `curl http://localhost:11434`
✗ Model 'llama3.1' not found. Run: ollama pull llama3.1
Solution:
ollama pull llama3.1

✗ Anthropic API key not found. Set ANTHROPIC_API_KEY environment variable or add to config.
Solution:
Create a .env file in your project directory:
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

Or export the environment variable:
export ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

Solution:
- Check your API key is valid
- Verify you have internet connectivity
- Ensure your Anthropic account has credits
- Check the status at https://status.anthropic.com/
✗ Not a ship project. Run: ship init
Solution: Run ship init in your project directory, or ensure you're in the correct directory.
Ensure your .shippostrc.json is valid JSON and includes the required fields:
{
"ollama": {
"host": "http://127.0.0.1:11434",
"model": "llama3.1"
}
}

# Install dependencies
npm install
# Run in development mode
npm run dev
# Build TypeScript
npm run build
# Link globally for testing
npm link
# Issue tracking
bd list # View all issues
bd ready # See unblocked work
bd create "..." # Create new issues

src/
├── index.ts # CLI entry point with Commander
├── commands/
│ ├── init.ts # ship init implementation
│ ├── work.ts # ship work - post generation
│ ├── posts.ts # ship posts - view posts
│ ├── review.ts # ship review - interactive review
│ ├── analyze-x.ts # ship analyze-x - style analysis
│ ├── reply.ts # ship reply - reply guy mode
│ ├── x-status.ts # ship x-status - rate limit check
│ ├── stats.ts # ship stats - X metrics dashboard
│ └── sync-prompts.ts # ship sync-prompts - update prompts
├── types/
│ ├── config.ts # Configuration types
│ ├── post.ts # Post schema
│ ├── strategy.ts # Content strategy types
│ ├── state.ts # Project state tracking
│ └── x-tokens.ts # X API token storage
├── services/
│ ├── file-system.ts # File I/O and JSONL operations
│ ├── ollama.ts # Ollama API integration
│ ├── anthropic.ts # Anthropic Claude integration
│ ├── llm-factory.ts # LLM provider factory
│ ├── llm-service.ts # Base LLM service interface
│ ├── x-api.ts # X API v2 integration
│ ├── x-auth.ts # X OAuth 2.0 authentication
│ ├── typefully.ts # Typefully draft creation
│ ├── strategy-selector.ts # Intelligent strategy selection
│ └── content-analyzer.ts # Content analysis for strategies
└── utils/
├── errors.ts # Custom error classes
├── logger.ts # Console output (✓, ✗, →)
├── validation.ts # Config and project validation
├── banger-eval.ts # Viral potential scoring
├── style-analysis.ts # Style guide generation
└── readline.ts # Command-line input utilities
This project uses bd (beads) for issue tracking. See AGENTS.md for AI agent workflow guidelines.
ship integrates with Typefully to help you stage posts directly as drafts. This streamlines your workflow from transcript → posts → published content.
- Get your Typefully API key:
  - Log in to Typefully
  - Go to Settings > Integrations
  - Create an API key
- Add to your `.env` file:
  TYPEFULLY_API_KEY=your-api-key-here
- (Optional) Configure Social Set ID: If you have multiple social accounts in Typefully, specify which one to post to:
  { "typefully": { "socialSetId": "1" } }
  The default is `"1"` (your first connected account). Check Typefully's API docs to find your Social Set IDs.
Interactive review and staging:
# Review posts and stage the best ones
ship review --min-score 70

During review:
- Press `s` to stage a post to Typefully
- The post is created as a draft in your Typefully account
- The draft URL is displayed for quick access
- Post status is updated to `staged` with the draft ID saved
Filter staged posts:
# See all staged posts
cat posts.jsonl | jq 'select(.status == "staged")'
# Get Typefully draft URLs
cat posts.jsonl | jq -r 'select(.metadata.typefullyDraftId) | .metadata.typefullyDraftId'

- ✅ Creates drafts for X/Twitter
- ✅ Saves Typefully draft ID and share URL
- ✅ Updates post status automatically
- ✅ Works with multi-account setups via `socialSetId`
- ✅ Handles errors gracefully (reverts status on failure)
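If `jq` is not installed, a rough `grep` fallback works for quick checks. It is fragile: it assumes each JSON line spells the field literally as `"status":"staged"`, optionally with whitespace after the colon.

```shell
# Illustrative sample file; point grep at your real posts.jsonl instead.
printf '%s\n' '{"id":1,"status":"staged"}' '{"id":2,"status":"draft"}' > posts-sample.jsonl

# Count lines whose status field is "staged".
grep -c '"status": *"staged"' posts-sample.jsonl  # → 1
```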
- Posts are created as drafts, not published immediately
- You can review and edit drafts in Typefully before publishing
- Requires Typefully Pro plan for API access
- Currently supports X/Twitter only (LinkedIn coming soon)
Core Features (v0.1.0)
- `ship init` — Initialize project structure
- `ship work` — Process transcripts into posts with Ollama
- `ship analyze-x` — Generate style guide from your X posts (X API v2 free tier)
- `ship posts` — View and filter generated posts
- `ship review` — Interactive post review and staging
- `ship reply` — Reply guy mode with X API posting
- Typefully integration for staging drafts
- Configurable models and generation settings
- JSONL output format with full metadata
Planned Features
- `ship analyze` — Success metrics analysis (X Basic API, $200/mo)
- News-aware post generation (incorporate trending topics)
- LinkedIn support in Typefully integration
- Multiple output format support (CSV, Markdown)
- Bulk staging with `ship stage <n>` command
MIT