Why it exists
Built for the moment prompts stop scaling.
This project started from a simple but repeatable failure mode: creator workflows keep collapsing back into the same manual prompt ritual. Paste a long brief. Re-explain the tone. Re-state the structure. Re-state the constraints. Hope the model stays consistent.
That workflow does not scale. It is slow, brittle, hard to review, and almost impossible to reuse cleanly across sessions. The goal here is to package stable creator context once, then let future tasks start from that packaged knowledge instead of rebuilding it from scratch every time.
The result should not feel magical. It should feel inspectable. One clear entrypoint, supporting references where bulk material belongs, and a bundle shape that can keep working after the current chat is over.
Do not rewrite the creator context every time. Package it once, then reuse it.
3 input paths: raw text, Twitter/X, and YouTube.
2 versioned API workflows in one FastAPI backend.
4 core bundle layers users can inspect and keep.
1 main goal: reusable skill output, not one-off prompting.
Bundle shape
From creator corpus to portable skill package.
This is the main structural move in the product. Different kinds of creator material are normalized into one corpus, then written back out as a package that can be reviewed, exported, and reused.
Creator corpus
Raw creator text: scripts, notes, workflows
Twitter/X import: recent posts grouped into one corpus
YouTube transcripts: video language folded into the same flow
Portable skill package
Generated bundle
creator-skill/
SKILL.md
references/framework.md
references/examples.md
references/sources.md
agents/openai.yaml
Reusable in skills-compatible tools and exportable as zip.
Request flow
Source to skill bundle
Request enters a versioned route.
A generation call lands on /api/v1 for raw paste or /api/v2 when the user wants Twitter/X or YouTube ingestion.
Validation rejects weak or unsafe inputs.
Minimum content length, URL requirements, username bounds, and path-traversal guards all run before the generation pipeline starts.
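A minimal sketch of those gates, under stated assumptions: the thresholds, field names, and error strings here are illustrative, not the backend's real values.

```python
# Hypothetical pre-pipeline validation gates. Thresholds and field
# names are assumptions for illustration.
from urllib.parse import urlparse

MIN_CONTENT_LENGTH = 200   # too little text to extract patterns from
MAX_USERNAME_LENGTH = 15   # X usernames are capped at 15 characters

def validate_input(text: str = "", url: str = "",
                   username: str = "", skill_name: str = "") -> list[str]:
    """Return a list of validation errors; empty means the input passes."""
    errors = []
    if text and len(text) < MIN_CONTENT_LENGTH:
        errors.append("content too short")
    if url and urlparse(url).scheme not in ("http", "https"):
        errors.append("url must be http(s)")
    if username and not (1 <= len(username) <= MAX_USERNAME_LENGTH):
        errors.append("username out of bounds")
    # path-traversal guard: the skill name becomes a directory on disk
    if skill_name and ("/" in skill_name or "\\" in skill_name
                      or ".." in skill_name):
        errors.append("unsafe skill name")
    return errors
```

Running every guard before the pipeline means a bad request fails fast, before any fetcher or generation cost is paid.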
Optional fetchers pull external source material.
The X client resolves recent tweets and the YouTube client gathers transcript text, so every route ends with one normalized corpus.
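The normalization step can be sketched as folding every source into one labeled text body. The section labels and input shapes here are assumptions; the real clients and corpus format may differ.

```python
# Sketch of folding all input paths into one normalized corpus.
# Section headings and input shapes are illustrative assumptions.
def build_corpus(raw_text: str,
                 tweets: list[str],
                 transcripts: list[str]) -> str:
    """Concatenate all source material into a single labeled corpus."""
    sections = []
    if raw_text.strip():
        sections.append("## Raw creator text\n" + raw_text.strip())
    if tweets:
        # recent posts grouped into one block, separated for readability
        sections.append("## Twitter/X posts\n" + "\n---\n".join(tweets))
    if transcripts:
        # video language folded into the same flow
        sections.append("## YouTube transcripts\n" + "\n\n".join(transcripts))
    return "\n\n".join(sections)
```

Because every route produces this one shape, the extraction stage never has to care which ingestion path the material came from.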
CrewAI extracts what actually repeats.
The backend looks for tone, structure, constraints, workflow steps, and source-backed examples instead of producing a generic prompt dump.
The bundle is assembled around one entrypoint.
SKILL.md becomes the activation surface, heavier guidance is split into references, and optional openai.yaml metadata is added when requested.
Delivery stays portable.
The API can return files immediately, persist the bundle under generated_skills, and later expose list, fetch, copy, delete, and zip-export paths.
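The zip-export path can be sketched as packing a persisted bundle directory into an in-memory archive. The function name is an assumption; only the persistence location (generated_skills) and the export behavior come from the description above.

```python
# Sketch of the zip-export delivery path for a bundle persisted on disk,
# e.g. under generated_skills/<skill-name>/. Function name is hypothetical.
import io
import zipfile
from pathlib import Path

def zip_bundle(bundle_dir: Path) -> bytes:
    """Pack a generated skill directory into an in-memory zip for download."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(bundle_dir.rglob("*")):
            if path.is_file():
                # keep the bundle folder name as the archive root
                zf.write(path, path.relative_to(bundle_dir.parent))
    return buf.getvalue()
```

Building the archive in memory keeps the export endpoint stateless: the persisted directory stays the source of truth, and the zip is just a view over it.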
Principles
Four rules that keep the product honest.
01
Evidence over invention
The product is most useful when it starts from real creator material. It should infer the workflow from evidence instead of guessing it from a vague brief.
02
The entrypoint stays short
SKILL.md should explain the repeatable job clearly, not carry every example and note. Bigger material belongs in references where it can be loaded only when needed.
03
Portable output is the feature
A good result is one the user can review, export, reinstall, and use again in a later task. That is why package shape matters as much as the generation step.
04
The product state stays honest
The backend is the working product today. The website is the shell around it. Being explicit about that makes both the UX and the roadmap clearer.
Next step
Explore the backend-first product surface.
The home page covers the generation flow. This page explains why the product is structured that way.