Why I built this: I kept redoing the same setup across AI tools and repos. Skills, prompt agents, commands, and MCP servers lived in different places, each tool had its own config format, and onboarding kept breaking from config drift.
Agentloom is my attempt to keep one canonical source of truth and sync to provider-native outputs (Codex, Claude, Cursor, Gemini, Copilot, OpenCode).
Core workflow: npx agentloom add obra/superpowers
That imports an entire stack from a source repo:

- skills/
- agents/
- commands/
- mcp.json
Then sync generates each provider’s native files from that canonical source.
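The two-step workflow above might look like this in practice (a sketch: the `add` invocation is taken from the post, while `sync` as a bare subcommand name is my assumption based on the description):

```shell
# Import a full stack (skills, agents, commands, mcp.json) from a source repo
npx agentloom add obra/superpowers

# Regenerate provider-native files (Codex, Claude, Cursor, Gemini, Copilot,
# OpenCode) from the canonical source -- assumed subcommand name
npx agentloom sync
```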
Repo: https://github.com/farnoodma/agentloom
Would really value feedback on:

1) conflict UX in non-interactive CI mode
2) lockfile/reproducibility model
3) boundaries between shared core vs provider-specific behavior