Skip to main content

Documentation Index

Fetch the complete documentation index at: https://docs.joggr.ai/llms.txt

Use this file to discover all available pages before exploring further.

We built Joggr…so we could build Joggr. That’s not a riddle. Joggr started as an internal system for our own team. We needed a way to give AI coding agents the right standards, the right context, and the right boundaries so they could ship production code without constant hand-holding. Our team doesn’t really write code anymore. We design, architect, review, and guide. The agents do the implementation. And the codebase is the cleanest it’s ever been, not despite the AI, but because Joggr enforces the same standards on every agent, every commit, every repo.

The Problem

AI agents write code super fast, but the quality of the output depends entirely on the tooling and context you provide them. Along the way we’ve run into a host of problems that no amount of prompt engineering can fix:
  • Context doesn’t exist in one place. Your standards are in Notion, decisions are in Slack threads, tickets are in Linear, and architecture is in someone’s head. Agents get none of it unless you copy-paste it into every prompt.
  • Instructions don’t enforce anything. An agent can read your rules and ignore them in the same response. The only real enforcement is hooks, pre-commit checks, and tool-call boundaries. That’s infrastructure you have to build yourself.
  • Review can’t keep up. Agents generate code faster than you can read it. You’re merging 800-line PRs you skimmed for 2 minutes because the alternative is becoming the bottleneck.
  • You become the project manager. Multi-file features don’t fit in one prompt. So you break the work into phases, feed context between steps, babysit the output, and stitch it into a PR yourself.
  • Parallel agents trash each other. You want three agents on three features, or one running while you’re AFK. Without sandboxing, that’s shared state, merge conflicts, and no way to walk away safely.
  • Auto-approve is a security hole. Most agents run in --yolo or skip-permissions mode to avoid constant prompts. That means arbitrary commands with no guardrails. One bad rm -rf, one curl | sh, and your local environment is gone.
…and a dozen more we haven’t listed. These are just the ones we hit every week.

What We Built

Joggr is the infrastructure layer between your codebase and your agents. It generates AI instructions, rules, and internal docs so agents understand your standards before the first prompt. It connects your external knowledge — Linear, Slack, Confluence — so agents aren’t working blind. It breaks complex work into phases with review gates so you’re not skimming 800-line PRs. It enforces your rules through hooks and guardrails, not hopes. And it isolates agents in sandboxes so you can run three in parallel without trashing your working state. The result: a codebase that’s cleaner than before AI, not despite it.