
Aileron

Let your AI agent fly in the real world

You’d never hand an agent the keys to your email, your calendar, your messaging services. You don’t have to.

$ brew install aileron
$ aileron launch

  First launch: let's configure your upstream LLM.

  ? Provider · Anthropic
  ? API key · sk-ant-api03-XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
  ? Model · claude-sonnet-4-5

  ✓ Configuration saved to ~/.aileron/config.toml

  Aileron is running at http://localhost:8721/v1

Then point your AI agent at http://localhost:8721/v1. You’re ready to give your agent new powers.
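If your agent speaks an OpenAI-compatible chat API, pointing it at Aileron is just a base-URL change. A minimal sketch in Python, assuming the proxy accepts the standard `/chat/completions` request shape (the path and payload fields below follow the OpenAI wire format, not anything Aileron-specific):

```python
import json
import urllib.request

AILERON_URL = "http://localhost:8721/v1"  # the local proxy from `aileron launch`

def build_chat_request(prompt, model="claude-sonnet-4-5"):
    """Build an OpenAI-style chat-completions request aimed at the local proxy."""
    payload = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{AILERON_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

# Once `aileron launch` is running, sending is one call:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     reply = json.load(resp)
```

Most agent frameworks expose the same knob as a `base_url` or `OPENAI_BASE_URL` setting, so usually no code changes are needed at all.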

Browse the Actions Hub and install capabilities you never would have trusted your agent with before. Slack the team an update on the GitHub feature you just shipped. Reply to your product manager’s email about last week’s release. Or write your own Action.

Your review and approval

No surprises. You see exactly what’s proposed, and Aileron executes only what you’ve allowed.
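The shape of that loop can be sketched in a few lines. This is illustrative only, not Aileron's actual API: every proposed action lands in a pending queue, and nothing runs until you approve it.

```python
class ApprovalGate:
    """Illustrative sketch: actions execute only after explicit approval."""
    def __init__(self):
        self.pending = {}   # action_id -> (description, callable)
        self.log = []       # audit trail of what actually ran

    def propose(self, action_id, description, action):
        # You see exactly what's proposed, before anything happens.
        self.pending[action_id] = (description, action)
        return description

    def approve(self, action_id):
        # Only approved actions execute; everything else stays inert.
        description, action = self.pending.pop(action_id)
        result = action()
        self.log.append((action_id, description))
        return result

gate = ApprovalGate()
gate.propose("msg-1", "Post 'Migration shipped!' to #engineering",
             lambda: "posted")
assert gate.log == []    # proposed, but nothing has run yet
gate.approve("msg-1")    # explicit approval triggers execution
```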

Put first principles back into AI

LLMs and AI agents excel at what they’re built for: synthesizing information and proposing actions. They are not designed for consistency or precision.

The LLM may pick different tools each time you run the same prompt. A retry double-posts the announcement to #engineering. Inference may leak the secrets you let it touch.

Aileron adds three fundamentals of quality software to your agent.

Determinism

Predictability. Same input, same output, every time. The action your agent runs today is the same action it runs tomorrow. No drift, no hallucinated arguments, no surprises.
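One common way to get that predictability (a sketch of the general technique, not Aileron's internals) is to validate every proposed call against a fixed schema, so a hallucinated or drifting argument is rejected rather than executed:

```python
# Hypothetical action registry: each action declares exactly the
# arguments it accepts. Anything outside the schema is refused.
ACTIONS = {
    "slack.post": {"channel", "text"},
    "gmail.reply": {"thread_id", "body"},
}

def validate(name, args):
    """Accept a proposed call only if it matches the declared schema."""
    if name not in ACTIONS:
        raise ValueError(f"unknown action: {name}")
    extra = set(args) - ACTIONS[name]
    if extra:
        raise ValueError(f"hallucinated arguments: {sorted(extra)}")
    missing = ACTIONS[name] - set(args)
    if missing:
        raise ValueError(f"missing arguments: {sorted(missing)}")
    return True
```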

Idempotency

Reliability. Safe to retry. The network drops, you click approve twice, the system retries — the action runs once. You don’t get two invoices, two emails.
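The standard mechanism here is an idempotency key. A sketch of the general pattern, not Aileron's implementation: the first execution records its result under the key, and every retry replays that recorded result instead of running the action again.

```python
class IdempotentRunner:
    """Illustrative: each idempotency key executes at most once."""
    def __init__(self):
        self._results = {}  # idempotency key -> cached result

    def run(self, key, action):
        if key in self._results:
            return self._results[key]   # retry: replay, don't re-execute
        result = action()               # first and only real execution
        self._results[key] = result
        return result

sent = []
def send_invoice():
    sent.append("invoice-2024-06")
    return "sent"

runner = IdempotentRunner()
runner.run("acme-june", send_invoice)   # network drops, client retries...
runner.run("acme-june", send_invoice)   # ...same key, so no second send
assert sent == ["invoice-2024-06"]      # the action ran exactly once
```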

Security

Your keys never reach the LLM. Stripe, Gmail, Slack, GitHub — all live in a vault Aileron alone uses. The LLM sees results, not secrets.
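The pattern can be sketched as secret indirection (an illustration, not Aileron's real vault format): the action spec carries a reference to a secret, and the executor resolves it only at the moment of the call, so the reference is all the LLM ever sees.

```python
# Hypothetical vault: secrets live here and only the executor reads them.
VAULT = {"stripe_api_key": "sk_live_XXXX"}  # placeholder value

def resolve(value):
    """Swap a vault reference for the real secret at execution time."""
    if isinstance(value, str) and value.startswith("vault:"):
        return VAULT[value.removeprefix("vault:")]
    return value

# What the LLM proposes (and all it ever sees): a reference, not a key.
proposal = {"action": "stripe.send_invoice",
            "args": {"api_key": "vault:stripe_api_key", "customer": "Acme"}}

resolved = {k: resolve(v) for k, v in proposal["args"].items()}
assert "sk_live" not in str(proposal)   # the proposal holds no secret
```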

The result: AI and traditional software each do what they’re best at, in the same loop. Your agent proposes; Aileron executes. Your agent thinks fluidly; Aileron acts predictably.

Discover what your agent can do

A few ideas, unthinkable yesterday. Now you can Open the Claw.

Inbox watcher

“When my Amazon package is delivered, text my partner to grab it from the lobby before it walks off.”

OpenClaw watches your inbox via an Aileron action, sees the delivery confirmation arrive two days later, drafts the message, and surfaces it for your approval. You tap approve and Aileron sends the text.

Slack ship-update

“Tell the team I shipped the migration.” Aileron reads your recent issues, PRs, and commits from GitHub, passes them to the agent to write a summary message, and surfaces it for your review. You tap approve and Aileron posts in your team’s Slack channel.

Stripe invoice

“Send Acme the invoice for last month’s work.” Aileron pulls the time entries from your tracker, passes them to the agent to draft the invoice email, and surfaces it for your review. You tap approve and Aileron sends it via Stripe.

You could have done this before; it just took a lot of wiring. And you didn’t, because you knew the risk was unacceptable.

Built to soar with Tools and MCP

Aileron doesn’t replace Tool Calling or MCP — it complements them. Each layer in the agent stack owns a different job.

Tool Calling

How the LLM expresses tool intent — OpenAI’s tools array, Anthropic’s tool_use blocks, the function-calling spec inside each LLM API.

MCP

How external tools and data get exposed to the LLM — a protocol for tool discovery and surfacing.

Aileron

How tool calls actually get executed — deterministically, with sealed credentials, your approval, and a complete audit trail.

Your agent’s existing MCP servers keep exposing tools the same way. The LLM still uses Tool Calling to express intent. Aileron is the layer that runs underneath — making the execution of those tool calls safe to use against your real systems.
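The handoff between the three layers can be sketched end to end. The names and shapes below are illustrative assumptions, not actual wire formats:

```python
# 1. MCP layer: a server advertises a tool to the LLM.
tool_schema = {
    "name": "slack_post",
    "description": "Post a message to a Slack channel",
    "inputSchema": {"type": "object",
                    "properties": {"channel": {"type": "string"},
                                   "text": {"type": "string"}},
                    "required": ["channel", "text"]},
}

# 2. Tool Calling layer: the LLM expresses intent as a structured call.
tool_call = {"name": "slack_post",
             "arguments": {"channel": "#engineering",
                           "text": "Migration shipped!"}}

# 3. Execution layer (Aileron's job): validate against the schema,
#    wait for approval, then run with sealed credentials.
def execute(call, schema, approved):
    assert call["name"] == schema["name"]
    required = set(schema["inputSchema"]["required"])
    assert required <= set(call["arguments"]), "missing required arguments"
    if not approved:
        return {"status": "pending_approval"}
    return {"status": "executed", "args": call["arguments"]}
```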
