CLI · v0.1.0 · early

Langosh

A CLI for building and running LangGraph agents. Scaffold a repo, ask an LLM to write you a graph, test it locally with langgraph dev, and deploy to LangSmith.

$ pip install langosh
01 · quick start

From zero to a running graph in under a minute

~/my-agents — langosh
# scaffold a new agents repo
$ mkdir my-agents && cd my-agents
$ langosh
> /initrepo
? Project name: my-agents
? Default model: anthropic:claude-sonnet-4-5-20250929
  ✓ langgraph.json, pyproject.toml, .env, graphs/example — compiled

# install deps + boot the dev server
$ uv sync && uv run langgraph dev
  ready · http://localhost:2024  (Studio UI in the browser)

# in a second terminal — talk to the builder
$ langosh
> /graphs /create
? Graph name: news-summarizer
? Build instructions: fetch RSS feeds, summarize with the LLM, return key points
  builder · a couple of quick questions before I generate this…
  ✓ graphs/news_summarizer/definition.json + __init__.py created

# point langosh at the dev server, test it
> /server /add dev http://localhost:2024
> /exec /select news-summarizer /test
  ↳ tavily_search(query="today's AI headlines")
  ↳ done · streaming tokens…
02 · why it's built this way

Built for LLM authorship, not against it

01 / graph format

JSON, not Python

The LLM edits a structured definition.json; a compiler in graphs/codegen.py emits the Python module. No syntax errors mid-edit, and diffs show graph semantics rather than code churn.
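For illustration only — the real schema is whatever graphs/codegen.py accepts, so every field name below is a guess — a definition might look roughly like:

```json
{
  "name": "news_summarizer",
  "entry": "fetch_feeds",
  "nodes": [
    {"id": "fetch_feeds", "type": "tool", "tool": "rss_fetch"},
    {"id": "summarize", "type": "llm", "prompt": "Return the key points."}
  ],
  "edges": [
    ["fetch_feeds", "summarize"],
    ["summarize", "__end__"]
  ]
}
```

The payoff: a malformed edit fails validation at compile time instead of producing a Python file that won't import.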

02 / tool discovery

Resolved at build time

Walks langchain_community.tools and langchain_experimental.tools; the builder picks from a live catalog and the compiled graph carries static imports. No runtime discovery, no MCP client at boot.
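The build-time walk can be sketched with the stdlib alone. Below, importlib's own submodules stand in for langchain_community.tools — langosh's actual scan presumably filters and introspects tool classes, which this sketch does not attempt:

```python
import importlib
import pkgutil


def discover_submodules(package_name: str) -> list[str]:
    """Walk a package's submodules and return their names, sorted.

    Stand-in for a build-time tool catalog scan; the compiled graph
    would then carry static imports for whichever entries get picked.
    """
    package = importlib.import_module(package_name)
    return sorted(info.name for info in pkgutil.iter_modules(package.__path__))


# Using importlib itself as a small demo package:
print(discover_submodules("importlib"))
```

Because the catalog is resolved once at build time, the generated module has ordinary top-level imports and nothing to discover when it boots.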

03 / server compat

LangGraph Platform / LangSmith

Talks to any compatible deployment — langgraph dev, langgraph up, or a LangSmith-hosted server. Assistants, threads, runs, and streaming are all covered.

04 / conversational UX

Clarify, then act

The builder asks before generating when the request is ambiguous — "web search?" ⇒ DuckDuckGo or Tavily — then writes the definition in one shot.

05 / dev-loop tooling

/chat + /code modes

Built-in LLM chat with live LangChain docs lookup, plus a code mode with file / git / shell / subagent tooling. Works with Anthropic, OpenAI-style, Bedrock, or the Claude SDK.

06 / stream modes

Inspect any run

Every /run and /test picks a stream_mode: messages-tuple for token streams, values / updates for state snapshots, and events for the full v2 event stream.
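The choice above amounts to a small lookup. The stream-mode names are LangGraph's; the helper itself is hypothetical, not langosh's actual code:

```python
# Map what you want to inspect onto a LangGraph stream_mode.
STREAM_MODES = {
    "tokens": "messages-tuple",  # token-by-token LLM output
    "state": "values",           # full state snapshot after each step
    "deltas": "updates",         # per-node state updates only
    "debug": "events",           # the full v2 event stream
}


def pick_stream_mode(intent: str) -> str:
    """Return the stream_mode for an inspection intent, or raise."""
    try:
        return STREAM_MODES[intent]
    except KeyError:
        raise ValueError(
            f"unknown intent {intent!r}; choose from {sorted(STREAM_MODES)}"
        )


print(pick_stream_mode("tokens"))  # messages-tuple
```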

03 · reference

Dive deeper
