A CLI for building and running LangGraph agents.
Scaffold a repo, ask an LLM to write you a graph, test it locally with
`langgraph dev`, and deploy to LangSmith.
```shell
# scaffold a new agents repo
$ mkdir my-agents && cd my-agents
$ langosh
> /initrepo
? Project name: my-agents
? Default model: anthropic:claude-sonnet-4-5-20250929
✓ langgraph.json, pyproject.toml, .env, graphs/example — compiled

# install deps + boot the dev server
$ uv sync && uv run langgraph dev
ready · http://localhost:2024 (Studio UI in the browser)

# in a second terminal — talk to the builder
$ langosh
> /graphs /create
? Graph name: news-summarizer
? Build instructions: fetch RSS feeds, summarize with the LLM, return key points
builder · a couple of quick questions before I generate this…
✓ graphs/news_summarizer/definition.json + __init__.py created

# point langosh at the dev server, test it
> /server /add dev http://localhost:2024
> /exec /select news-summarizer /test
↳ tavily_search(query="today's AI headlines")
↳ done · streaming tokens…
```
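For orientation, the scaffolded `langgraph.json` might look roughly like this. This is a sketch: the exact contents `/initrepo` writes are an assumption, though `dependencies`, `graphs`, and `env` are the fields `langgraph dev` reads.

```json
{
  "dependencies": ["."],
  "graphs": {
    "example": "./graphs/example/__init__.py:graph"
  },
  "env": ".env"
}
```

The `graphs` map ties a graph name to the `path:variable` of a compiled graph object, which is how the dev server and Studio UI find it.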
The LLM edits a structured `definition.json`; a compiler in
`graphs/codegen.py` emits the Python module. Because the model never
writes Python directly, there are no syntax errors mid-edit, and diffs
show graph semantics rather than code churn.
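A sketch of what such a definition could look like for the news-summarizer graph above. The schema here is entirely hypothetical (field names like `nodes`, `edges`, and `state` are assumptions, not langosh's actual format); it only illustrates the idea that the LLM edits declarative structure and the compiler emits code.

```json
{
  "name": "news_summarizer",
  "state": { "feeds": "list[str]", "summary": "str" },
  "nodes": [
    { "id": "fetch", "type": "tool", "tool": "tavily_search" },
    { "id": "summarize", "type": "llm", "prompt": "Summarize the articles into key points." }
  ],
  "edges": [
    ["__start__", "fetch"],
    ["fetch", "summarize"],
    ["summarize", "__end__"]
  ]
}
```

A diff on a file like this reads as "added a node, rerouted an edge", which is the point of keeping the model out of the Python.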
Walks `langchain_community.tools` and
`langchain_experimental.tools`; the builder picks
from a live catalog, and the compiled graph carries static
imports. No runtime discovery, no MCP client at boot.
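A minimal sketch of that walk, assuming the catalog is just a listing of a package's submodules. It is demonstrated on a stdlib package here, since `langchain_community` may not be installed; in langosh the argument would be `langchain_community.tools`.

```python
import importlib
import pkgutil


def tool_catalog(package_name: str) -> list[str]:
    """List the submodules of a package -- roughly how a tool catalog
    can be built once at build time by walking a tools package, rather
    than discovering tools at runtime."""
    pkg = importlib.import_module(package_name)
    return sorted(
        mod.name for mod in pkgutil.iter_modules(pkg.__path__, pkg.__name__ + ".")
    )


# Demonstrated on the stdlib `email` package so the sketch runs anywhere.
print(tool_catalog("email"))
```

Because the catalog is computed before codegen, the compiler can emit a plain `from langchain_community.tools import ...` line for each tool the builder picked.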
Talks to any compatible deployment:
`langgraph dev`, `langgraph up`, or a
LangSmith-hosted server. Assistants, threads, runs, and
streaming are all covered.
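A sketch of what talking to such a deployment looks like at the wire level, assuming the LangGraph Platform REST API (`POST /threads/{thread_id}/runs/stream`). The snippet only builds the request object, so it runs without a live server; a real client (e.g. the `langgraph_sdk` package) would send it and consume the SSE stream.

```python
import json
import urllib.request

BASE = "http://localhost:2024"  # default address of `langgraph dev`


def stream_run(base: str, thread_id: str, assistant_id: str,
               user_text: str, stream_mode: str = "messages-tuple") -> urllib.request.Request:
    """Build the POST /threads/{thread_id}/runs/stream request used to
    start a run and stream its output from a LangGraph server."""
    body = {
        "assistant_id": assistant_id,
        "input": {"messages": [{"role": "user", "content": user_text}]},
        "stream_mode": stream_mode,
    }
    return urllib.request.Request(
        f"{base}/threads/{thread_id}/runs/stream",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = stream_run(BASE, "thread-123", "news-summarizer", "today's AI headlines")
print(req.full_url)
```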
The builder asks before generating when the request is ambiguous — "web search?" ⇒ DuckDuckGo or Tavily — then writes the definition in one shot.
**/chat + /code modes.** Built-in LLM chat with live LangChain docs lookup, plus a code mode with file / git / shell / subagent tooling. Works with Anthropic, OpenAI-style, Bedrock, or the Claude SDK.
Every `/run` and `/test` picks a `stream_mode`:

- `messages-tuple` for token streams
- `values` / `updates` for state snapshots
- `events` for the full v2 event stream