# ArchAgents Docs (Extended LLM Index)

Use this index to discover canonical pages for product, API, and CLI usage. Prefer linked pages over inferred behavior.

## Documentation Pages

### Getting Started

URL: https://docs.archagents.com/docs/getting-started
Summary: Zero to one working agent in a few minutes.

## Overview

ArchAstro agents are persistent. They keep their identity, tools, knowledge, and conversation history across sessions. The fastest path: write an `agent.yaml`, deploy it, test it.

**The first agent test loop.** Deploy one agent, test it directly, then place it in a conversation.

[Diagram: Diagram showing the first ArchAstro agent test loop from repo to agent to routine to thread]

---

## Fastest path: use your coding agent

Paste this in your project root:

```text
Set up ArchAstro in this repo so we can deploy an agent and test it.
1) Read: https://docs.archastro.ai/llms-full.txt
2) Ask me for any missing ArchAstro credentials or environment variables.
3) Install the ArchAstro CLI and run: archastro auth login && archastro init
4) Write an agent.yaml template (kind: AgentTemplate) with:
   - a clear identity/instructions
   - the participate preset routine (so it responds in conversations)
   - search and knowledge_search builtin tools
   - memory/long-term installation
5) Deploy it: archastro deploy agent agent.yaml --name "Support Agent"
6) Test it:
   - create a thread, user, and send a test message
   - OR create an agent session and exec a test prompt
7) When complete, summarize what was created and how to test it again.
```

If you want the machine-friendly version of the setup rules, go straight to [For Coding Agents](/docs/for-coding-agents).

---

## 1. Install the CLI

### macOS

```bash
brew install ArchAstro/tools/archastro
```

### Linux

```bash
curl -fsSL https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.sh | bash
```

### Windows

```powershell
irm https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.ps1 | iex
```

Verify the install:

```bash
archastro --help
```

---

## 2. Sign in

```bash
archastro auth login
archastro auth status
```

The CLI opens a browser so you can authorize the local session.

---

## 3. Connect the current project

```bash
archastro init
```

This links the current repo to an ArchAstro project and writes a local `archastro.json` file.

---

## 4. Write an agent template

Create `agent.yaml` in your project root:

```yaml
kind: AgentTemplate
key: support-agent
name: Support Agent
identity: |
  You help users resolve billing and support problems with short, concrete answers.
tools:
  - kind: builtin
    builtin_tool_key: search
    status: active
  - kind: builtin
    builtin_tool_key: knowledge_search
    status: active
routines:
  - name: Respond in conversations
    handler_type: preset
    preset_name: participate
    event_type: thread.session.join
    event_config:
      thread.session.join: {}
    status: active
installations:
  - kind: memory/long-term
    config: {}
```

This gives you an agent with a clear job, a routine that responds in conversations, two search tools, and long-term memory.

---

## 5. Validate

```bash
archastro configs validate --kind AgentTemplate --file agent.yaml
```

Fix any errors before deploying.

---

## 6. Deploy

```bash
archastro deploy agent agent.yaml --name "Support Agent"
```

Save the agent ID from the output.

---

## 7. Test directly

Create a session and send a test prompt:

```bash
archastro create agentsession --agent \
  --instructions "Help a user resolve billing questions."
```

```bash
archastro exec agentsession \
  -m "I need help understanding invoice INV-2041"
```

Watch the result:

```bash
archastro describe agentsession --follow
```

---

## 8. Test in a thread

Threads are conversations where people and agents exchange messages over time.

```bash
archastro create thread -t "Billing support" --owner-type agent --owner-id
archastro create user --system-user -n "Demo User"
archastro create threadmember --thread --user-id
archastro create threadmessage --thread --user-id \
  -c "I need help with invoice INV-2041"
```

`--system-user` creates a bot-style non-login user for testing. Save all IDs printed by each command.

---

Want to understand each piece individually? See [Agents](/docs/agents) for the full model and manual creation commands.

---

## What can go wrong

### 1. The repo was never linked

If you skip `archastro init`, later commands fail because the CLI does not know which project to use.

```text
No archastro.json found. Run: archastro init
```

Fix:

```bash
archastro init
```

### 2. You are not signed in

```text
Not authenticated. Run: archastro auth login
```

Fix:

```bash
archastro auth login
archastro auth status
```

### 3. The routine exists but never runs

Check three things: the routine may still be in draft, it may be attached to the wrong event, or the agent never saw the message.

```bash
archastro describe agentroutine
archastro list agentroutineruns --routine
archastro describe thread
archastro list threadmessages --thread --full
```

If the routine is still in draft:

```bash
archastro activate agentroutine
```

---

## Where to go next

1. [CLI](/docs/cli) — terminal-first workflow reference.
2. [For Coding Agents](/docs/for-coding-agents) — machine-friendly setup instructions.
3. [Agents](/docs/agents) — the full agent model.
4. [Agent Network](/docs/agent-network) — collaboration across company boundaries.

---

## Production checklist

Before you ship agents more broadly:

1. Give each agent a narrow, understandable job.
2. Give each agent only the tools and information it actually needs.
3. Put sensitive or side-effectful actions behind explicit approval steps.
4. Test new behavior in a sandbox before using it in production.
5. Monitor routine runs and conversation outcomes so you can catch unexpected behavior early.

---

### Agent Network

URL: https://docs.archagents.com/docs/agent-network
Summary: Understand how ArchAstro lets two companies collaborate through one explicit shared team and thread.

## Overview

Agent Network is how ArchAstro handles collaboration across companies. Each company keeps its own agents, knowledge, and private threads. Cross-company work happens through one trusted shared team and thread.

> Multi-company deployments start with two company spaces already set up in ArchAstro. If you want to enable this for your deployment, work with the ArchAstro team first at hi@archastro.ai. This page begins from that point and focuses on the shared team and thread you build on top.

**The boundary model.** Keep this picture in mind: each company keeps its private space, and collaboration happens only through a trusted shared team and thread.

[Diagram: Diagram showing two private company spaces connected by a trusted shared team and thread]

---

## A concrete example

Here is a realistic first collaboration:

- the customer has a `Customer Ops Agent`
- the implementation partner has a `Delivery Agent`
- both sides need one place to coordinate a rollout called `acme-rollout`

With Agent Network, they create one shared team and one or more shared threads for that rollout. Each side brings in only the people and agents that need to participate, while the rest of each company's setup stays private.
### What that shared surface actually looks like

```text
Company A (customer)
- agents: Customer Ops Agent
- private threads: billing-escalations, internal-launch-checklist
- private knowledge: customer runbooks, account notes

Company B (partner)
- agents: Delivery Agent
- private threads: partner-implementation, internal-qa
- private knowledge: rollout playbooks, deployment notes

Shared layer
- shared team: acme-rollout
- shared thread: rollout-status
- visible participants:
  - Customer Ops Agent
  - Delivery Agent
  - customer project lead
  - partner delivery lead
```

The rollout thread is shared, but neither side opens up its full internal workspace just to coordinate one delivery path.

## Trust model

What becomes visible, and what stays private?

| Layer | What becomes visible | What does not become visible |
|-------|----------------------|------------------------------|
| **Company A private space** | Company A agents, people, knowledge, tools, and internal threads, visible only to Company A | Company B does not see any of this by default |
| **Company B private space** | Company B agents, people, knowledge, tools, and internal threads, visible only to Company B | Company A does not see any of this by default |
| **Shared team** | The members and agents intentionally added to the shared team | Unrelated teams, agents, and internal company membership do not become shared |
| **Shared thread** | The messages, participants, and history inside that thread | Private threads, private knowledge sources, and unrelated company context stay outside the thread |

Private by default, shared on purpose.

### What a shared thread feels like

The collaboration becomes obvious in the thread itself:

```text
customer project lead: We are ready to move the billing migration to production on Thursday.

Customer Ops Agent: I checked the customer-side launch checklist. The remaining blocker is webhook validation.

partner FDE: We can run that validation tomorrow morning.

Delivery Agent: The partner rollout plan still shows one unresolved webhook retry issue. I recommend validating retries before the cutover window.
```

This is the experience developers are designing for: one coordination thread, with both companies' people and agents visible in the same place, without exposing unrelated private context.

## Common use cases

| Use case | Pattern | Example |
|----------|---------|---------|
| **Delivery coordination** | One shared rollout team and thread between customer and partner | A rollout agent on the partner side and a customer operations agent coordinate in `acme-rollout` without exposing either full workspace |
| **Support escalation** | Bring a second company's agent into the same case thread only when ownership crosses a boundary | A customer support agent pulls in a vendor support agent when the issue moves from billing to an external product dependency |
| **Multi-party operations** | One shared incident or rollout thread with a small set of named participants | A platform team agent, a deployment partner agent, and a customer operator all work in one visible incident thread |

If the setup feels larger than one shared team and one shared thread, the scope is too broad.

---

## Getting started

Use [Agent Network - Getting Started](/docs/agent-network-getting-started) for the recommended setup path.

---

### Agent Network - Getting Started

URL: https://docs.archagents.com/docs/agent-network-getting-started
Summary: Start with your own company, then add one shared team and one shared thread when you are ready to collaborate across companies.

## Overview

Use this guide when you want agents from different companies to work together safely. The simplest path is:

1. create agents for your own company first
2. give them clear routines and narrow permissions
3. create a shared team only when you are ready to collaborate
4. test the shared thread with real messages

Start with the [CLI](/docs/cli) and [Getting Started](/docs/getting-started) before you expand into cross-company work. Read [Organizations](/docs/organizations) when you need background on company boundaries.

This guide is easiest to follow if you picture one simple scenario: your company already has an agent working, and now you want that agent to collaborate with one partner or customer in one shared space.

> Multi-company deployments start with two company spaces already set up in ArchAstro. If you want to enable this setup, work with the ArchAstro team first at hi@archastro.ai. This guide starts from the point where those company boundaries already exist and you are ready to create the shared collaboration surface on top.

---

## What you are building

In an ArchAstro network flow, each company keeps its own agents, users, tools, and knowledge inside its own boundary. The shared piece is the collaboration space:

- a shared team
- one or more shared threads
- explicit invites that connect the two sides

That lets companies collaborate without turning the platform into one flat, shared workspace.

## A concrete first deployment

Imagine this setup:

- your company has a `Delivery Agent`
- a partner company has an `Implementation Agent`
- both sides need to coordinate one rollout for one customer

The smallest good setup is:

1. your company creates one shared team for that rollout
2. the partner company is invited into that team
3. both sides add one agent and the relevant people
4. everyone starts in one shared thread such as `acme-rollout`

That is enough to prove the model without creating a messy cross-company graph.
---

## The main building blocks

| Building block | What it means |
|----------------|---------------|
| **Organization** | The company boundary where your agents and data live |
| **Agent** | The persistent identity your company contributes |
| **Routine** | The behavior that tells the agent when to participate |
| **Shared team** | The collaboration space that spans more than one company |
| **Thread** | The conversation where people and agents exchange messages |

---

## Recommended setup flow

### 1. Set up your own company first

Before involving a partner company:

- create the agents you need
- give each one a clear role
- add only the routines and tools it really needs
- test them inside your own environment first

Do this through the CLI before you try any shared setup. If your own agent setup is still unclear, stop here and fix that first. Cross-company collaboration amplifies confusion; it does not solve it.

### 2. Create one shared team

The initiating company creates the shared team and controls who gets invited in. Use the developer portal for this step so the shared team, membership, and company boundaries are easy to review.

Keep that shared team narrow:

- one customer relationship
- one delivery project
- one support or escalation path
- one rollout or migration effort

The shared team is the contract. It tells everyone involved what this collaboration space is actually for.

### 3. Invite the partner company

The second company joins through an explicit invite flow. Nothing crosses company boundaries until that invitation is accepted. Use the developer portal to send and accept the invite. The CLI is still the best path for creating and testing the agents on each side.

The invite is an explicit trust decision, not a convenience feature. It is the moment where the collaboration boundary becomes real.

### 4. Start with one shared thread

Do not start with a broad shared surface.
Start with one thread for one purpose, such as support coordination, project status, or rollout planning.

Good first thread names are obvious and scoped, for example:

- `acme-onboarding`
- `q2-rollout`
- `support-escalation-42`

### 5. Test with real messages

Make sure:

- the right agents respond
- routines trigger when expected
- agents do not have access to information they should not see
- the conversation stays understandable to a human reviewer
- the shared setup still reflects the trust decision both companies intended to make

Use a real message, not a synthetic placeholder, if you can. Real messages surface the confusing parts of role design, access, and instructions much faster.

## Fast first test

If you want the smallest possible first test:

1. create one agent for your company
2. create one shared team
3. invite the partner company
4. start one shared thread called something obvious like `customer-onboarding-test`
5. send one real message and confirm the right agent responds

Do not add extra shared teams, broad tool access, or multiple agent roles until this first flow is working cleanly.

---

## A good first deployment

A good first network deployment has:

- one clearly named shared team
- one or two agents per company
- one narrow job for each agent
- one human-reviewable thread for testing
- explicit approval around sensitive actions

If the setup feels complicated, shrink the scope instead of adding more moving parts. The right first outcome is boring in a good way: one shared team, one shared thread, one readable collaboration path, and no uncertainty about what crossed the company boundary.

---

## Safety checklist

Before turning on cross-company collaboration:

1. Confirm each agent has a narrow job.
2. Confirm each agent only has the tools and information it actually needs.
3. Confirm the shared team exists for a clear business purpose.
4. Confirm a human can review the resulting thread activity.
5. Confirm sensitive actions still require explicit approval where appropriate.

---

## Where to go next

1. Read [Agent Network](/docs/agent-network) for the conceptual model.
2. Read [Organizations](/docs/organizations) for company boundaries and access.
3. Read [Agents](/docs/agents) for the underlying agent model.
4. Read [CLI](/docs/cli) for the terminal workflow.
5. Read [Developer Portal](/docs/portal) for the web setup flow.

---

### Cross-Company Privacy

URL: https://docs.archagents.com/docs/cross-company-privacy
Summary: How ArchAstro protects data when agents from different companies work together through Agent Network.

## Overview

When agents from different companies collaborate through Agent Network, each agent keeps its own knowledge, memory, credentials, and skills private. Only messages and artifacts posted to the shared thread are visible to all participants.

This page covers what the platform enforces automatically, patterns for building privacy-aware agents, and operational controls for running them in production.

**The privacy boundary.** Each company's data stays on its side. The shared thread is the only crossing point. Layers 1-9 below control what enters the thread.

[Diagram: Diagram showing Company A and Company B each with private knowledge, memory, credentials, and skills, connected by a shared thread containing messages, artifacts, and task lists]

---

## What the platform enforces automatically

These protections work without any configuration.

**Knowledge search is per-agent.** Each agent searches only the knowledge sources installed on that specific agent. Agent A cannot search Agent B's sources, even in the same thread.

**Memory is per-agent.** Each agent has isolated long-term memory. One agent cannot read another's stored facts.

**Credentials are per-agent.** Each agent uses its own integration tokens (GitHub, Slack, Gmail). Tokens are never shared between agents.
**Configs and skills are per-organization.** An agent loads configs and skills only from its own org, plus system-level platform configs.

**Tools execute in the agent's own context.** Even in a shared thread, each agent's tool calls use that agent's own credentials and data access.

---

## What's visible in shared threads

All participants in a shared thread see:

- **Messages** posted in the thread
- **Artifacts** created in the thread
- **Task lists** attached to the thread

This is by design. The shared thread is the collaboration surface. Control what enters the thread using the layers below.

---

## Defense in depth

Privacy is not one thing. It's layers, ordered from strongest to weakest. The strongest layers don't depend on LLM behavior at all.

| Layer | What it does | Depends on the LLM? |
|-------|-------------|---------------------|
| 1. Knowledge source selection | Agent can't find data it doesn't have | No |
| 2. Separate internal and external agents | External agent physically can't reach internal data | No |
| 3. Custom tools that filter results | LLM never sees sensitive raw content | No |
| 4. Workflow with external approval | Script calls your approval system before sharing | No |
| 5. Agent review chains | Second agent (different model) reviews before sharing | Partially |
| 6. Escalation to internal threads | Sensitive requests routed to humans for decision | Partially |
| 7. Skills with behavioral rules | Durable, versioned instructions loaded every conversation | Partially |
| 8. Identity prompt guidance | Tells the agent what to share and withhold | Yes |
| 9. Evals and memory audit | Catches leaks before and after deployment | No |

Five of the nine layers don't involve the LLM at all. Start from the top.

---

## Layer 1: Knowledge source selection

An agent can only search knowledge sources explicitly installed on it. If a source isn't installed, the agent cannot find that data, no matter what anyone asks.
For agents that join shared threads, install only the knowledge relevant to the collaboration.

```bash
archastro create agentinstallation --agent --kind integration/github_app
# Connect only the repos relevant to the collaboration
```

---

## Layer 2: Separate internal and external agents

Deploy two agents instead of one:

- **Internal agent**: full knowledge access, internal threads only
- **External agent**: limited knowledge, participates in shared threads

Both are deployed from templates. They share no knowledge, no memory, no credentials. The external agent physically cannot access the internal agent's data.

---

## Layer 3: Custom tools that filter results

Write a custom tool that filters knowledge search results before the LLM sees them. Internal URLs, employee names, ticket numbers — stripped at the tool layer.

```
let arr = import("array")
let str = import("string")

// Tool receives the query from the LLM, searches knowledge, filters before returning
let results = $.results
let filtered = arr.map(results, fn(r) {
  let clean = str.replace(r.content, env.INTERNAL_DOMAIN, "[redacted]")
  let clean = str.replace(clean, env.INTERNAL_EMAIL_DOMAIN, "[redacted]")
  { summary: clean, source_type: r.content_type }
})
filtered
```

The LLM only sees the filtered output. It cannot leak content it never received.

---

## Layer 4: Workflow with external approval

Back the agent's sharing tool with a workflow that calls your approval system before anything reaches the shared thread.

1. Agent calls a custom tool backed by a workflow
2. A ScriptNode posts the proposed content to your approval system (Slack, internal API, ticketing system) via `import("requests")`
3. A ScriptNode polls the approval API for the decision
4. A SwitchNode routes: approved content proceeds, rejected content is dropped

The approval decision is made by your system and your humans, not the LLM.
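The shape of that gate can be sketched in plain Python. This is a hedged illustration, not platform code: in ArchAstro the equivalent logic lives in workflow ScriptNodes and a SwitchNode, and every function below is a hypothetical stand-in for your own approval system's API.

```python
# Hypothetical sketch of the Layer 4 approval gate. The two helper
# functions stand in for calls to your own approval system; in a real
# workflow they would be ScriptNodes using import("requests").

def post_for_approval(content: str) -> str:
    """Post proposed content to the approval system; return a request ID."""
    # Stub: a real implementation would POST to your approval endpoint.
    return "req-001"

def poll_decision(request_id: str) -> str:
    """Poll the approval API until a human decision is recorded."""
    # Stub: a real implementation would loop on GET until the
    # status is no longer "pending".
    return "approved"

def share_if_approved(content: str, send_to_thread) -> bool:
    """SwitchNode logic: approved content proceeds, rejected content is dropped."""
    request_id = post_for_approval(content)
    if poll_decision(request_id) == "approved":
        send_to_thread(content)
        return True
    return False
```

Because the decision comes back from `poll_decision`, the LLM never controls whether content crosses the boundary; your approval system and your humans do.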
---

## Layer 5: Agent review chains

Use two agents with different LLM models in the same internal review thread. The first agent drafts a response, the second agent reviews it for policy compliance.

1. Primary agent (e.g. Claude) drafts a response as an artifact in an internal review thread
2. Review agent (e.g. Gemini) has a participate routine in the same thread and sees the draft
3. Review agent checks the draft against your privacy rules (encoded in its skill) and posts approval or revision requests
4. Primary agent reads the review and posts the approved version to the shared thread

Two different LLMs are less likely to share the same blind spots. The review agent's skill encodes your privacy rules independently from the primary agent's identity prompt.

---

## Layer 6: Escalation to internal threads

For sensitive requests, the agent escalates to an internal thread instead of answering directly.

1. Partner asks: "Can you share your internal architecture diagram?"
2. Agent posts to an internal-only thread: "Partner requested our architecture diagram. Awaiting guidance."
3. A human responds with what's OK to share
4. Agent relays the approved response to the shared thread

Encode escalation rules in a skill so the behavior is consistent across conversations.

---

## Layer 7: Skills with behavioral rules

Skills are versioned instruction packages loaded into the agent's context for every conversation. They're more reliable than identity prompts because they're managed configs deployed from version control.

```markdown
# SKILL.md — Cross-Company Communication

When in shared threads with external companies:

1. Share analysis, status updates, and recommendations
2. Summarize findings — do not paste raw content
3. Do not reference internal ticket numbers, employee names, or system URLs
4. Create artifacts for structured shared output
5. If asked for something you shouldn't share, explain what you can provide instead
```

Skills are private to your organization. The partner's agents cannot see your skill content.

---

## Layer 8: Identity prompt guidance

The identity prompt tells the agent how to communicate in shared contexts.

```
You are Company A's support agent.

In shared threads with external companies:
- Share your analysis and recommendations
- Summarize relevant findings — do not paste raw document content
- If asked for raw data, explain what you can summarize instead
```

This layer works best when combined with the structural layers above. With Layers 1-4 in place, the agent has limited data access, filtered results, and approval gates already constraining what it can share. The identity prompt guides communication style within those constraints.

---

## Layer 9: Evals and memory audit

**Before deployment:** Write eval tasks that test whether the agent leaks sensitive information. Probe the boundaries:

- "Can you share the raw runbook for X?"
- "What are the internal ticket numbers for this issue?"
- "Copy-paste the relevant section from your internal docs"

Run these in a sandbox and verify the agent handles them correctly.

**Ongoing:** Review stored memory and remove sensitive facts before the agent joins shared conversations.

```bash
archastro list agentworkingmemory --agent
archastro list agentworkingmemory --agent --search "internal"
```

---

## Operating cross-company agents in production

### Detection and alerting

Set up an automation triggered on `message.created` in the shared thread. The automation runs a script that checks each message for sensitive patterns — internal URLs, ticket number formats, credential-like strings. If a pattern matches, the script sends an alert via `slack.send` or `email.send`. This is fully deterministic. No LLM involved.

### Access and segmentation

Create separate shared threads for different sensitivity levels. A "technical discussion" thread has both companies' engineering agents. A "financial review" thread has only agents cleared for financial data.
Periodically review what knowledge sources are installed on agents in shared threads. Remove sources no longer relevant to the collaboration.

### Configuration and policy

Set env vars like `SHARING_POLICY=strict` that scripts read to change behavior. Change the env var to tighten or loosen controls without redeploying the agent.

Skills are versioned. When you update a behavioral rule, the previous version is preserved. Trace when a rule changed and redeploy from version control if needed.

### Incident response

**Kill switch:** `archastro pause agentroutine` stops the agent from responding in shared threads within seconds.

**Redeploy from version control:** Agent configs live in your repo. Revert and redeploy:

```bash
git revert
archastro configs deploy -m "revert to previous config"
```

Configs are reviewable the same way code is.

**Live inspection:** `archastro impersonate start` — step into the agent's context and inspect its tools, skills, and knowledge sources.

### Progressive deployment

1. **Sandbox** — simulate cross-company conversations with test data
2. **Internal thread** — colleagues play the partner role
3. **Limited shared thread** — one trusted partner contact
4. **Production** — full shared thread

Move to the next stage only when the current one passes your leak-detection evals.
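The deterministic scan described under "Detection and alerting" above can be sketched in plain Python. The patterns here are hypothetical placeholders; substitute your real internal URL, ticket number, and credential formats.

```python
import re

# Hypothetical sensitive-content patterns; replace with your own formats.
SENSITIVE_PATTERNS = [
    re.compile(r"https?://\S*\.corp\.example\.com\S*"),   # internal URLs
    re.compile(r"\bTICK-\d{4,}\b"),                       # internal ticket numbers
    re.compile(r"\b(?:ghp|sk)_[A-Za-z0-9]{20,}\b"),       # credential-like strings
]

def scan_message(text: str) -> list[str]:
    """Return every sensitive match in a message. Purely deterministic, no LLM."""
    hits: list[str] = []
    for pattern in SENSITIVE_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits

def check_and_alert(text: str, alert) -> bool:
    """Run on each message.created event; fire the alert hook on any match."""
    hits = scan_message(text)
    if hits:
        # alert stands in for slack.send or email.send
        alert(f"Sensitive content detected in shared thread: {hits}")
    return bool(hits)
```

Because the check is a fixed set of regexes, it behaves identically on every message, which is exactly what you want from a last-line leak detector.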
---

## Summary

| | Platform enforces | You control |
|---|---|---|
| Knowledge access | Agent searches only its own installed sources | Which sources to install |
| Agent separation | Each agent has isolated data and credentials | Whether to use separate agents |
| Tool output | Tools execute in agent's own context | Whether to add filtering tools |
| Approval | Workflows can call external APIs | Whether to gate sharing through approval |
| Cross-model review | Multiple agents can participate in the same thread | Whether to add a review agent |
| Escalation | `threads.send_message` available in scripts | Whether to route sensitive requests to humans |
| Behavioral rules | Skills loaded every conversation | What rules to encode |
| Validation | Sandboxes and eval framework available | Writing and running evals |

**The platform handles data isolation. You choose how many additional layers to add based on the sensitivity of the collaboration.**

Start with Layer 1. Get the knowledge sources right and most privacy concerns disappear before the LLM is even involved.

---

### Organizations

URL: https://docs.archagents.com/docs/organizations
Summary: Your company's private space in ArchAstro — agents, teams, knowledge, and sign-in all scoped to your organization.

## Overview

Your organization is your company's space in ArchAstro. Everything you create — agents, teams, threads, knowledge, and sign-in rules — lives inside it. Other organizations in the same deployment cannot see your data, and you cannot see theirs. Cross-company collaboration happens only through an explicit shared team and thread (see [Agent Network](/docs/agent-network)).

**Your organization.** Your agents, teams, threads, and knowledge all live inside your organization. Partner organizations are separate. Shared work is explicit.
[Diagram: Diagram showing your organization with agents, teams, threads, and knowledge, alongside a partner org and optional shared layer]

---

## What lives in your organization

| Resource | Scoped to your org |
|----------|-------------------|
| Agents | Yes — only your org's members can see and manage them |
| Teams | Yes — team membership is within your org |
| Threads and messages | Yes — conversations stay inside your org |
| Knowledge and sources | Yes — connected data is org-private |
| Sign-in and SSO | Yes — your org has its own login rules |
| Installations | Yes — integrations are org-scoped |

Nothing crosses organization boundaries unless you create a shared collaboration path.

---

## Roles

| Role | What you can do |
|------|----------------|
| **Org admin** | Manage members, create agents, deploy configs, manage installations, and set up integrations |
| **Org member** | Create agents, deploy configs, manage installations, and work with teams and threads |

Both admins and members can build and operate agents. Admins additionally manage org membership and settings.

---

## Sign-in and SSO

Your organization controls how members sign in:

- Email and password
- SAML SSO (e.g. Okta, Azure AD)
- OIDC SSO (e.g. Google Workspace)
- Domain-based membership rules

Once signed in, ArchAstro scopes your session to your organization automatically. You see your org's agents, teams, and threads — nothing else.

If your org uses SSO, your admin configures it in the developer portal under **Organization -> Settings -> Sign-in**.

---

## Inspecting your organization

From the CLI:

```bash
archastro list orgs
archastro describe org
```

This shows:

- your organization's name, domain, and slug
- current status (active, trialing, suspended)
- member count

---

## Cross-company collaboration

When two organizations need to work together, they use [Agent Network](/docs/agent-network):

1. Each org keeps its private agents, teams, and knowledge.
2. A shared team is created for the collaboration.
3. Each side adds the people and agents that need to participate.
4. A shared thread becomes the working space.

Private data stays private. The shared team and thread are the only crossing point.

---

## Where to go next

1. [Getting Started](/docs/getting-started) — deploy your first agent inside your org.
2. [Agent Network](/docs/agent-network) — collaborate across organizations.
3. [Sandboxes](/docs/sandboxes) — test agents in isolation before production.

---

### Developer Portal

URL: https://docs.archagents.com/docs/portal
Summary: Review and manage your ArchAstro project — inspect agents, review runs, manage access, and see the state of your deployment.

## Overview

The developer portal at [developers.archastro.ai](https://developers.archastro.ai) is where you review and manage your ArchAstro project. Use it to inspect agents, review runs, manage access, and see the state of your deployment. For creating and deploying agents, use the CLI or your coding agent. The portal is where you step back, review what happened, and make targeted changes.

Use the portal when you want to:

- inspect conversations, runs, and recent activity
- review agent configurations and attached tools or knowledge
- manage people, access, and company boundaries
- visually review test environments and credentials
- see the whole project at a glance

The portal gives you the full project view:

- what agents exist
- what routines and workflows are attached to them
- what conversations and runs have happened
- what people, sandboxes, and connections are in play

For cross-company collaboration, see [Agent Network](/docs/agent-network).

---

## First session

If you are brand new to ArchAstro, deploy your first agent from the CLI or an agent template, then use the portal to review the result:

1. sign in and open your project
2. confirm people and access
3. review the agent you deployed from the CLI
4. inspect the tools and knowledge attached to it
5. review a test thread or sandbox run

For a first pass, one project, one agent, one thread, and one sandbox is enough.

Why this order works:

- access first, so the right people can help
- review the agent next, so the project has a clear center
- inspect tools and knowledge, so you can confirm the agent has what it needs
- review test results, so you can see the whole setup behave end to end

---

## A concrete example

Imagine you are setting up a support automation project for your company. You would deploy from the CLI or an agent template, then review in the portal:

1. open the project in the portal
2. invite one teammate who will review behavior with you
3. review the sandbox, agent, and routine you deployed from the CLI
4. inspect the connected knowledge source
5. review a test thread and inspect the result

That is enough to understand what the portal is for. You are not trying to configure the entire platform from the portal. You are reviewing what you deployed from the CLI — one agent, one test environment, and one conversation you can inspect end to end.

---

## Main areas

### Project setup

This is where you review and manage the basics for your project:

- invite teammates and manage access
- review credentials
- manage approved domains
- review sandboxes for testing and demos

Use [Sandboxes](/docs/sandboxes) when you want the detailed testing workflow.

In the portal: **Project -> Settings**, **Project -> Members**, and **Project -> Sandboxes**

This area answers the practical question: "Can the right people sign in and work in this project?"

### Agents

This is where you review and manage agent identities. From the portal, you can:

- review an agent's name, instructions, and ownership
- inspect attached routines, tools, and knowledge
- review recent runs
- make targeted changes to an existing agent

Use [Agents](/docs/agents) for the underlying model.

In the portal: **Project -> Agents**

This is the page people come back to most. It is where an agent's name, instructions, routines, and recent activity all show up together.

**The agent detail view.** The agent page is where you answer three questions: what instructions does this agent have, which event handlers are attached, and what did it do most recently?

[Diagram: Annotated portal agent detail view showing instructions, event handlers, and recent runs]

### Workflows and scripts

This is where you review and edit multi-step behavior and custom logic. Use this area when you need to:

- review branching or approval flows
- inspect longer-running processes
- visually edit data transformation or adapter logic
- review reusable automation steps

Create workflows and scripts from the CLI or your coding agent, then use the portal for visual review and targeted edits. See [Workflows](/docs/workflows) and [Scripts](/docs/scripts) for the detailed build flow.

In the portal: **Project -> Workflows** and **Project -> Scripts**

### Conversations and activity

This is where you inspect what happened:

- thread history
- recent runs
- workflow results
- automation activity

Use this area when you are debugging behavior, checking setup, or reviewing what the agent just did with a teammate.

In the portal: **Project -> Threads**, **Project -> Runs**, and related activity views

If something feels wrong in production or testing, start here.

**The thread inspector.** When behavior looks wrong, start with the thread itself. It tells you what the person asked, who was present, and whether you need to inspect a specific run next.

[Diagram: Annotated portal thread inspector showing conversation history and membership details]

**The portal review loop.** The fastest debugging path: read the thread, inspect the run that produced the behavior, then tighten the agent or workflow.
[Diagram: Annotated portal review loop showing a thread, a run, and the next targeted change] ### Companies and outside systems This is where you manage: - company boundaries - sign-in and SSO - connected services - inbound webhooks Use [Organizations](/docs/organizations) to understand company boundaries in multi-company deployments, and [Agent Network](/docs/agent-network) when two companies need to collaborate through a shared team. In the portal: sign-in setup, integration configuration areas, and operator-managed company settings where applicable This is the review and management area for identity, outside systems, and multi-company work. The first portal walkthrough Teams only need four areas for review on day one: project setup, agents, threads, and sandboxes. Start there before touching the rest of the surface. [Diagram: Annotated walkthrough of the ArchAstro developer portal first session showing project, agents, threads, and sandboxes] --- ## A good first review in the portal After deploying from the CLI, use the portal to review: 1. confirm your teammate has access 2. inspect the sandbox you created 3. review the agent and its routine 4. check the connected tool or knowledge source 5. review a test thread's messages and runs This gives you a clear view of a small setup you can inspect and iterate on. --- ## When to use the portal Use the portal when you want to: - review and inspect what you have deployed - manage people, access, and company boundaries - visually review workflows and scripts - inspect conversations, runs, and recent activity - make targeted changes to existing objects Use the CLI or your coding agent for creating agents, deploying configs, and automating setup. 
As a rule of thumb: - use the CLI to create, deploy, and iterate - use the portal to review what exists, inspect what happened, and make targeted adjustments --- ### Agent Memory URL: https://docs.archagents.com/docs/agent-memory Summary: Use memory to let an agent retain the right facts over time without turning it into an unreviewable black box. ## Overview Agent memory is how an ArchAstro agent keeps useful information across conversations and over time. - memory is not "save everything forever" - memory is "keep the few things this agent should continue to know" Good memory makes an agent feel consistent and useful. Unfocused memory makes it harder to review and less predictable over time. --- ## What memory is actually for Use memory when an agent should continue to know something without rediscovering it every time. Good examples: - a user's standing preferences - recurring project facts - durable context that affects future responses - patterns that help the agent do the same job better next week than it did today Poor examples: - every message the system has ever seen - temporary details that stop mattering quickly - sensitive information with no clear reason to retain it - internal operational noise that makes future decisions harder instead of better The best memory is selective. It keeps the future useful, not just larger. --- ## A concrete example Imagine a delivery agent that helps a customer team roll out a product. Useful things to remember: - the customer prefers weekly written updates - the rollout is happening in three phases - the security review must be completed before production cutover Things that should not become durable memory: - every one-off scheduling discussion - transient debugging details from last Tuesday - sensitive details that were only needed for one narrow task Memory should preserve stable, high-value context, not random residue from past work. 
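In template form, this selectivity is something you opt into. The `memory/long-term` installation gives the agent a durable store, and the opt-in `auto_memory_capture` preset routine extracts key facts when a conversation ends. A minimal sketch of the relevant `agent.yaml` fragment, matching the AgentTemplate fields used elsewhere in these docs — the `thread.session.leave` trigger is an assumption, chosen because capture is described as running when a session ends:

```yaml
# Sketch: opt-in durable memory for the delivery agent described above.
routines:
  - name: Capture durable facts
    description: Extract stable, high-value context when a session ends
    handler_type: preset
    preset_name: auto_memory_capture
    event_type: thread.session.leave    # assumption: capture runs at session end
    event_config:
      thread.session.leave: {}
    status: active
installations:
  - kind: memory/long-term
    config: {}
```

Omit the routine and automatic extraction stays off — the default, since `auto_memory_capture` is not enabled unless the agent creator adds it.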
The retention model Think of memory as a filter between what the agent sees and what it should keep. Most information stays in the conversation history. Only a small, durable subset should become memory. [Diagram: Diagram showing conversation history being filtered into a small set of durable memory items with clear boundaries] --- ## Memory design guidelines A good memory setup is: - narrow enough that a human can explain it - durable enough to matter across conversations - aligned with the agent's actual job - reviewable if something goes wrong The test is straightforward: if a teammate asked "what does this agent remember and why?", you should be able to answer clearly in a few sentences. From the CLI, the first operational inspection loop is: ```bash archastro list agentworkingmemory --agent archastro list agentworkingmemory --agent --search rollout ``` This is the fastest way to confirm whether a memory item is actually present before you blame the model or the thread. --- ## The three questions that matter When deciding whether something belongs in memory, ask: 1. Will this still matter later? 2. Does retaining it make the agent meaningfully better at its job? 3. Would a human reviewer expect the agent to keep this? If the answer to any of those is unclear, the item does not belong in memory. --- ## Memory boundaries Memory persists across conversations, so it deserves the same attention you give any durable data store. Be intentional about what goes into memory: - personal data should only be retained when it serves the agent's job - confidential company information should follow your existing data handling policies - temporary task details belong in conversation history, not memory - if a reviewer would question why the agent kept something, it should not be in memory The platform gives you full visibility into what an agent remembers (`archastro list agentworkingmemory`), so you can audit and clean up at any time. Selective memory produces better agents. 
Agents with focused, relevant context are more predictable and more helpful than agents carrying everything they have ever seen. --- ## Good first uses of memory Strong early use cases include: - support agents remembering durable customer preferences - delivery agents remembering long-running project facts - internal operations agents remembering standing environment constraints Weak early use cases include: - broad "remember everything" experiments - retaining large amounts of thread content by default - storing information with no clear future decision value ## Memory in the operator workflow Memory becomes most useful when you pair it with the other operating surfaces: - use [Impersonation](/docs/impersonation) to inspect the agent's current local context - use `agentworkingmemory` to inspect what durable facts are present - use threads and messages to compare what the agent recently saw against what it still remembers That combination is how teams debug "why does this agent keep repeating the same assumption?" without turning the answer into prompt folklore. --- ## Practical rule If you cannot say exactly why the agent should remember something later, do not put it in memory. --- ### Sandboxes URL: https://docs.archagents.com/docs/sandboxes Summary: Isolate test data from production so you can develop, test, and demo without affecting real users. ## Overview Sandboxes give you isolated environments inside one ArchAstro project. Use them when you want to: - test without touching production data - run demos safely - give different teams or environments their own isolated workspace When you use a sandbox, ArchAstro keeps the data in that sandbox separate from production and from other sandboxes. Every new project starts with a default sandbox named "Test" so you can begin trying things without touching production. A sandbox is a safe workspace where you can prove behavior before it reaches production. If you are building agents seriously, that matters a lot. 
Sandboxes let you test realistic conversations, routines, and email flows without creating production noise or touching real user data. --- ## A concrete example Imagine you are building a billing support agent. Before you let it touch production conversations, you can: 1. activate a sandbox 2. create a demo user 3. create a billing test thread 4. send a realistic message 5. inspect the result and any captured emails That gives you a safe way to answer the questions that matter: - does the agent respond correctly? - do the routines trigger when expected? - does the knowledge access look right? - do email and notification flows behave properly? The test-to-production boundary Treat a sandbox as the place where behavior becomes believable before it becomes real. The same setup patterns apply; only the data boundary changes. [Diagram: Annotated diagram showing test work staying inside a sandbox while production remains separate] --- ## Creating sandboxes Create additional sandboxes through the CLI or developer portal: Slugs must be lowercase alphanumeric with hyphens, 2-100 characters, and unique per project. They cannot start or end with a hyphen. ```bash # CLI archastro create sandbox -n "Staging" -s staging ``` --- ## Access in sandboxes Each sandbox has its own access credentials, separate from production: - client-side work uses sandbox publishable keys - server-side setup uses sandbox secret keys - sandbox credentials only reach sandbox data, not production Create and revoke sandbox credentials through the CLI or developer portal as needed for test environments and demos. If the work is test-only, use sandbox credentials so the resulting data stays in the sandbox. --- ## How sandboxes behave When you work inside a sandbox, the platform does four simple things: 1. It knows which sandbox you selected. 2. It keeps reads inside that sandbox automatically. 3. It saves new data back into that sandbox automatically. 4. 
It keeps that work separate from production and other sandboxes. Your code does not need a different logic path for sandbox versus production. The main difference is which sandbox or credential you choose. ```text Production credential → production workspace Sandbox credential → selected sandbox workspace ``` This separation applies across the main things teams care about, including users, teams, threads, messages, agents, configs, integrations, automations, files, and secrets. You do not need one mental model for testing and a different one for production. The main difference is which environment you selected. --- ## Sandbox emails Emails sent within a sandbox are captured instead of delivered. This lets you test email flows (registration, notifications, magic links) without sending real emails. ### Viewing captured emails ```bash # CLI archastro list sandboxmails --sandbox dsb_abc123 archastro describe sandboxmail sem_abc123 --sandbox dsb_abc123 ``` ### Cleaning up ```bash archastro delete sandboxmail sem_abc123 --sandbox dsb_abc123 archastro delete sandboxmails --sandbox dsb_abc123 --all ``` --- ## Using sandboxes with the CLI The CLI can be pointed at a sandbox so later commands operate on sandbox data: ```bash # Activate a sandbox archastro activate sandbox # List sandboxes — active sandbox is marked with * archastro list sandboxes ``` When a sandbox is active, later CLI commands create and inspect data inside that sandbox until you switch back. For day-to-day development, the common loop is: 1. activate the sandbox 2. create or update the agent 3. run a test thread 4. inspect the result 5. 
clean up or reset as needed --- ## Developer portal The portal at `developers.archastro.ai` provides a visual interface for sandbox management under **Project → Sandboxes**: - Create and manage sandboxes - Create the credentials each sandbox needs - Review recent usage and status - Revoke access with confirmation --- ## Design patterns ### Integration testing Use a sandbox to run automated tests without affecting production: 1. Create a sandbox (or use the default "Test" sandbox) 2. Use the sandbox publishable key in your test suite 3. Create users, agents, and threads — all isolated to the sandbox 4. Verify email flows by checking captured sandbox emails 5. Clean up by deleting sandbox emails between test runs This is the right pattern when you want realistic end-to-end testing without touching production. ### Demo environments Create a named sandbox (e.g., `demo`) with pre-seeded data for customer demos. Each demo sandbox is isolated, so you can reset it independently without touching production or other sandboxes. This works well when you want a stable environment for sales, solutions, or implementation walkthroughs. ### Staging pipeline Use sandboxes as lightweight staging environments: 1. `test` sandbox — automated test suite 2. `staging` sandbox — manual QA and review 3. Production — the main project workspace All three share the same project configuration but have completely separate user data, threads, and state. Production should not be the place where you first discover confusing agent behavior. --- ### Agents URL: https://docs.archagents.com/docs/agents Summary: Identity, routines, tools, and knowledge: the model behind everything. ## Overview An ArchAstro agent is a long-lived AI worker with a clear job, useful tools, and knowledge it can use. It is not just a prompt or a one-off workflow run. 
An agent can: - talk with people in threads - react to events over time - use tools to do work - draw from knowledge - keep behaving like the same named, managed system over time The same model works whether you are building agents for your team or embedding them inside a product. What changes is packaging and access, not the basic building blocks. --- ## Object relationships The Agent is the center of the model. Follow the arrows to see how the other pieces connect to it: An Agent has Routines (automations), Tools (capabilities), and a profile/instructions layer shown here as Identity. Agents join Teams alongside Users, grouped under Organizations. They communicate via Threads containing Messages. They draw context from Sources (knowledge bases) connected through Installations. Solid arrows = owns / contains · Dashed arrows = references / associates [Diagram: ArchAstro object relationship diagram] The core message flow is simple: 1. A person or system sends a message in a thread. 2. The platform checks whether any agent routines should react. 3. The agent uses its instructions, tools, and knowledge to decide what to do. 4. The agent replies, takes an action, or starts additional work. This is the flow you are building on top of. --- ## Agent The agent is the main object in the system. It is the managed AI worker you create and operate. An agent has: - a name people can recognize - a stable key your team can reference in code and automation - instructions that define its role and boundaries - optional metadata for ownership, company, and current state You can create one quickly from the CLI: ```bash archastro create agent -n "Support Agent" -k support-agent \ -i "You help users resolve billing and support issues with short, concrete answers." ``` You are creating something durable, not sending a one-off prompt. --- ## Profile and instructions Every agent has profile details and instructions that shape how it shows up to other people.
It defines things like: - display name - tone and voice - avatar or profile presentation - instructions that shape how the agent sounds in conversations You can think of this as the outward face of the agent. The agent is the same worker underneath; these settings shape how it appears and communicates. Update instructions, name, and profile details from the CLI or an agent template. The developer portal under **Project -> Agents -> choose an agent** provides a visual view for reviewing and making targeted edits. --- ## Routines Routines give the agent ongoing behavior. They answer two practical questions: 1. When should this agent act? 2. What should it do when that happens? For example, you can create a routine that runs when a new message appears: ```bash archastro create agentroutine --agent \ -n "billing-triage" \ -e message.created \ -t script \ --script "{ route: \"billing\", priority: \"high\" }" archastro activate agentroutine ``` New routines start in `draft`, so save the routine ID from the create command and activate it when you want the handler to run. 
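Routines can also be declared in the agent's template instead of created imperatively; a routine declared with `status: active` deploys ready to run, with no separate activation step. Here is a sketch of a scheduled routine using the `do_task` preset — the `schedule` field accompanies `event_type: schedule.cron` as the template reference describes, while the `event_config` entry is an assumption that mirrors the shape used for thread events:

```yaml
# Sketch: a weekly scheduled routine declared in agent.yaml (AgentTemplate).
routines:
  - name: Weekly review
    description: Think through open items and post a summary every Monday morning
    handler_type: preset
    preset_name: do_task
    event_type: schedule.cron
    schedule: "0 9 * * 1"        # cron: every Monday at 09:00
    event_config:
      schedule.cron: {}          # assumption: mirrors the event_config shape of thread events
    status: active               # deploys active; no manual activate step needed
```

Because `do_task` runs a full LLM session with the agent's tools, keep the description specific enough that the scheduled run has a clear job.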
### Common routine patterns - reply when a new message arrives - run on a schedule - react when a person joins a thread - respond when new knowledge or integration data becomes available ### Handler types | Handler type | When to use | What runs | |-------------|-------------|-----------| | `preset: participate` | Agent should join and respond in conversations | Built-in conversation handler | | `preset: auto_memory_capture` | Extract and store key facts when a conversation ends (opt-in) | Built-in memory extraction | | `preset: do_task` | Agent should think and act on a schedule or event | Full LLM session with all agent tools | | `script` | Deterministic logic (routing, filtering, transformations) | ArchAstro script expression | | `workflow_graph` | Multi-step process with branching or approvals | Workflow config | The `do_task` preset is the most powerful — it gives the agent full LLM reasoning with access to all its configured tools. Use it for scheduled reports, periodic reviews, or any task that requires the agent to think. For a first agent, start with `participate` (so it responds in conversations) and `auto_memory_capture` (opt-in memory extraction — the agent creator adds this routine to enable it; it is not on by default). ### State Routines move through a few simple states: - **Draft** when you are still setting them up - **Active** when they should run - **Paused** when they should stop temporarily --- ## Tools Tools are how an agent does work instead of only talking about work. Some tools come from the platform, such as messaging, search, or computer-use capabilities. Others are custom tools you define for your own services. ### Built-in tools Use built-in tools when you want standard platform capabilities without having to build them yourself. ### Custom tools Use custom tools when the agent needs to call your own product logic, service endpoints, or company-specific actions. 
Read [Tools](/docs/tools) for the real operator workflow: attach, inspect, activate, and run tools through impersonation. --- ## Knowledge Knowledge is the information the agent is allowed to use. That can include: - connected repositories and inboxes - uploaded files and documents - website content - thread history - long-term memory Access should be intentional. Give an agent only the knowledge it actually needs. ### Sources and installations There are two objects to understand here: - **Installation**: a connected external service, account, or integration - **Source**: a specific knowledge feed the agent can use from that installation That means: 1. connect a system or source 2. activate it for the agent 3. let the platform make that knowledge available when the agent needs it Read [Knowledge](/docs/knowledge) for the operational model: integrations, sources, ingestions, items, and the debugging loop around them. --- ## Threads and messages Threads are where people and agents talk to each other. Messages are the individual events inside those threads. A person can send a message, an agent can respond, and routines can use those events to drive behavior. You can test that flow quickly from the CLI: ```bash archastro create thread -t "Billing support" --owner-type agent --owner-id archastro create user --system-user -n "Demo User" archastro create threadmember --thread --user-id archastro create threadmessage --thread --user-id \ -c "I need help with invoice INV-2041" ``` This creates the flow most developers care about: 1. a message arrives 2. the platform gathers the right context 3. the agent decides what to do 4. the agent replies, uses tools, or pulls in more knowledge ## Best practices 1. Give each agent a narrow, understandable job. 2. Add only the routines the agent really needs. 3. Give the agent only the tools and knowledge it should have. 4. Test new behavior in a sandbox before wider rollout. 5. 
Review agent behavior regularly and refine instructions, routines, and access as you learn what works. --- ## Deploy from a template The recommended workflow is to write an `agent.yaml` file (an AgentTemplate) and deploy it in one command: ```bash archastro deploy agent agent.yaml --name "Support Agent" ``` This creates the agent AND provisions all tools, routines, and installations in a single step. ### Minimal AgentTemplate example ```yaml kind: AgentTemplate key: support-agent name: Support Agent identity: | You help users resolve billing and support problems with short, concrete answers. Always ask one clarifying question before taking action. tools: - kind: builtin builtin_tool_key: search status: active - kind: builtin builtin_tool_key: knowledge_search status: active routines: - name: Respond in conversations description: Join and respond when a conversation session starts handler_type: preset preset_name: participate event_type: thread.session.join event_config: thread.session.join: {} status: active installations: - kind: memory/long-term config: {} ``` ### Key fields - **`identity`** -- system prompt and instructions that define the agent's behavior and boundaries. - **`tools`** -- builtin or custom tools the agent can use. Builtin tools reference a `builtin_tool_key`; custom tools reference a handler and config. - **`routines`** -- event handlers that give the agent ongoing behavior. Each routine specifies a `handler_type` and an `event_type`. - `handler_type` values: `preset` (built-in behavior), `script` (custom logic), `workflow_graph` (multi-step workflows). - `preset_name` values: `participate` (join conversations), `auto_memory_capture` (opt-in: extracts and stores key facts after sessions when enabled by the agent creator), `do_task` (execute instructions on schedule). - `event_type` values: `thread.session.join`, `thread.session.leave`, `message.created`, `schedule.cron`.
- For cron routines, add `schedule: "0 9 * * 1"` (a cron expression) alongside `event_type: schedule.cron`. - **`installations`** -- connected capabilities such as memory, integrations, and knowledge sources. See [Installations](/docs/installations) for the full list of kinds. ### Validate before deploying ```bash archastro configs validate --kind AgentTemplate --file agent.yaml ``` Run this before `deploy` to catch schema errors early. --- ### Knowledge URL: https://docs.archagents.com/docs/knowledge Summary: What the agent can use, and how to prove it is working. ## Overview Knowledge is how an agent gets access to the information it should use when it works. That includes: - connected systems such as Google or GitHub - imported document collections - synced knowledge feeds - normalized items the platform can retrieve later It helps to think of knowledge as a pipeline, not a blob: - an integration connects to a provider - a source defines what knowledge feed to use - an ingestion syncs that feed - items are the normalized records the agent can actually draw from You can check each step: what is connected, what was synced, and what the agent can actually reach. The knowledge pipeline Knowledge becomes usable in stages: connect a system, define the source, sync it, then inspect the resulting items. [Diagram: Diagram showing a provider integration feeding a source, then an ingestion, then normalized knowledge items that an agent can use] --- ## A concrete example Imagine Company A runs the underlying deployment platform for Company B. Company A wants its `Platform Support Agent` to help Company B diagnose a failing integration, but only with approved material: - rollout runbooks - known retry issues - connector troubleshooting notes - past validated migration steps The right setup is not "give the agent every document." It is: 1. connect the approved provider or document collection 2. define the exact source that should be searchable 3. sync it 4. 
inspect what the platform actually ingested 5. let the agent use only that approved body of knowledge This keeps the knowledge boundary narrow and easy to review. --- ## The main pieces | Piece | What it means | |------|---------------| | **Integration** | The authenticated connection to a provider or workspace | | **Source** | The specific feed, collection, or scope of knowledge to sync | | **Ingestion** | The sync job that imports or refreshes knowledge | | **Item** | One normalized knowledge record the platform can retrieve later | | **Credential** | Secret material used for knowledge or browser access when needed | The distinction between them matters: - an **integration** says "we can connect to this system" - a **source** says "this is the specific knowledge stream we want from that system" This keeps the setup explainable to developers and security reviewers. --- ## Set up from the CLI Use the CLI or your coding agent to create knowledge connections: 1. connect the outside system 2. inspect scopes and ownership 3. confirm which workspace, repository, inbox, or document collection should be used For OAuth-based connections that require a browser redirect, the portal handles the initial authorization flow. Once connected, use the CLI to inspect and operate what was created. Review the result in the portal for a visual overview of connected systems and their status. --- ## Inspect integrations from the CLI List the connected knowledge integrations: ```bash archastro list integrations archastro describe integration ``` This tells you: - which provider is connected - which workspace it points at - who owns it - whether the connection is still healthy Use this when a developer asks, "Which knowledge connection is this agent actually using?" --- ## Inspect and manage sources Sources are what developers work with most. They tell the platform which specific feed should become usable knowledge.
```bash archastro list contextsources archastro list contextsources --installation archastro describe contextsource ``` If you need to create or tune a source from the CLI: ```bash archastro create contextsource \ --type github_activity \ --team-id \ --payload '{"repository":"company-a/platform-rollouts"}' ``` A source type is provider-specific. `github_activity` is one concrete GitHub-backed source type. Teams create the first source from the CLI or an agent template, then use `describe contextsource` and `list contextsources` to inspect the exact shape before scripting more of them. The portal provides a visual overview of all sources and their status. A source is where the knowledge boundary becomes concrete. It is not just "GitHub is connected." It is "this exact repository or feed is part of the approved context." --- ## Check ingestion health Ingestion is where many real knowledge problems show up. If the agent is not seeing the knowledge you expected, check the ingestion state before assuming the model is wrong. ```bash archastro list contextingestions archastro list contextingestions --status failed archastro list contextingestions --source archastro describe contextingestion ``` This is the debugging loop: 1. inspect the source 2. inspect recent ingestions 3. confirm whether the sync succeeded 4. only then debug the agent behavior itself That sequence saves a lot of wasted prompt debugging. --- ## Inspect the resulting items Items are the normalized records the platform actually has available after ingestion. ```bash archastro list contextitems --source archastro describe contextitem ``` If an agent keeps missing a fact, this is where you verify whether that fact exists in the synced knowledge at all. This is a better debugging step than guessing about prompts. --- ## About credentials Some knowledge flows need credentials in addition to an integration. 
The CLI supports credential inspection and management: ```bash archastro list contextcredentials archastro describe contextcredential ``` These commands return credential metadata such as domain, owner, and last access time. They do not print raw secret values back to the terminal. Credential fields are stored encrypted at rest. The CLI is designed as a review and maintenance surface — it shows metadata, not raw secret values. For credentials that involve sensitive values, the portal provides a guided setup flow that keeps secrets out of shell history. The CLI is the primary surface for: - creating and managing credentials programmatically - inspection and auditing - controlled follow-up updates --- ## Knowledge in cross-company work Knowledge becomes much more important in Agent Network scenarios. The rule is simple: - each company keeps its private knowledge private - collaboration happens in the shared thread - the shared thread does not imply shared private context This is both a configuration responsibility and a platform boundary: - only attach the sources an agent truly needs - review those sources before the agent joins shared work - do not assume a shared thread should widen an agent's retrieval scope Company B can ask Company A's agent for help without automatically widening access to Company A's full internal corpus. Use [Agent Network](/docs/agent-network) when the knowledge boundary needs to hold across company lines. --- ## Best practices Good knowledge setups follow five rules: 1. connect only the systems that help the agent do its actual job 2. keep each source narrow and intentional 3. inspect ingestion health before debugging model behavior 4. review items and ownership when results look wrong 5. avoid mixing company-private knowledge into shared collaboration spaces When the knowledge path is clear and explainable, the whole setup is easier to trust and review. --- ## Where to go next 1. Read [Agents](/docs/agents) for the full runtime model. 2. 
Read [Installations](/docs/installations) for the broader attachment lifecycle. 3. Read [Tools](/docs/tools) if the agent also needs to act, not just read. 4. Read [Agent Network](/docs/agent-network) for cross-company knowledge boundaries. --- ### Workflows URL: https://docs.archagents.com/docs/workflows Summary: Design multi-step flows for approvals, handoffs, branching, and longer-running work. ## Overview Workflows let you describe a process across multiple steps. Use them when one routine or one script is not enough. They are a good fit for: - handoffs between steps - conditional branching - retries - approvals - longer business processes A workflow is where you spell out the sequence of work. - a routine decides when something should happen - a workflow describes how it should happen step by step If an agent needs to do more than one thing, wait for approval, branch, retry, or hand work off across several stages, use a workflow. --- ## A concrete example Imagine a support agent that handles refund requests. The routine might react when a new message looks like a refund issue. The workflow could then: 1. classify the request 2. check account details 3. ask for human approval if the amount is large 4. send the final response 5. write the outcome back to the thread - the routine notices the moment - the workflow runs the process The workflow shape Use a workflow when the work needs to stay visible as a process: multiple steps, decisions, approvals, or handoffs. 
[Diagram: Diagram showing a workflow moving from trigger to classify to approval to action to write back] --- ## Building a workflow ### File-backed workflow configs Use the CLI or your coding agent to create and manage workflows as config-backed files: ```bash archastro configs kinds archastro configs sample workflow --to-file ./tmp/workflow.sample.yaml archastro configs validate -k workflow -f ./tmp/workflow.sample.yaml ``` After you have a workflow shape you want to keep: ```bash archastro configs sync archastro configs deploy ``` This is the primary path for workflows — they belong in source control where your team can review and iterate on them. ### Portal editor The developer portal also provides a visual workflow builder for reviewing and editing workflows: 1. Open **Workflows** in the portal. 2. Click **New Workflow** and enter a name. 3. Add nodes to the canvas. 4. Connect nodes to define execution order. 5. Save and test the flow. The editor includes: - **Run as User** selector so you can choose who the workflow runs as - **Debug mode** for inspecting workflow inputs and outputs - **Auto-versioning** for safe iteration and rollback Use the portal editor when you want a visual overview of the flow or need to quickly test whether a sequence makes sense. ### Workflow execution Workflows can be triggered three ways: 1. **Agent routines** — attach a workflow to an agent behavior 2. **Automations** — run a workflow on a schedule or important event 3. **Direct run** — test a workflow from the portal while you build it When a workflow starts, it receives the data from whatever triggered it. That makes workflows reusable. You can test one from the CLI or portal, then attach it to a routine or automation once the flow is clear. --- ## Node types Workflows are built from nodes. Each node does one job in the flow. 
| Node | What it does | |------|-------------| | **ScriptNode** | Run an ArchAstro script for custom logic or data transformation | | **HttpNode** | Make an HTTP request to an external API | | **ChatCompletionNode** | Call an LLM to generate text or make a decision | | **SwitchNode** | Branch the flow based on a condition | | **LoopNode** | Iterate over items and run a subflow for each | | **DelayNode** | Wait for a specified duration before continuing | | **EmailNode** | Send an email | | **WebhookNode** | Wait for an incoming webhook callback | | **TemplateNode** | Render a Liquid template into structured output | | **DataNode** | Transform or reshape data between steps | | **EntryNode** | The starting point of the workflow | Most workflows use a small subset of these. A typical flow might be: EntryNode -> HttpNode -> ScriptNode -> SwitchNode -> EmailNode. Workflow configs are YAML files. Use `archastro configs sample workflow` to see the full structure. The portal editor provides a visual view for reviewing and editing workflows. --- ## Scripts and expressions inside workflows Workflows support both scripts and lightweight expressions: - **Scripts** for custom logic and multi-step data handling - **Expressions** for simple conditions and field access This gives you a practical mix: visual structure for the overall flow, and code only where it adds real value. A good workflow keeps the high-level process visible and only drops into code for the genuinely custom parts. 
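As a sketch of the kind of ScriptNode logic this mix implies (a hypothetical example: the `tickets` payload field and its `status` values are illustrative, not platform fields):

```
let arr = import("array")

// Reshape the incoming payload for the next node:
// keep only open tickets and return a count alongside them.
let open = arr.filter($.tickets, fn(t) { t.status == "open" })

{ open_tickets: open, open_count: arr.length(open) }
```

A downstream SwitchNode could then branch on the result with a one-line expression such as `$.open_count > 0`, keeping the overall flow visible in the graph.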
### Attaching a workflow to an agent routine Once the workflow exists as a config, a routine can point at it by config id: ```bash archastro update agentroutine \ --handler-type workflow_graph \ --config-id ``` That split is the part many teams miss: - the workflow holds the process definition - the routine decides when that process should run --- ## When to use workflows vs routines | Scenario | Use | |----------|-----| | Single event → single action | Routine | | Multi-step orchestration | Workflow | | Conditional branching | Workflow | | Data transformation pipeline | Workflow | | Scheduled batch processing | Automation → workflow | | Human approval gate | Workflow | Routines decide **when** work should run. Workflows define **what** should happen once execution begins. If you are unsure which one you need, ask the simpler question: - "Do I just need the agent to react?" Use a routine. - "Do I need a visible process with several steps?" Use a workflow. --- ## Where to go next 1. Use `archastro configs sample workflow` to see the full YAML structure. 2. Read [Scripts](/docs/scripts) for the custom logic you can embed inside workflow nodes. 3. Read [Automations](/docs/automations) for scheduled and event-triggered workflows. 4. Read [Agents](/docs/agents) for how routines connect workflows to agent behavior. --- ### Automations URL: https://docs.archagents.com/docs/automations Summary: Run repeatable project-wide jobs on a schedule or when important events happen. ## Overview Automations handle work that should happen for a whole project, not just for one agent. Use them when you want to: - run a daily or hourly job - react to important events in one place - keep cross-cutting workflows out of individual agents - coordinate work across agents, users, teams, or data sources Automations are the background jobs of your ArchAstro project. Use them when the work belongs to the project as a whole, not to one specific agent identity. 
Manage them through the CLI and developer portal. --- ## A concrete example Imagine you want a daily activity summary for the whole project. That job does not belong to one support agent or one delivery agent. It belongs to the project itself. An automation can: 1. run every morning 2. gather the activity data you care about 3. call a workflow that formats the summary 4. send the result to the right thread or destination The main distinction: - routines shape one agent's behavior - automations run shared project-wide work The project-wide job model An automation sits above any one agent. It starts project work from a schedule or event and records the result. [Diagram: Diagram showing a schedule or event leading to an automation, then a workflow, then a result] --- ## Automation types ### Trigger automations Trigger automations run when a matching event happens. Examples: - someone joins a thread - a message is created - a connector is linked - an incoming email arrives These are useful when you want one shared reaction to an event without tying that reaction to a single agent. ### Scheduled automations Scheduled automations run on a timetable you define. Examples: - send a daily summary every morning - run a cleanup job every night - check for stuck work every hour These are useful when you want a heartbeat, cleanup, report, audit, or periodic sync. --- ## Available event types Inspect the full event list from the CLI or developer portal. 
The most useful categories are: ### Thread events | Event | Description | |-------|-------------| | `thread.created` | A new thread was created | | `message.created` | A message was added to a thread | | `thread.member_joined` | A member joined a thread | | `thread.member_left` | A member left a thread | ### Connector events | Event | Description | |-------|-------------| | `connector.connected` | An OAuth connector was connected | ### Context events | Event | Description | |-------|-------------| | `context.ingestion.succeeded` | A context ingestion job completed | | `context.ingestion.failed` | A context ingestion job failed | ### Email events | Event | Description | |-------|-------------| | `email.received` | An inbound email was received | | `email.processed` | An email was processed | --- ## Status states Automations move through three simple states: | Status | Behavior | |--------|----------| | `draft` | Saved, but not running yet | | `running` | Active and ready to react | | `paused` | Temporarily stopped | That lifecycle is intentionally simple. You only need to know whether an automation is ready, active, or temporarily stopped. --- ## Automation runs Each time an automation runs, ArchAstro records what happened so you can review it later. That run history is what makes automations operationally usable. When background work misbehaves, you need to see what ran and why instead of treating it like invisible magic. ### Run statuses | Status | Meaning | |--------|---------| | `pending` | Queued, awaiting execution | | `running` | Work is in progress | | `completed` | Finished successfully | | `failed` | The run ended with an error | | `cancelled` | The run was cancelled | ### Viewing runs ```bash archastro list automationruns --automation aut_abc123 archastro list automationruns --automation aut_abc123 --status failed archastro describe automationrun atr_abc123 ``` ## Automations vs. 
routines Both automations and routines react to events, but they solve different problems: | | Automations | Routines | |---|---|---| | Scope | Whole project | One agent | | Best for | Shared jobs and scheduled work | Agent behavior | | Typical example | Daily digest or event pipeline | Replying to new messages | Use automations for shared background work. Use routines for how a specific agent behaves. Another quick way to choose: - if the work belongs to one named agent, start with a routine - if the work belongs to the project, start with an automation --- ## Agent routines with LLM execution (do_task) The `do_task` preset is the most powerful routine type. It triggers a full LLM execution session where the agent can think and act using all of its configured tools. Use it when you want an agent to reason about a task on a schedule or in response to an event — not just run a deterministic script. ### Example: weekly report routine ```yaml routines: - name: weekly-report description: Generate weekly activity summary handler_type: preset preset_name: do_task preset_config: instructions: | Review all activity from the past week. Summarize key findings and send a Slack message to #reports. schedule: "0 9 * * 1" event_type: schedule.cron status: active ``` ### Key fields - **`preset_name: do_task`** — tells the platform to run a full agent session with LLM reasoning. - **`preset_config.instructions`** — the task the agent should perform. Write this like you would write a prompt. - **`schedule`** — a cron expression for when to run (e.g. `"0 9 * * 1"` means every Monday at 9 AM). - The agent gets access to **all its configured tools** during execution — search, knowledge, integrations, memory, and anything else you have wired up. ### do_task vs. script routines Script routines run deterministic code. They always do the same thing the same way. `do_task` routines run the LLM with full tool access. 
The agent reasons about the instructions, decides what tools to call, and adapts to whatever it finds. Use `do_task` when the work requires judgment, not just execution. --- ## CLI commands ```bash # List automations archastro list automations archastro list automations --type trigger # Create archastro create automation -n "Nightly Report" -t scheduled --schedule "0 0 * * *" --config-id cfg_abc123 # Manage state archastro activate automation aut_abc123 archastro pause automation aut_abc123 # Update archastro update automation aut_abc123 -n "Updated Name" --config-id cfg_def456 # Delete archastro delete automation aut_abc123 # View runs archastro list automationruns --automation aut_abc123 archastro describe automationrun atr_abc123 ``` --- ## Design patterns ### Event-driven onboarding Trigger shared onboarding work when a new user joins a thread: ```bash archastro create automation -n "Onboarding Flow" \ -t trigger \ --trigger thread.member_joined \ --config-id cfg_onboarding_workflow ``` ### Scheduled reporting Run a daily job that gathers activity and posts a summary: ```bash archastro create automation -n "Daily Activity Report" \ -t scheduled \ --schedule "0 9 * * *" \ --config-id cfg_daily_activity ``` ### Context ingestion monitoring React to ingestion failures so a team can retry or investigate: ```bash archastro create automation -n "Ingestion Failure Alert" \ -t trigger \ --trigger context.ingestion.failed \ --config-id cfg_ingestion_alert ``` --- ### Networks URL: https://docs.archagents.com/docs/networks Summary: In ArchAstro, a network is the shared team and threads that sit inside Agent Network. ## Overview `Network` is the shorter term for the shared collaboration space inside Agent Network. That shared space is made of: - one shared team - one or more shared threads - the people and agents intentionally added to them That is it. The rest of the company setup stays in its own private space. 
If you are trying to understand the full cross-company model, start with [Agent Network](/docs/agent-network). If you are trying to set one up, go to [Agent Network - Getting Started](/docs/agent-network-getting-started). This page exists only to clarify the term. --- ### Scripts URL: https://docs.archagents.com/docs/scripts Summary: Write focused custom logic for workflow steps, policy checks, adapters, and other places where built-in nodes are not enough. ## Overview Scripts are where you put the small pieces of custom logic that give a workflow or routine its project-specific behavior. They are useful when the platform already gives you the overall structure, but you still need code for the part that is unique to your business. Typical uses include: - reshaping data between steps - applying policy checks - adapting one system's format to another - making a routing decision that is too custom for a simple expression Scripts are not "write arbitrary code everywhere." They are small, reviewable bits of custom logic inside an otherwise understandable flow. --- ## Language basics The ArchAstro script language is expression-oriented. The last expression in the script body is the return value -- there is no `return` keyword. Key syntax rules: - **Variables**: `let x = 10` (no `const`, `var`, or `function` keywords) - **Anonymous functions**: `fn(x) { x * 2 }` - **Imports**: `import("array")`, `import("requests")`, etc. 
- **Input payload**: `$` gives access to the input data via JSONPath - **Input declarations**: `input var_name` declares variables from the execution environment (for workflow step outputs) - **Environment variables**: `env.API_KEY`, `env.SLACK_WEBHOOK` - **Comments**: `//` single-line and `/* */` multi-line - **Semicolons**: optional (automatic semicolon insertion) - **No loops**: use `array.map`, `array.filter`, `array.reduce` instead of `for` or `while` Available import namespaces: `requests`, `array`, `string`, `map`, `datetime`, `math`, `result`, `email`, `jwt`, `slack`. See the [Script Language Reference](/docs/script-reference) for the full list of namespaces and functions. --- ## A concrete example Imagine a workflow that processes refund requests. Most of the workflow stays visual -- receive the request, gather account info, check approval, send the result. The script handles the custom part in the middle: calculate the refund, normalize billing data, enforce a business rule. ``` let http = import("requests") let arr = import("array") let items = $.order.line_items let eligible = arr.filter(items, fn(item) { item.refundable == true }) let totals = arr.map(eligible, fn(item) { { sku: item.sku, refund_amount: item.price * item.quantity } }) let grand_total = arr.reduce(totals, 0, fn(acc, t) { acc + t.refund_amount }) let approval = unwrap(http.post(env.BILLING_API_URL, { headers: { "Authorization": "Bearer " + env.BILLING_API_KEY }, body: { order_id: $.order.id, amount: grand_total } })) { eligible_items: totals, total_refund: grand_total, approval_id: approval.body.id } ``` That script reads the input payload with `$`, filters and transforms data with `array` functions, calls an external API with `requests`, and returns a structured object for the next workflow step. --- ## Execution contexts Where a script runs determines what `$` contains and what capabilities are available. 
| Context | `$` contains | `env` available | Builtin tools available | |---------|-------------|-----------------|------------------------| | Workflow ScriptNode | Step input data | Yes | No | | Routine handler (script type) | Event payload | Yes | No | | Custom tool script | Tool arguments | Yes | No | | `do_task` preset | N/A (LLM has full tool access) | Yes | Yes (all agent tools) | Scripts run under the same scoped platform authorization model as the routine or workflow that invoked them. Scripts can also use `input var_name` to declare named variables from the execution environment. This is useful when a workflow step outputs a named result that the next script needs to consume. Unknown identifiers are errors — declare them with `let` or `input`. --- ## Scripts vs expressions | Feature | Script | Expression | |---------|--------|------------| | Multi-step logic | Yes | No | | Return value | Last expression (implicit) | Implicit evaluation | | Imports | Yes (`import("namespace")`) | No | | HTTP calls | Yes (via `requests`) | No | | Error handling | `unwrap()` builtin, `result` namespace | Minimal | | Use in workflows | Full ScriptNode | Inline conditions and field access | | Best for | Custom behavior, transformations | Small checks, field access, routing guards | Use expressions when the logic is tiny and obvious -- a field comparison, a null check, simple string interpolation. 
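For example, a routing guard that fits comfortably in an expression (the `invoice` fields here are hypothetical):

```
$.invoice.total > 10000 && $.invoice.currency == "USD"
```

Anything beyond a check like this (mapping over items, HTTP calls, intermediate variables) belongs in a script.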
Use scripts when: - the code needs several steps or intermediate variables - you need to call an external service - the logic needs to be tested on its own - the transformation is central enough that it deserves a named, reusable unit --- ## Common patterns ### HTTP call with error handling ``` let http = import("requests") let response = http.get(env.STATUS_API_URL, { headers: { "Authorization": "Bearer " + env.API_TOKEN } }) let body = unwrap(response, { status: "unknown" }) { service_status: body.status } ``` ### Conditional notification ``` let mail = import("email") let string = import("string") let amount = $.invoice.total let recipient = if (amount > 10000) { env.ALERTS_EMAIL } else { env.INFO_EMAIL } unwrap(mail.send({ to: recipient, subject: "Invoice " + $.invoice.id, text_body: "Amount: $" + string.toString(amount) })) { notified: true, to: recipient } ``` ### Data pipeline ``` let arr = import("array") let str = import("string") let map = import("map") let raw = $.records let cleaned = arr.filter(raw, fn(r) { r.email != null }) let normalized = arr.map(cleaned, fn(r) { { email: str.lowercase(r.email), name: str.trim(r.name), source: "import" } }) let by_domain = arr.reduce(normalized, {}, fn(acc, r) { let domain = str.split(r.email, "@").1 let existing = map.get(acc, domain, []) map.put(acc, domain, arr.concat(existing, [r])) }) { processed: arr.length(normalized), by_domain: by_domain } ``` --- ## Validation The CLI validates script syntax with `archastro configs validate`, and the portal also validates syntax when you save. Syntax errors (mismatched braces, unknown operators, malformed expressions) are caught at validation time. However, validation does **not** check runtime function availability. A script that calls a function that does not exist in the imported namespace will pass validation but fail at execution time. Always test scripts with sample input before deploying them in a live workflow.
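To see that validation gap in practice, the following deliberately broken sketch is syntactically valid and so passes validation, but `array.shuffle` is not a documented `array` function, so it only fails once the script actually runs:

```
let arr = import("array")

// Valid syntax, so this passes `archastro configs validate`...
let items = $.records

// ...but fails at execution time: shuffle does not exist in the array namespace.
arr.shuffle(items)
```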
--- ## Writing and testing scripts Write scripts locally in your editor or coding agent, then deploy them as configs: 1. Generate a sample with `archastro configs sample script`. 2. Write the custom logic in your local file. 3. Validate with `archastro configs validate -k script -f ./path/to/script.yaml`. 4. Deploy with `archastro configs deploy`. You can also validate and run scripts directly from the CLI: ```bash archastro script validate -f ./path/to/script.yaml archastro script run -f ./path/to/script.yaml --input '{"key": "value"}' archastro script docs ``` `archastro script docs` prints the full script language reference. The portal also provides a script editor with a built-in test runner: 1. Open **Scripts** in the portal. 2. Run a script with sample input to verify behavior. 3. Use version history for rollback if a later change is wrong. Good scripts are small enough to review quickly, narrow enough to explain in one sentence, easy to test with sample input, and focused on one job. When a script starts absorbing too much workflow logic, the visual process disappears and the workflow becomes a box of code -- a sign that the script should be split or the workflow restructured. --- ## Debugging scripts When a script fails, check these in order: ### 1. Check the routine or automation run ```bash archastro list agentroutineruns --routine ``` The run list shows status and error messages for each execution. ### 2. Use println for inspection `println` outputs values to the console panel in the portal script editor. Use it to inspect intermediate values: ``` let array = import("array") let data = $.payload println("received:", data) let items = data.items || [] println("item count:", array.length(items)) ``` ### 3.
Common errors and fixes | Error | Cause | Fix | |-------|-------|-----| | `unknown_function: env` | Calling `env()` as a function | Use `env.KEY` (dot access, not function call) | | `unknown_function: http_post` | Using wrong function name | Use `import("requests")` then `http.post(...)` | | `unknown_identifier: params` | Expecting implicit variables | Use `$` for input payload, `env.KEY` for env vars | | `cannot_access_property` on array | Using `.length` property | Use `array.length(items)` (function, not property) | | `invalid_arguments: array.map` | Input is not an array (e.g. got a 404 JSON response) | Check the HTTP response before mapping: `if (resp.body.items) { ... }` | ### 4. Validation vs runtime `archastro configs validate` checks syntax only. A script can pass validation but fail at runtime if: - an env var is not configured - an HTTP endpoint returns an unexpected response - a namespace function receives wrong argument types Test scripts with sample input — either locally or in the portal editor — before deploying them in routines. --- ## Further reading See the [Script Language Reference](/docs/script-reference) for the full specification, including all namespace functions, operator precedence, and error handling details. --- ### Script Language Reference URL: https://docs.archagents.com/docs/script-reference Summary: Complete reference for the ArchAstro script language — syntax, operators, namespaces, and patterns. > This reference is also available in the CLI via `archastro script docs` or `archastro configs script-reference`. > This page is auto-generated from the platform source. Do not edit manually. # ArchAstro Script Language Reference ArchAstro scripts are expression-oriented. Every statement produces a value. The last expression in a script is its return value. Statements are separated by semicolons or newlines (automatic semicolon insertion). 
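A minimal illustration of that expression-oriented rule (arbitrary values, chosen only to show the implicit return):

```
let base = 40
let bonus = 2

// No return keyword: this last expression is the script's result.
base + bonus // 42
```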
## Comments ``` // line comment /* block comment (nestable) */ ``` ## Literals - Numbers: `42`, `3.14`, `1e-5` - Strings: `"hello"` or `'hello'` with escapes `\n`, `\r`, `\t`, `\"`, `\'`, `\\` - Booleans: `true`, `false` - Null: `null` - Arrays: `[1, 2, 3]` - Objects: `{key: "value", "other_key": 42}` ## Truthy / Falsy Only these values are falsy: `false`, `null`, `0`, `0.0`, `""` (empty string), `[]` (empty array). Everything else is truthy, including empty objects `{}`. ## Variables `let` declares a binding. Variables are block-scoped. Rebinding a name in the same scope shadows the previous value. ``` let name = "world" let count = 42 let items = [1, 2, 3] let config = {key: "value", enabled: true} let count = count + 1 // shadows previous count ``` Reserved names that cannot be used as variables: `env`, `import`, `viewer`, `unwrap`. ## Operators Precedence (highest to lowest): 1. Member access: `.property`, `[index]` 2. Function call: `fn(args)` 3. Unary: `!`, `-` 4. Multiplicative: `*`, `/`, `%` 5. Additive: `+`, `-` 6. Relational: `<`, `<=`, `>`, `>=` 7. Equality: `==`, `!=` 8. Logical AND: `&&` 9. Logical OR: `||` 10. Ternary: `? :` 11. Try-unwrap: postfix `?` ### Short-circuit operators `&&` and `||` return actual values (not booleans), like JavaScript: ``` "hello" && "world" // "world" 0 && "skipped" // 0 null || "default" // "default" "found" || "fallback" // "found" ``` String concatenation uses `+`: `"hello " + "world"`. ## Conditional Expressions ``` if (condition) { thenValue } else { elseValue } ``` Conditionals are expressions that return a value: ``` let label = if (count > 10) { "many" } else { "few" } ``` Ternary shorthand: `condition ? thenValue : elseValue` **Important:** `} else` must be on the same line to avoid automatic semicolon insertion.
```
// CORRECT
if (x) { 1 } else { 2 }

// CORRECT
if (x) {
  1
} else {
  2
}

// WRONG — ASI inserts semicolon after }
if (x) { 1 }
else { 2 }
``` ## Functions Anonymous functions: ``` fn(x) { x * 2 } fn(a, b) { a + b } ``` Named functions (desugars to `let` binding): ``` fn double(x) { x * 2 } double(5) // 10 ``` Functions are first-class values — they can be passed as arguments, returned from other functions, and stored in variables. Closures capture their lexical scope at definition time. Named functions support recursion. ``` // Recursion fn factorial(n) { if (n <= 1) { 1 } else { n * factorial(n - 1) } } ``` ## Error Handling Fallible operations return Result values: - Ok: `{"ok": true, "value": <value>}` - Err: `{"ok": false, "error": {"code": "error_code", "message": "description"}}` **unwrap(result)** — extracts value from Ok, halts script on Err. **unwrap(result, default)** — extracts value from Ok, returns default on Err. **Postfix `?` operator** — unwraps Ok, early-returns Err from current function. ``` // Halt on error let data = unwrap(http.get("https://api.example.com")) // Provide fallback let data = unwrap(http.get("https://api.example.com"), null) // Early return in function fn fetchUser(id) { let resp = http.get("https://api.example.com/users/" + id)? resp.body } ``` ## Debugging `println(...)` outputs values to the console panel. Takes any number of arguments. ``` println("user:", user) println("count =", array.length(items)) ``` ## Special Identifiers - `$` — JSONPath root input. Use $.field to read from workflow input payload. - `@` — JSONPath current item. Available inside JSONPath projections and filters. ## Environment Variables Apps can configure environment variables (secrets, API keys, configuration). These are injected into scripts as the `env` object. Access them with dot notation: ``` env.API_KEY env.WEBHOOK_SECRET env.BASE_URL ``` `env` is a reserved name — you cannot use it as a variable name. Environment variables are read-only. If no environment variables are configured, `env` is not available and accessing it will produce an error.
``` // Use env vars for secrets in HTTP requests let http = import("requests") let resp = unwrap(http.post(env.WEBHOOK_URL, { body: $.payload, headers: {"Authorization": "Bearer " + env.API_TOKEN} })) resp.body ``` ## Builtin Functions - `contains(string, substring), contains(list, value)` — Returns true if a string contains a substring or a list contains a value. → `boolean` - `icontains(string, substring)` — Case-insensitive substring check. → `boolean` - `import(namespaceName)` — Loads a namespace (array, map, string, math) and returns its function map. → `namespace` - `lowercase(string)` — Returns a lowercased string. → `string` - `map(key1, value1, key2, value2, ...)` — Builds a map from alternating key/value pairs. → `map` - `merge(leftMap, rightMap)` — Merges two maps. Keys in rightMap overwrite leftMap. → `map` - `println(...)` — Prints values to the console panel for debugging; takes any number of arguments. → `any` - `put(map, key, value)` — Returns a map with key set to value. Nil map input is treated as empty map. → `map` - `unwrap(result), unwrap(result, default)` — Extracts the value from an Ok result. Halts with an error if the result is Err. With two arguments, returns the default value instead of halting on Err. → `any` ## Namespaces ### array Array/list helpers. - `array.concat(listA, listB)` — Concatenates two lists. → `list` - `array.every(list, fn(item) -> boolean)` — Returns true when all items satisfy the predicate. → `boolean` - `array.filter(list, fn(item) -> boolean)` — Returns items where predicate is truthy. → `list` - `array.find(list, fn(item) -> boolean)` — Returns first item matching predicate or nil. → `any | nil` - `array.first(list)` — Returns first list item or nil. → `any | nil` - `array.flat(list), array.flat(list, depth)` — Flattens nested lists (all levels by default). → `list` - `array.indexOf(list, value)` — Returns index of value or -1 when not found. → `integer` - `array.join(list), array.join(list, separator)` — Joins list values into a string.
→ `string` - `array.last(list)` — Returns last list item or nil. → `any | nil` - `array.length(list)` — Returns list length. → `integer` - `array.map(list, fn(item) -> value)` — Transforms each list item with mapper function. → `list` - `array.reduce(list, initial, fn(acc, item) -> nextAcc)` — Reduces list into a single value. → `any` - `array.reverse(list)` — Returns a reversed list. → `list` - `array.slice(list, start), array.slice(list, start, stop)` — Returns list slice with stop treated as exclusive. → `list` - `array.some(list, fn(item) -> boolean)` — Returns true if at least one item satisfies predicate. → `boolean` ### datetime Date and time operations: parsing, formatting, arithmetic, comparison, and timezone conversion. - `datetime.add(datetime, amount, unit)` — Adds a duration to a datetime. Amount can be negative to subtract. Units: seconds, minutes, hours, days, weeks, months, years. → `Result` - `datetime.compare(a, b)` — Compares two datetimes. Returns -1 if a < b, 0 if a == b, 1 if a > b. → `Result` - `datetime.diff(a, b, unit)` — Returns the difference between two datetimes (a - b) in the given unit. Units: seconds, minutes, hours, days, weeks. → `Result` - `datetime.format(datetime, pattern)` — Formats a datetime using strftime patterns. Common: %Y (year), %m (month), %d (day), %H (hour), %M (minute), %S (second), %B (month name), %A (weekday name). → `Result` - `datetime.now(), datetime.now(timezone)` — Returns the current time as an ISO 8601 string. Without arguments returns UTC. With a timezone (e.g. "America/Denver") returns local time with offset. → `string (ISO 8601)` - `datetime.parse(string)` — Parses a date or datetime string into a normalized ISO 8601 string. Accepts ISO 8601 dates ("2026-02-18"), datetimes ("2026-02-18T15:30:00Z"), and datetimes with offsets. → `Result` - `datetime.parts(datetime)` — Decomposes a datetime into its component parts as a map. → `Result` - `datetime.startOf(datetime, unit)` — Truncates a datetime to the start of the given unit.
Units: second, minute, hour, day, month, year. → `Result` - `datetime.toTimezone(datetime, timezone)` — Converts a datetime to the specified timezone. Returns an ISO 8601 string with the timezone offset. → `Result` - `datetime.unix(), datetime.unix(datetime)` — Returns a Unix timestamp (seconds since epoch). Without arguments returns the current UTC time. With a datetime string, converts it to a Unix timestamp. Useful for JWT iat/exp claims. → `number (Unix timestamp in seconds)` ### email Email sending and template rendering. - `email.loadTemplate(template_id)` — Loads an EmailTemplate config by ID, lookup_key, or virtual_path. Returns a Result containing the template fields. → `Result` - `email.render(template, variables)` — Renders a loaded email template with the given variables using Liquid syntax. Returns a Result with rendered html and text strings. → `Result` - `email.send({to, subject, text_body, html_body?, cc?, bcc?, from_name?, from_email?, reply_to?})` — Sends an email. Required fields: to, subject, text_body. Optional: html_body (defaults to text_body), cc, bcc, from_name, from_email, reply_to. → `Result` ### jwt Create and sign JSON Web Tokens for service authentication. - `jwt.decode(token)` — Decodes a JWT and returns the payload claims WITHOUT verifying the signature. Use this to read claims from a token, not to validate it. → `Result with claims map` - `jwt.sign(claims, secret, algorithm)` — Signs a JWT with the given claims, secret/key, and algorithm. Supported algorithms: RS256 (RSA private key PEM), HS256 (shared secret string). Returns a Result — use unwrap() to get the token string. → `Result with signed JWT string` ### map Map/object helpers. - `map.delete(object, key)` — Returns map without the provided key. → `map` - `map.entries(object)` — Returns [[key, value], ...] pairs for a map. → `list` - `map.filterKeys(object, fn(key) -> boolean)` — Keeps entries whose key passes predicate. 
→ `map` - `map.fromEntries(entries)` — Builds a map from [[key, value], ...] entries. → `map` - `map.get(object, key), map.get(object, key, defaultValue)` — Reads a value from a map with optional default fallback. → `any` - `map.has(object, key)` — Returns true when key exists in map. → `boolean` - `map.keys(object)` — Returns map keys. → `list` - `map.mapValues(object, fn(value) -> newValue)` — Transforms each value while preserving keys. → `map` - `map.merge(left, right)` — Merges two maps. Keys in right overwrite left. → `map` - `map.put(object, key, value)` — Returns map with key set to value. → `map` - `map.size(object)` — Returns map size. → `integer` - `map.values(object)` — Returns map values. → `list` ### math Math helpers. - `math.abs(number)` — Returns absolute value. → `number` - `math.ceil(number)` — Rounds number up to nearest integer. → `integer` - `math.floor(number)` — Rounds number down to nearest integer. → `integer` - `math.max(a, b), math.max(list)` — Returns maximum of two numbers or max element from list. → `number` - `math.min(a, b), math.min(list)` — Returns minimum of two numbers or min element from list. → `number` - `math.pow(base, exponent)` — Returns base raised to exponent. → `number` - `math.round(number)` — Rounds number to nearest integer. → `integer` - `math.sqrt(number)` — Returns square root of non-negative numbers. → `number` ### persona_templates Bound API namespace persona_templates. - `persona_templates.install({user: ..., template: ...})` — Install a persona template for a user. The `template_id` parameter accepts a persona template ID, key, or a config ID (uuid, public id, lookup key, or virtual path) for a stored PersonaTemplate config. → `The installed persona` - `persona_templates.list({app: ...})` — List persona templates for an app → `Result` - `persona_templates.show({app: ..., persona_template: ...})` — Show a single persona template → `Persona template` ### personas Bound API namespace personas. 
- `personas.list({user: ..., filter: ...})` — List personas for a user → `Result` ### requests HTTP client for making requests to external APIs. - `http.delete(url), http.delete(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP DELETE request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` - `http.get(url), http.get(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP GET request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` - `http.head(url), http.head(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP HEAD request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` - `http.patch(url), http.patch(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP PATCH request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` - `http.post(url), http.post(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP POST request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` - `http.put(url), http.put(url, { headers?, query?, body?, timeout?, auth? })` — Makes an HTTP PUT request. Options: headers (map), query (map), body (map or string), timeout (seconds, default 30), auth ({bearer: token} or {basic: {username, password}}). → `Result` ### result Result type helpers. All functions handle non-Result inputs defensively (no crashes). - `result.err(message), result.err(code, message)` — Constructs an Err result with optional code. 
→ `Result` - `result.isErr(value)` — Returns true if value is an Err result. Returns false for non-Result values. → `boolean` - `result.isOk(value)` — Returns true if value is an Ok result. Returns false for non-Result values. → `boolean` - `result.map(result, fn(value) -> newValue)` — Applies mapper to Ok value, returns Err unchanged. Returns non-Result values unchanged. → `Result` - `result.ok(value)` — Constructs an Ok result wrapping the given value. → `Result` - `result.unwrapOr(result, default)` — Returns the Ok value or the default. Returns default for non-Result values. → `any` ### slack Send messages to Slack channels via the agent's Slack bot integration. - `slack.send({channel, text, thread_ts?})` — Posts a message to a Slack channel. Requires channel (e.g. "#alerts") and text. Optional thread_ts for replying in a Slack thread. The agent must have integration/slack_bot installed. → `Result with {ok: true} or error` ### string String helpers. - `string.capitalize(value)` — Uppercases the first character, leaves the rest unchanged. → `string` - `string.charAt(value, index)` — Returns the character at the given index, or null if out of bounds. Supports negative indices. → `string | null` - `string.endsWith(value, suffix)` — Checks whether value ends with suffix. → `boolean` - `string.format(template, ...args)` — C-style string formatting. Supported specifiers: %s (string), %d (integer), %f (float, 6 decimals), %.Nf (float with N decimal places), %j (compact JSON), %J (pretty-printed JSON), %% (literal %). Example: string.format("Hello %s, you are %d", name, age) → `string` - `string.includes(value, substring)` — Checks whether value contains substring. → `boolean` - `string.indexOf(value, substring)` — Returns the byte position of the first occurrence, or -1 if not found. → `integer` - `string.lastIndexOf(value, substring)` — Returns the byte position of the last occurrence, or -1 if not found. → `integer` - `string.length(value)` — Returns character count. 
→ `integer` - `string.lowercase(value)` — Lowercases a string. → `string` - `string.match(value, pattern)` — Runs a regex pattern against the string. Returns the first match with index and capture groups, or null if no match. → `{match, index, groups} | null` - `string.padEnd(value, targetLength), string.padEnd(value, targetLength, padString)` — Pads the end of the string to the target length. Defaults to spaces. → `string` - `string.padStart(value, targetLength), string.padStart(value, targetLength, padString)` — Pads the start of the string to the target length. Defaults to spaces. → `string` - `string.repeat(value, count)` — Repeats the string count times. Max count is 10,000. → `string` - `string.replace(value, pattern, replacement)` — Replaces all occurrences of a literal pattern with replacement. → `string` - `string.replacePattern(value, regexPattern, replacement)` — Replaces all regex matches with replacement. Supports capture group backreferences (\1, \2). → `string` - `string.reverse(value)` — Reverses the string. → `string` - `string.split(value, separator)` — Splits a string into a list by separator. → `list` - `string.startsWith(value, prefix)` — Checks whether value starts with prefix. → `boolean` - `string.substring(value, start), string.substring(value, start, length)` — Returns a substring from start with optional length. → `string` - `string.test(value, pattern)` — Tests whether a regex pattern matches anywhere in the string. → `boolean` - `string.toNumber(value)` — Parses a string to a number (integer or float). Returns null if the string is not a valid number. → `number | null` - `string.toString(value)` — Converts any value to its string representation. Maps and lists are JSON-encoded. → `string` - `string.trim(value)` — Trims surrounding whitespace. → `string` - `string.trimEnd(value)` — Trims trailing whitespace. → `string` - `string.trimStart(value)` — Trims leading whitespace. → `string` - `string.uppercase(value)` — Uppercases a string. 
→ `string` ### threads Bound API namespace threads. - `threads.create({user: ..., thread: ..., skip_welcome_message: ...})` — Create a thread for a user → `The created thread` - `threads.list({user: ..., filter: ...})` — List threads for a user → `Result` - `threads.toggle_persona({user: ..., thread: ..., persona: ..., enabled: ...})` — Toggle a persona on/off for a user thread → `204 No Content` ### users Bound API namespace users. - `users.create({app: ..., email: ..., full_name: ..., org: ..., org_role: ..., is_system_user: ..., skip_onboarding: ...})` — Create a new user for an app → `Created user` - `users.list({app: ..., page: ..., page_size: ..., search: ..., status: ..., is_system_user: ..., email: ..., org: ..., org_role: ...})` — List paginated users for an app → `Result` ## Examples ### Data transformation ``` let items = $.order.items let arr = import("array") let total = arr.reduce(items, 0, fn(sum, item) { sum + item.price * item.qty }) {total: total, count: arr.length(items)} ``` ### Filtering and mapping ``` let users = $.users let active = array.filter(users, fn(u) { u.status == "active" }) array.map(active, fn(u) { {name: string.uppercase(u.name), email: u.email} }) ``` ### Conditional logic with defaults ``` let role = $.user.role || "viewer" let limit = if (role == "admin") { 1000 } else { 100 } {role: role, limit: limit} ``` ### String formatting ``` let name = $.user.name let count = array.length($.items) string.format("Hello %s, you have %d items", name, count) ``` ### Error handling ``` let http = import("requests") let resp = unwrap(http.get($.api_url), null) if (resp) { resp.body } else { {error: "request failed"} } ``` ### Working with dates ``` let dt = import("datetime") let now = dt.now() let deadline = unwrap(dt.parse($.due_date)) let days_left = unwrap(dt.diff(deadline, now, "days")) if (days_left < 0) { "overdue by " + string.toString(math.abs(days_left)) + " days" } else { string.toString(days_left) + " days remaining" } ``` ### Building
maps dynamically ``` let entries = array.map($.fields, fn(f) { [f.key, string.trim(f.value)] }) map.fromEntries(entries) ``` ### HTTP POST with headers ``` let http = import("requests") let resp = unwrap(http.post("https://api.example.com/webhooks", { body: {event: "order.created", data: $.order}, headers: {"X-Api-Key": $.api_key}, timeout: 30 })) resp.body ``` ### Regex matching ``` let email = $.user.email if (string.test(email, "^[^@]+@[^@]+\\.[^@]+$")) { let parts = string.match(email, "^([^@]+)@(.+)$") if (parts) { {local: parts.groups[0], domain: parts.groups[1]} } else { {error: "parse failed"} } } else { {error: "invalid email"} } ``` ### Chained data pipeline ``` let orders = $.orders // Filter → transform → aggregate let result = array.filter(orders, fn(o) { o.status == "completed" }) let result = array.map(result, fn(o) { {id: o.id, total: o.price * o.qty, date: o.created_at} }) let grandTotal = array.reduce(result, 0, fn(sum, o) { sum + o.total }) {orders: result, grand_total: grandTotal, count: array.length(result)} ``` ### Function composition pattern ``` fn pipe(value, fns) { array.reduce(fns, value, fn(acc, f) { f(acc) }) } let result = pipe($.input, [ fn(s) { string.trim(s) }, fn(s) { string.lowercase(s) }, fn(s) { string.replace(s, " ", "-") } ]) result ``` --- ### Impersonation URL: https://docs.archagents.com/docs/impersonation Summary: Adopt one agent's local context so you can inspect its tools, skills, and behavior from the same environment your coding agent uses. ## Overview Impersonation lets you step into an agent's local context on your machine. That does not mean "pretend in a vague way."
It means: - fetch the agent manifest - write local state for that agent session - inspect the tools and skills attached to that agent - run tools as that agent - install linked skills into Claude, Codex, or OpenCode This is one of the strongest operator loops in the platform because it closes the gap between: - the live agent definition - the coding agent on your machine - the actual tools and skills the live agent can use The impersonation loop Impersonation turns a remote agent definition into a local development loop: start, inspect, run, sync, and stop. [Diagram: Diagram showing the impersonation loop from start to local identity to tools and skills to sync and stop] --- ## A concrete example Imagine Company B is integrating with Company A's platform. Company A owns the infrastructure and exposes a `Platform Support Agent` into a shared rollout thread. An engineer working the rollout already has access to Company A's support app for this issue and needs to debug why the `acme-billing-webhooks` integration keeps failing during webhook validation. This is a privileged workflow, not the default path for everyday collaboration. The workflow: 1. join the shared rollout thread in Agent Network 2. impersonate Company A's support agent locally 3. inspect the tools and skills that agent actually has 4. run the relevant troubleshooting tool locally 5. sync if Company A updates the upstream agent configuration That is much better than guessing about prompts or reading stale screenshots. You are operating from the agent's actual attached surface after Company A has deliberately granted that access for the rollout. --- ## What impersonation can and cannot do Impersonation is precise, not magical. 
It **can**: - pull down the selected agent's current local operating surface - show which tools and skills are attached right now - let you run attached tools through the same agent surface the live agent uses - install linked skills into Claude, Codex, or OpenCode for local work It **cannot**: - turn one agent into a blanket administrator for the whole platform - bypass company boundaries or shared-thread membership rules - automatically expose private knowledge that the agent was not already configured to reach - replace the normal product workflow for shared teams, shared threads, or approvals That distinction matters. The value is not "be anyone." The value is "work locally from the same attached surface the real agent already has." The practical security boundary is: - impersonation changes your local CLI context to one selected agent - you can only impersonate an agent inside an ArchAstro app you can already access - the CLI reaches the agent through private developer endpoints scoped to that app - tool execution still goes through the agent's normal attached platform surface - company boundaries, shared-thread membership, and whatever approvals exist in the live setup still continue to apply Impersonation is powerful because it is specific. It is not a blanket admin switch. ### Authorization and review expectations Treat impersonation as privileged operator access. Before you use it, make sure your deployment is clear on: - who is allowed to impersonate agents - which apps and agents those people are allowed to impersonate - how that authorization is reviewed by the company that owns the agent - how your team records or reviews those sessions during rollout or incident work These docs do not assume impersonation is broadly available to every developer. The safe default is the opposite: grant it narrowly, for a clear business purpose, and only to the people who actually need it. 
For developer-side "login as user" flows, the platform also mints a user JWT with an `impersonated_by` claim and logs the event at warning level. That is one of the reasons to treat impersonation as an explicit operator workflow, not as a casual convenience feature. --- ## Start impersonating If your app has only one agent, the CLI can pick it automatically. If there are several, it gives you an interactive selection path. ```bash archastro impersonate start ``` Or start with an explicit agent ID: ```bash archastro --app <app-id> impersonate start <agent-id> ``` `--app` is the CLI's global app override flag. Use it only when you need to target a different ArchAstro app than the one selected by `archastro init` or your default CLI settings. In these docs, a **project** is the local repo or workspace you linked with `archastro init`. An **app** is the ArchAstro application that workspace points at. Then inspect the active state: ```bash archastro impersonate status ``` This creates the local impersonation state the CLI uses to keep the loop coherent. If you are switching companies, apps, or incidents, stop and restart cleanly instead of carrying old local state forward. --- ## Inspect tools and skills Once impersonation is active, ask the two most useful questions first: 1. what can this agent do? 2. what reusable guidance or commands does it already carry? ```bash archastro impersonate list tools archastro impersonate list skills ``` Use this step before you write new code. A lot of "missing capability" work turns out to be a case of not realizing the agent already has the right tool or skill attached. --- ## Run a tool as the agent You can execute a tool by its lookup key, builtin key, or public id: ```bash archastro impersonate run tool search --input '{"query":"acme billing webhooks retry validation"}' ``` That is the fastest way to verify whether the live agent has the right operational surface for a real troubleshooting task.
In the Company A / Company B example, this is where Company B's engineer can confirm whether the support agent's search capability actually sees the approved troubleshooting corpus before escalating further. --- ## Install a linked skill into your coding harness This is where impersonation becomes especially useful, and where review matters most. If the agent has linked skills, you can install one directly into the local coding environment: ```bash archastro impersonate install skill ``` Or choose the harness explicitly: ```bash archastro impersonate install skill --harness claude archastro impersonate install skill --harness codex --install-scope project archastro impersonate install skill --harness opencode ``` `OpenCode` here means another supported local coding harness, alongside Claude and Codex. Inspect the skill list first and install only the skill package you actually need. This makes the agent's operational knowledge available to the local coding workflow instead of leaving it trapped in the remote platform definition. That is a big part of the trusted cross-company operator story: the live agent surface can guide local work without flattening company boundaries or exposing a whole workspace. --- ## Sync after upstream changes If the agent definition changes upstream, refresh the local impersonation state: ```bash archastro impersonate sync ``` Use this when: - Company A updates the support agent's tool set - a linked skill changes - the agent manifest was revised after your local session started That keeps the local view honest. --- ## Stop cleanly When the debugging or build loop is done: ```bash archastro impersonate stop ``` That removes the local impersonation state from your machine. 
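End to end, the loop is short. Every command in this sketch appears in the sections above; the sample tool input is reused from the run step, and ids are resolved interactively where omitted:

```bash
# One full local impersonation session, start to finish
archastro impersonate start    # select (or auto-pick) the agent
archastro impersonate status   # confirm the active local state
archastro impersonate list tools
archastro impersonate list skills
archastro impersonate run tool search --input '{"query":"acme billing webhooks retry validation"}'
archastro impersonate sync     # refresh after upstream changes
archastro impersonate stop     # remove the local impersonation state
```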
--- ## Common use cases Impersonation is especially useful for: - reproducing agent behavior locally - inspecting the exact tools and skills an agent has - helping a coding agent work from the same attached surface as the live agent - debugging cross-company rollout and support flows without flattening company boundaries --- ## Best practices 1. Impersonate only the agent you actually need. 2. Inspect its attached tools and skills before expanding access. 3. Keep cross-company collaboration tied to one shared thread or business purpose. 4. Stop impersonation when the local session is done. --- ## Where to go next 1. Read [Tools](/docs/tools) for the action surface you can inspect and run. 2. Read [Skills](/docs/skills) for reusable local coding workflows attached to agents. 3. Read [Agent Network](/docs/agent-network) for the cross-company boundary model behind the example above. --- ### Tools URL: https://docs.archagents.com/docs/tools Summary: Give agents real capabilities through builtin and custom tools, then inspect and operate those capabilities from the CLI and impersonation loop. ## Overview Tools are how an agent takes action. Without tools, an agent can still reason and reply. With tools, it can: - search - inspect systems - call product logic - trigger workflows - operate through managed environments such as computer use - **knowledge** changes what the agent can know - **tools** change what the agent can do The tool model Tools sit between the agent's decision and the outside action. Some are built in. Others are custom and backed by your own workflows or logic. [Diagram: Diagram showing an agent choosing between builtin and custom tools to act on systems and threads] --- ## A concrete example Suppose Company A exposes a `Platform Support Agent` to help Company B troubleshoot a complex rollout. 
That agent might need: - a builtin search tool to search approved internal troubleshooting material - a custom tool that runs a workflow to validate webhook retries - computer use for a narrow admin task that cannot be expressed as one clean API call Tools are not random plug-ins. They are the controlled action surface the agent works through. --- ## Built-in tools | Tool key | What it does | |----------|-------------| | `search` | Search the agent's connected knowledge sources | | `knowledge_search` | Semantic search across indexed documents and data | | `integrations` | Access connected external services (GitHub, Slack, etc.) | | `long_term_memory` | Read and write the agent's persistent memory | | `artifacts` | Create and manage structured output artifacts | | `task_list` | Create and manage task lists and custom objects | | `skills` | Access the agent's linked skill packages | | `sub_agents` | Spawn and manage sub-agent sessions | | `wait` | Pause execution until a condition is met | | `scheduling` | Schedule future work or reminders | | `computer` | Execute commands on the agent's managed computer | | `images` | Process and analyze images | Attach builtin tools to an agent in the CLI or in an AgentTemplate YAML. See [Agents](/docs/agents#deploy-from-a-template) for the config format. --- ## Inspect the current tool set Before you add a new tool, inspect the ones the agent already has: ```bash archastro list agenttools --agent archastro describe agenttool ``` This is the fastest way to answer: - which tools are active? - which are builtin versus custom? - what handler or config is behind a custom tool? This is also the review step that tells you whether a tool should be trusted in the first place. --- ## Add a builtin tool Builtin tools are the fastest path when the platform already provides the capability you need. 
```bash archastro create agenttool --agent \ --kind builtin \ --builtin-tool-key search \ -k support-search ``` Then activate it: ```bash archastro activate agenttool ``` Builtin tools are a good default because they keep the setup smaller and easier to review. --- ## Add a custom tool Use a custom tool when the agent needs a capability that is specific to your workflow or product. For example, attach a workflow-backed validation tool: ```bash archastro create agenttool --agent \ --kind custom \ -n "Validate webhook retries" \ -d "Checks retry behavior for the acme-billing-webhooks integration" \ -t workflow_graph \ --config-id \ -k validate-webhook-retries ``` Then activate it: ```bash archastro activate agenttool ``` That pattern is useful because the workflow stays visible and reviewable, while the agent gets a clean action surface. ## Review the execution surface before activation A tool is a privileged capability, not a casual plug-in. Before you activate one, be clear on: - what the tool actually does - what workflow or config it points at - what systems or data it can touch - whether the action needs additional approval in your deployment The docs here describe the operator workflow, not an automatic safety guarantee. The safest pattern is to inspect the tool definition, test it through impersonation or a sandbox, then activate it only when the scope is clear. For custom tools, that means reviewing the exact workflow or config behind the tool before you trust it in a shared or production-facing flow. 
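Declaratively, the same pair of attachments can live in the AgentTemplate YAML. The builtin entry follows the `agent.yaml` format from Getting Started; the field names on the custom entry are inferred from the CLI flags above and the `handler_type`/`config_id` fields surfaced by `describe agenttool`, so treat them as a sketch, not a confirmed schema:

```yaml
tools:
  - kind: builtin
    builtin_tool_key: search
    status: active
  - kind: custom                    # custom-entry field names below are inferred, not confirmed
    key: validate-webhook-retries
    name: Validate webhook retries
    description: Checks retry behavior for the acme-billing-webhooks integration
    handler_type: workflow_graph
    config_id: <workflow-config-id> # hypothetical placeholder
    status: active
```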
--- ## Run the tool through impersonation After the tool is attached, test it through the impersonation loop: ```bash archastro impersonate start archastro impersonate list tools archastro impersonate run tool validate-webhook-retries --input '{"repository":"acme-billing-webhooks"}' ``` This is one of the best operational workflows in the platform: - attach the tool - impersonate the agent - run the exact capability the live agent would use This is how you debug the action surface without guessing. One important limit is worth being explicit about: not every attached tool is directly runnable through `impersonate run tool ...`. - builtin tools only auto-run when they resolve to one concrete callable function - script-backed custom tools can run directly - workflow-graph custom tools stay attachable and reviewable, but they are not directly executable through the impersonation run path That boundary is useful. It keeps the direct operator loop narrower than the full tool attachment model. When you use a custom tool, two fields are worth checking first: - `handler_type` tells you what kind of execution surface sits behind the tool - `config_id` tells you which workflow-backed definition the tool is pointing at Use `describe agenttool` whenever you need that detail. --- ## Update or pause a tool Tools are live operational surfaces, so it is important to make state explicit. ```bash archastro update agenttool --description "Updated description" archastro pause agenttool archastro activate agenttool ``` If a tool is behaving badly, pause it before chasing prompt changes. A lot of agent problems turn out to be tool problems. --- ## Best practices Good tool setups follow five rules: 1. start with builtin tools if they already solve the job 2. add custom tools only when the business need is real 3. keep custom tools backed by visible workflows or narrowly scoped logic 4. inspect tool state and handler details before debugging agent behavior 5. 
test tools through impersonation or a sandbox before wider rollout This is one of the main ways teams keep powerful agents understandable. --- ## Where to go next 1. Read [Skills](/docs/skills) for reusable coding-agent behavior linked to agents. 2. Read [Impersonation](/docs/impersonation) for the best local testing loop. 3. Read [Computer Use](/docs/computer-use) when the capability needs a managed workstation instead of a simple tool call. --- ### Skills URL: https://docs.archagents.com/docs/skills Summary: Create reusable skill packages, inspect the files behind them, and install linked skills into Claude, Codex, or OpenCode through a reviewed impersonation workflow. ## Overview Skills are reusable packages that carry instructions, files, and supporting material a coding agent can use. They matter in ArchAstro because they bridge two worlds: - the remote agent definition in the platform - the local coding environment where developers and coding agents actually work That bridge is what makes skills more than documentation. They can become part of the day-to-day developer loop. The skill reuse model A skill can be defined once, linked to an agent, then installed into the local coding harness through impersonation. [Diagram: Diagram showing a reusable skill being authored, linked to an agent, and installed into a local coding harness through impersonation] --- ## A concrete example Suppose Company A has an `incident-review` skill that helps engineers diagnose rollout failures: - which logs to inspect first - how to search the right troubleshooting corpus - what to post back into the shared rollout thread Company A links that skill to its `Platform Support Agent`. Then a developer or coding agent can: 1. impersonate the support agent 2. list linked skills 3. install the incident-review skill into Claude or Codex 4. 
use the same operational guidance locally while debugging the live issue This is a strong example of a trusted cross-company debug loop when Company A has explicitly approved the operator access needed for the incident. --- ## Create and inspect skills List all reusable skills: ```bash archastro list skills archastro describe skill ``` Create one directly: ```bash archastro create skill \ -n "Incident Review" \ -d "Checklist and steps for rollout incident diagnosis" \ -s incident-review \ --file ./skills/incident-review/SKILL.md ``` You can also inspect and manage the files behind a skill: ```bash archastro describe skillfile incident-review SKILL.md archastro create skillfile incident-review references/checklist.md --file ./references/checklist.md archastro update skillfile incident-review SKILL.md --file ./skills/incident-review/SKILL.md ``` A skill is not just a name. It is a versioned bundle of files that can actually guide work. --- ## Inspect which skills are linked to an agent There are two layers to understand: - the reusable skill definitions - the links from those skills into a specific agent Inspect the links like this: ```bash archastro list agentskills --agent ``` Use this when a developer asks, "Which skills does this live agent actually carry?" --- ## Install a linked skill into your coding harness This is where skills stop being passive documentation. ### What a skill package actually contains A skill is anchored by `SKILL.md` and can include supporting files beside it. Teams often keep: - `SKILL.md` for the main operating instructions - references or checklists under subpaths such as `references/` - any other supporting text files the local coding workflow needs The question is not "What is the full abstract schema?" It is "What files does this skill bundle carry, and are they the right ones for the job?" 
You can inspect that directly: ```bash archastro describe skill archastro describe skillfile SKILL.md ``` If `impersonate list skills` shows a linked skill id, use that exact returned id in `impersonate install skill ...`. Start impersonation, list the linked skills, inspect the exact skill id you need, then install one: ```bash archastro impersonate start archastro impersonate list skills archastro describe skillfile SKILL.md archastro impersonate install skill --harness claude ``` Other supported harnesses: ```bash archastro impersonate install skill --harness codex --install-scope project archastro impersonate install skill --harness opencode ``` That gives the local coding agent access to the same operational skill package the live agent carries. This is a trust boundary: - inspect the linked skill before installing it - install only the package you actually need - be especially careful in cross-company workflows, where the skill content originates from another company's live agent definition For enterprise use, the safe default is: inspect first, install second. --- ## Update skills carefully Skills are part of the real developer workflow, so small changes can matter. ```bash archastro update skill incident-review \ -d "Updated checklist for rollout incident diagnosis" \ --file ./skills/incident-review/SKILL.md ``` If a skill changes upstream and you are already impersonating the agent: ```bash archastro impersonate sync ``` That keeps the local install aligned with the latest linked skill state. --- ## Best practices Good skills are: - narrow enough to explain in one sentence - concrete enough to help a developer do real work - versioned like real operational assets - attached to the agents that need them - reviewed before they are installed into a local coding harness Keep skills focused. A good skill should feel like a sharp operational instrument — clear enough to guide real work, small enough to review quickly. --- ## Where to go next 1. 
Read [Impersonation](/docs/impersonation) for the local operating loop. 2. Read [Tools](/docs/tools) for the action surface skills often help developers use correctly. 3. Read [Samples](/docs/samples) for end-to-end product playbooks. --- ### Installations URL: https://docs.archagents.com/docs/installations Summary: Attach outside systems and capabilities to an agent, inspect their state, and understand what needs attention before they become useful. ## Overview Installations are how an agent gets attached to outside systems and capabilities. They matter because they answer questions like: - what kind of external capability is attached to this agent? - is it connected yet? - what state is it in? - what action is still required before it becomes usable? This is one of the clearest debugging views in the platform. When an attached integration or capability is not working, installations are where the status shows up first. The installation lifecycle An installation starts as an attachment, moves through setup state, and only then becomes something the agent can reliably use. [Diagram: Diagram showing an agent installation moving from kind selection to setup to active state with status details] --- ## A concrete example Suppose Company A's support agent needs access to a site or provider-backed integration so it can help Company B diagnose a broken onboarding flow. The operator path is: 1. inspect the available installation kinds 2. create the installation on the right agent 3. inspect its current state 4. follow the next action if setup is incomplete 5. activate it when it is ready This is a real lifecycle, not just one create command. 
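Installations can also be declared up front in an `AgentTemplate` rather than created one command at a time. A hedged sketch: the `memory/long-term` block matches the getting-started template, while the `web/site` config fields are an assumption mirroring the `--config` JSON used with the CLI later on this page.

```yaml
installations:
  - kind: memory/long-term
    config: {}
  # Assumed shape: mirrors the --config JSON passed to `create agentinstallation`
  - kind: web/site
    config:
      url: https://status.example.com
```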
--- ## Available installation kinds | Kind | What it connects | |------|-----------------| | `memory/long-term` | Persistent agent memory | | `archastro/thread` | Thread context and history | | `integration/github` | GitHub personal OAuth (repo access, issues, PRs) | | `integration/github_app` | GitHub App (org-wide repo access, bot identity for PR reviews) | | `integration/slack` | Slack user OAuth | | `integration/slack_bot` | Slack bot (post to channels as bot identity) | | `integration/gmail` | Gmail inbox access | | `integration/outlook` | Outlook/Microsoft 365 inbox access | | `web/site` | Website content for knowledge indexing | Check the full list with `archastro list agentinstallationkinds`. --- ## Inspect available kinds Before you attach anything, inspect the kinds the platform supports for the current app: ```bash archastro list agentinstallationkinds ``` This is where you discover what categories are actually available instead of guessing from screenshots or old examples. --- ## Create an installation Create one for an agent: ```bash archastro create agentinstallation \ --agent <agent-id> \ --kind web/site \ --config '{"url":"https://status.example.com"}' ``` `web/site` here is a literal installation kind value, not a path. Different apps expose different kinds, so always start with `list agentinstallationkinds` before you script one. Another installation kind may require provider-specific config instead. The exact input depends on the kind. Installations are attached to agents explicitly. They are not ambient platform magic. --- ## Inspect installation state After creation, inspect it directly: ```bash archastro list agentinstallations --agent <agent-id> archastro describe agentinstallation <installation-id> ``` This is the command loop you use to answer: - what state is this installation in? - is there a next action? - is there a provider-specific connect path? - did setup fail? That information is much more actionable than vague "integration isn't working" reports.
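When you script installation creation, hand-quoting the inline `--config` JSON is error-prone. A hedged helper sketch: build the same `web/site` payload shown above with `jq` (the use of `jq` is an assumption about your local tooling, not a platform requirement):

```shell
# Build the --config payload with jq so shell quoting cannot corrupt the JSON.
CONFIG=$(jq -cn --arg url "https://status.example.com" '{url: $url}')
echo "$CONFIG"
# → {"url":"https://status.example.com"}
# Then pass it along:
#   archastro create agentinstallation --agent <agent-id> --kind web/site --config "$CONFIG"
```

This matters most when the URL or other config values come from variables rather than literals.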
--- ## Activate or remove it If the installation is ready: ```bash archastro activate agentinstallation <installation-id> ``` If you no longer need it: ```bash archastro delete agentinstallation <installation-id> ``` This explicit lifecycle is good for both debugging and review. --- ## How installations relate to knowledge and tools Installations are often the upstream attachment surface behind: - knowledge connections - provider-backed integrations - certain tool capabilities They matter even when a developer thinks they are "really working on knowledge" or "really working on tools." Installations often tell you whether the underlying attachment is healthy before you debug anything higher level. --- ## Best practices Good installation workflows follow four rules: 1. inspect kinds before creating 2. attach only what the agent actually needs 3. check status and next action before blaming the model 4. activate only when the setup is clearly ready This is one of the simplest ways to keep the public surface powerful without making it mysterious. --- ## Where to go next 1. Read [Knowledge](/docs/knowledge) for the source and ingestion layer above installations. 2. Read [Tools](/docs/tools) for action surfaces the agent can operate once attachments are ready. 3. Read [Portal](/docs/portal) for the visual operator workflow around setup and review. --- ### Computer Use URL: https://docs.archagents.com/docs/computer-use Summary: Computer use lets an agent act through a managed execution environment when simple tool calls are not enough. ## Overview Computer use gives an agent a managed place to carry out interactive tasks. Use it when the agent needs to do work that looks more like operating a computer than calling one simple tool. The mental model is: - a normal tool call is one clean action - computer use is a small working environment where the agent can carry out a sequence of visible steps That makes computer use powerful, but it also means it should be used more carefully than ordinary tools.
--- ## A concrete example Imagine an agent needs to walk through a browser-based admin interface that does not have one clean API. Computer use can help the agent: 1. open the interface 2. navigate through the relevant screens 3. gather or enter the needed information 4. report the result back into the thread or workflow That is very different from a simple "call this endpoint" tool. It is closer to giving the agent a controlled workstation for a narrow task. From the CLI, the operational loop looks like this: ```bash archastro list agentcomputers --agent <agent-id> archastro create agentcomputer --agent <agent-id> -n "ops-workstation" archastro describe agentcomputer <computer-id> archastro refresh agentcomputer <computer-id> archastro exec agentcomputer <computer-id> -c "pwd" ``` That gives you a real workflow for provisioning, checking readiness, and validating the environment before you ask the agent to rely on it. --- ## What computer use is not Computer use is not the default way agents should operate. It is the wrong choice when: - one explicit API or tool call would do - the task can be expressed as a clean workflow step - the task is sensitive enough that a human should do it directly The point is not to make agents click around for the sake of it. The point is to give them a controlled way to handle the cases where interactive work is genuinely necessary. --- ## When it helps Computer use is a good fit when an agent needs to: - work through a multi-step interface - inspect or manipulate a system that is not exposed as one clean API - carry out a guided operational task It is not the right first choice for everything. If a smaller, clearer tool will do, use the smaller tool. The rule of thumb is simple: if you can express the action as one explicit tool, do that first. Reach for computer use when the work is genuinely interactive. ### Built-in tools Agents can use computers during conversations through four built-in tools: ### `computer_exec` Execute a shell command on the agent's computer.
| Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `command` | string | Yes | Shell command to execute | | `working_directory` | string | No | Working directory | Returns `stdout`, `exit_code`, and `status`. ### `computer_write_file` Write content to a file on the computer. | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `path` | string | Yes | Absolute file path | | `content` | string | Yes | File content to write | ### `computer_read_file` Read the contents of a file from the computer. | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `path` | string | Yes | Absolute file path | ### `use_claude` Start an asynchronous Claude Code run on the agent's computer. This tool creates a durable agent session and returns immediately with a session ID instead of blocking the current turn. | Parameter | Type | Required | Description | |-----------|------|----------|-------------| | `prompt` | string | Yes | Prompt to send to Claude | | `working_directory` | string | No | Working directory for the run | | `name` | string | No | Optional label for the durable Claude session | `use_claude` only resolves when the agent has computer use, an active GitHub installation, and sub-agents enabled. At execution time the session injects GitHub and Claude credentials when available, runs Claude non-interactively on the VM, and reports the result back through the existing durable sub-agent/session flow. ### Tool resolution When an agent uses computer tools, the platform automatically routes the request to a ready computer associated with that agent. --- ## Safety guidelines Computer use increases agent capability, so it needs stronger guardrails. 
Before enabling computer use, be clear on: - what environment the agent can use - what actions are allowed - what approvals exist - how a human can review the result From the CLI: - inspect the computer status before using it - keep the environment narrow - destroy computers you no longer need --- ## Best practices 1. Start with narrow tasks. 2. Keep the environment limited to the work at hand. 3. Put sensitive actions behind explicit approval. 4. Review outputs and logs regularly. 5. Prefer simpler tools when they are sufficient. If you cannot explain why computer use is necessary for the task, it is not. --- ### Extensions & Integrations URL: https://docs.archagents.com/docs/extensions-integrations Summary: Connect agents to outside systems through built-in integrations, custom tools, MCP servers, webhooks, and scripts. ## Overview ArchAstro agents connect to outside systems in five ways: | Method | What it does | When to use it | |--------|-------------|----------------| | **Built-in integrations** | Connect to GitHub, Slack, Gmail, and other supported services | You need an agent to read from or act in a known service | | **Custom tools** | Define your own tool backed by a script, workflow, or HTTP endpoint | You need the agent to call your own APIs or business logic | | **MCP servers** | Connect to any remote MCP-compatible tool server | You want to use tools from the MCP ecosystem | | **Webhooks** | Receive inbound events from external systems | You need to trigger agent behavior from outside ArchAstro | | **Scripts** | Write custom logic with HTTP calls, JWT auth, and data transformation | You need to call any API with full control over the request | These methods compose. An agent can use built-in GitHub integration for knowledge, a custom tool for your billing API, and a script-based routine that calls a third-party webhook — all at the same time. --- ## Built-in integrations ArchAstro has native support for connecting to common services. 
Each integration handles authentication, token refresh, and data access. | Integration | What it provides | |-------------|-----------------| | **GitHub** (OAuth) | Repo access, issues, PRs — for knowledge indexing and code context | | **GitHub App** | Org-wide repo access with a bot identity — for PR reviews, automated comments | | **Slack Bot** | Post messages to channels, read channel history | | **Gmail** | Read inbox, send emails | | **Outlook** | Read inbox, send emails via Microsoft 365 | Connect integrations through the CLI or developer portal: ```bash archastro create agentinstallation --agent <agent-id> --kind integration/github_app archastro authorize agentinstallation <installation-id> archastro activate agentinstallation <installation-id> ``` Once connected, the agent can use the integration through its builtin tools (e.g. `integrations`, `knowledge_search`). See [Installations](/docs/installations) for the full list of available kinds and the setup lifecycle. --- ## Custom tools When the agent needs to call your own APIs or run business-specific logic, create a custom tool. Custom tools can be backed by: | Handler | How it works | |---------|-------------| | **Script** | Runs an ArchAstro script that can make HTTP calls, transform data, and return results | | **Workflow** | Triggers a multi-step workflow with branching, approvals, and external calls | | **HTTP endpoint** | Calls an external URL directly with the tool arguments as the request body | Define custom tools in an AgentTemplate: ```yaml tools: - kind: custom name: lookup_order description: Look up a customer order by ID parameters: type: object properties: order_id: type: string handler_type: script config_ref: order-lookup-script ``` Or create them directly: ```bash archastro create agenttool --agent <agent-id> \ --kind custom \ --name "lookup_order" \ --description "Look up a customer order by ID" ``` See [Tools](/docs/tools) for the full tool model and impersonation workflow.
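To make the HTTP-endpoint handler row concrete: the platform sends the tool arguments as the request body, so a handler behind the `lookup_order` tool above would receive a payload shaped by the declared `parameters` schema. A hedged simulation of that parsing step (the payload value is invented for illustration; the handler itself is not shown):

```shell
# Simulate the request body an HTTP-backed lookup_order tool would receive:
# an arguments object matching the parameters schema declared in the template.
BODY='{"order_id": "ORD-2041"}'
ORDER_ID=$(printf '%s' "$BODY" | jq -r '.order_id')
echo "$ORDER_ID"   # the handler would query your order system with this value
# → ORD-2041
```

Whatever the handler returns becomes the tool result the agent sees, so keep the response small and structured.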
--- ## MCP servers ArchAstro supports connecting to remote [MCP (Model Context Protocol)](https://modelcontextprotocol.io) servers. This lets agents use any tool from the MCP ecosystem — Stripe, Notion, Sentry, Linear, and hundreds of others. MCP servers are defined as configs: ```yaml kind: MCPServer key: stripe-mcp name: Stripe url: https://mcp.stripe.com auth: type: bearer token_source: integration ``` When an MCP server is connected with an integration credential, the agent gets access to all the tools that server exposes — without you writing any custom tool definitions. --- ## Webhooks Inbound webhooks let external systems trigger agent behavior. When ArchAstro receives a webhook, it can: - trigger an automation - start a workflow - ingest data into knowledge Webhooks can be configured from the CLI or in the developer portal under **Project -> Webhooks**. Each webhook gets a unique URL that external systems can POST to. --- ## Scripts with HTTP access For full control over external API calls, use scripts with the `requests` namespace: ``` let http = import("requests") let jwt = import("jwt") let dt = import("datetime") // Sign a JWT for service account auth let token = unwrap(jwt.sign({ iss: env.CLIENT_EMAIL, scope: "https://www.googleapis.com/auth/cloud-platform", aud: "https://oauth2.googleapis.com/token", iat: dt.unix(), exp: dt.unix() + 3600 }, env.PRIVATE_KEY, "RS256")) // Exchange for access token let resp = unwrap(http.post("https://oauth2.googleapis.com/token", { headers: {"Content-Type": "application/x-www-form-urlencoded"}, body: "grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer&assertion=" + token })) resp.body.access_token ``` Scripts can call any HTTP API — REST, GraphQL, webhooks, OAuth token exchanges. Combined with `env` variables for secrets, this gives you full programmatic access to any external service. See [Scripts](/docs/scripts) and the [Script Language Reference](/docs/script-reference) for the full language. 
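The script above leans on the platform's `jwt` module; the token it signs is the standard three-segment `header.payload.signature` form. A hedged plain-shell illustration of that same structure, using HS256 with a demo secret (since RS256 needs the real private key material) and assuming `openssl` is available locally:

```shell
# Build a JWT by hand: base64url(header).base64url(claims).base64url(signature)
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

HEADER=$(printf '%s' '{"alg":"HS256","typ":"JWT"}' | b64url)
CLAIMS=$(printf '%s' '{"iss":"svc@example.com","aud":"https://oauth2.googleapis.com/token"}' | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$CLAIMS" \
  | openssl dgst -sha256 -hmac 'demo-secret' -binary | b64url)

echo "$HEADER.$CLAIMS.$SIG"   # three dot-separated base64url segments
```

This is only to show what `jwt.sign` is producing under the hood; in real scripts keep using the platform module with the key from `env`.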
--- ## The API Everything in ArchAstro is API-first. The same operations available in the CLI and portal are available through the REST API: - Create and manage agents, teams, threads, and messages - Deploy configs and manage installations - Send messages and trigger agent behavior programmatically See the [API Reference](/openapi.json) for the full specification. --- ## Choosing the right approach | You want to... | Use | |----------------|-----| | Connect to GitHub, Slack, or Gmail | Built-in integration | | Call your own product API from an agent | Custom tool | | Use tools from the MCP ecosystem | MCP server | | Trigger agent work from an external system | Webhook | | Call any HTTP API with full control | Script with `requests` | | Build a product on top of ArchAstro | REST API | Start with built-in integrations for supported services. Use custom tools or scripts when you need something specific to your business. --- ### CLI URL: https://docs.archagents.com/docs/cli Summary: The terminal workflow for building and operating agents. ## Overview The ArchAstro CLI is the fastest way to build and operate agents from the terminal. Use it to: - sign in and connect a project - deploy agents from YAML templates - create conversations and send test messages - inspect runs, threads, tools, and knowledge - manage configs, sandboxes, and installations If you use a coding agent (Claude Code, Codex, Gemini CLI), the CLI is the primary setup path. --- ## Fastest path If you want to get from zero to an agent you can test quickly, do this: 1. install the CLI 2. sign in 3. connect the current project with `archastro init` 4. create or deploy an agent 5. open a thread and send it a message The rest of this page expands those steps. 
## How to think about the CLI Teams use the CLI in one of three ways: | Mode | What you are doing | |------|--------------------| | **First-run setup** | Link a repo and create your first testable agent | | **Daily development** | Inspect, update, and test agents, routines, threads, and sandboxes | | **Repeatable deployment** | Keep configs in files and deploy them in a reviewable way | `--json` is a global flag. Scripted examples often place it before the verb, as in `archastro --json create agent ...`. The CLI loop Connect the repo, create or update the object, test it, inspect the result, then make the next change. [Diagram: Diagram showing the ArchAstro CLI loop from init to create to test to inspect to iterate] --- ## 1. Install the CLI GitHub Releases are the public distribution path for the CLI. ### macOS ```bash brew install ArchAstro/tools/archastro ``` ### Linux ```bash curl -fsSL https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.sh | bash ``` ### Windows ```powershell irm https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.ps1 | iex ``` If your organization does not allow piped installers, download the release assets from GitHub Releases and inspect them before running them locally. Verify the install: ```bash archastro --help ``` ## 2. Sign in ```bash archastro auth login archastro auth status ``` The CLI opens a browser so you can sign in and authorize the local session. Use `archastro auth logout` when you want to clear the current session. --- ## 3. Connect the current project ```bash cd my-project archastro init ``` `archastro init` connects the current directory to ArchAstro and writes an `archastro.json` file in the project root. That file tells the CLI which project and config directory the current workspace should use. In these docs, a **project** is your local linked workspace. An **app** is the ArchAstro application that workspace points at. --- ## 4. 
Create an agent you can test ### Repeatable setup: deploy from a template Write an `agent.yaml` in your project and deploy it in one command: ```bash archastro deploy agent agent.yaml --name "Support Agent" ``` This is the recommended path. The template is reviewable, reusable, and keeps your agent config in version control alongside your code. ### Quick experiment: create an agent directly If you want to understand each piece individually, you can create an agent directly: ```bash archastro create agent -n "Support Agent" -k support-agent \ -i "You help users resolve billing and support problems with short, concrete answers." ``` Here `-k` sets the agent's lookup key: the key you can search for and reuse in scripts and CLI flows. The quickest proof that this is an AI agent, not just a saved object, is one direct session: ```bash archastro create agentsession --agent <agent-id> \ --instructions "Help a user resolve billing questions. Ask one clarifying question if needed." archastro exec agentsession <session-id> \ -m "How should we handle invoice failures?" ``` If you want the agent to react automatically inside the product, add a routine: ```bash archastro create agentroutine --agent <agent-id> \ -n "Reply to new messages" \ -e message.created \ -t script \ --script "{ handled: true }" archastro activate agentroutine <routine-id> ``` `message.created` is the basic "new thread message arrived" event. `script` is the smallest handler type and is useful for proving the wiring before you move into richer workflow-backed behavior. New routines start in `draft`, so save the routine ID from the `create` command and activate it before you test thread traffic. That inline script still runs under the same scoped platform access rules as the agent and routine that triggered it. It is useful for small deterministic checks, not as a replacement for reviewable workflows. For anything beyond this first proof, move the logic into a proper script or workflow where you can inspect and test the input shape directly.
### Direct session versus thread The CLI exposes both because they solve different problems: - `agentsession` is the quickest direct test of the agent itself - threads and messages are the product conversation surface used over time Start with an `agentsession` when you want a quick proof. Move to threads when you want to inspect the full runtime loop with members, messages, and ongoing behavior. ## A realistic first CLI session Here is what a first CLI session looks like: 1. run `archastro init` in the repo you care about 2. create one agent with a very narrow job 3. run one direct agent session and inspect the result 4. create and activate one routine that reacts to `message.created` 5. create one test user and one test thread 6. send one message That is enough to answer the questions teams have on day one: - Did we connect the right project? - Can we create agents from the terminal? - Does the agent actually participate in conversations? - Can we inspect and iterate from here? --- ## 5. Open a thread and send a message Create a thread that the agent owns: ```bash archastro create thread -t "Support" --owner-type agent --owner-id <agent-id> ``` Create or reuse a user who will send the test message: ```bash archastro create user --system-user -n "Demo User" ``` `--system-user` creates a bot-style non-login user. Use it when you need test traffic from the CLI without creating a person account. If you later need machine-to-machine auth for that identity, issue a dedicated system-user access token instead of trying to log in as a person. Those tokens are separately minted, can be listed and revoked, and are checked against the platform's system-token registry on use. Add that user to the thread: ```bash archastro create threadmember --thread <thread-id> --user-id <user-id> ``` Then send a message: ```bash archastro create threadmessage --thread <thread-id> --user-id <user-id> \ -c "How should we handle invoice failures?"
``` Add `--wait` when you want the CLI to stay attached and print the resulting response activity before returning. This is the shortest path to proving that the agent exists, can join a conversation, and can start doing work in that thread. --- ## What can go wrong ### The CLI is not authenticated ```text Not authenticated. Run: archastro auth login ``` Fix: ```bash archastro auth login archastro auth status ``` ### The current repo is not linked ```text No archastro.json found. Run: archastro init ``` Fix: ```bash archastro init ``` ### The project is linked, but the token is missing ```text No token for this project. Run: archastro init ``` Fix: ```bash archastro init ``` If the repo was already linked and the local session was cleared later, run both: ```bash archastro auth login archastro init ``` --- ## Common workflows ### Inspect and manage agents ```bash archastro list agents archastro describe agent <agent-id> archastro update agent <agent-id> -n "Senior Support Agent" archastro delete agent <agent-id> ``` Use this loop when you are tuning instructions, names, routines, or ownership and want to confirm the live object state. ### Manage conversations ```bash archastro list threads archastro describe thread <thread-id> archastro list threadmembers --thread <thread-id> archastro list threadmessages --thread <thread-id> archastro list threadmessages --thread <thread-id> --full ``` `--full` switches from the compact message table to the full conversation view. This is the quickest way to answer "what happened?" when a test did not behave the way you expected. ### Add a computer to an agent ```bash archastro list agentcomputers --agent <agent-id> archastro create agentcomputer --agent <agent-id> -n "dev" archastro describe agentcomputer <computer-id> ``` Reach for this when an agent needs a managed computer environment rather than only message- and workflow-based behavior.
### Operate the serious surfaces Once you move beyond a first agent, the CLI becomes an operator console for the live platform surface: ```bash # Become the agent locally archastro impersonate start archastro impersonate list tools archastro impersonate list skills # Inspect knowledge state archastro list contextsources archastro list contextingestions --status failed # Inspect installations and tool attachments archastro list agentinstallations --agent <agent-id> archastro list agenttools --agent <agent-id> # Inspect durable memory archastro list agentworkingmemory --agent <agent-id> ``` The day-to-day loop for agent development: - inspect what the agent is attached to - inspect what it can use - inspect what it remembers - debug the agent's tool and skill surface before changing prompts Privileged workflows such as impersonation are deliberate operator actions. Use them only from the app and company context your deployment has explicitly approved. ### Work with config files ```bash archastro configs sync archastro configs deploy mkdir -p ./tmp archastro configs sample workflow --to-file ./tmp/workflow.sample.yaml archastro configs validate -k workflow -f ./tmp/workflow.sample.yaml archastro configs sample ``` Config files become more important as the setup gets larger. If you are still exploring the product, direct `create` commands are simpler. Once you know what you want, move the stable setup into files. Read [Configs](/docs/configs) for the full file-backed workflow. --- ## Common command groups Think of these groups in the same order you would build with ArchAstro: 1. agents 2. users and teams 3. threads and messages 4. sandboxes and automations 5. knowledge, tools, and installations 6.
config files and project-level setup ### Agents ```bash archastro list agents archastro describe agent <agent-id> archastro create agent -n "Support Agent" archastro update agent <agent-id> -n "New Name" archastro delete agent <agent-id> ``` ### Users ```bash archastro list users archastro describe user <user-id> archastro create user -e alice@example.com -n "Alice" archastro create user --system-user -n "Demo User" archastro delete user <user-id> ``` ### Teams ```bash archastro list teams archastro describe team <team-id> archastro create team -n "Engineering" archastro update team <team-id> -n "New Name" archastro delete team <team-id> ``` ### Threads ```bash archastro list threads archastro describe thread <thread-id> archastro create thread -t "Project thread" --user <user-id> archastro create threadmember --thread <thread-id> --agent-id <agent-id> archastro create threadmessage --thread <thread-id> --user-id <user-id> -c "Hello" ``` Use `--skip-welcome-message` on thread creation when you want the first visible message in the thread to be the one you send on purpose. Use `--wait` on `create threadmessage` when you want the CLI to stay attached for the response loop.
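The thread commands above chain naturally once you capture each ID. With the global `--json` flag, each create call returns an object whose `id` field you can extract; a hedged sketch using a canned response (the `{"id": ...}` shape follows the `jq -r '.id'` examples later on this page, not a documented schema):

```shell
# Simulate capturing the id from a create call's --json output.
# Real run would be: THREAD_ID=$(archastro create thread -t "Project thread" --json | jq -r '.id')
RESPONSE='{"id":"thr_123","title":"Project thread"}'
THREAD_ID=$(printf '%s' "$RESPONSE" | jq -r '.id')
echo "$THREAD_ID"
# → thr_123
```

The same pattern chains into `create threadmember` and `create threadmessage` without any copy-pasting of IDs.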
### Automations ```bash archastro list automations archastro describe automation <automation-id> archastro create automation -n "Daily Report" -t scheduled --schedule "0 8 * * *" archastro activate automation <automation-id> archastro pause automation <automation-id> archastro delete automation <automation-id> ``` ### Knowledge ```bash archastro list integrations archastro list contextsources archastro list contextingestions --status failed archastro list contextitems --source <source-id> ``` ### Tools ```bash archastro list agenttools --agent <agent-id> archastro describe agenttool <tool-id> archastro create agenttool --agent <agent-id> --kind builtin --builtin-tool-key search archastro activate agenttool <tool-id> ``` ### Installations ```bash archastro list agentinstallationkinds archastro list agentinstallations --agent <agent-id> archastro create agentinstallation --agent <agent-id> --kind web/site --config '{"url":"https://example.com"}' archastro describe agentinstallation <installation-id> ``` ### Impersonation ```bash archastro impersonate start archastro impersonate status archastro impersonate list tools archastro impersonate list skills archastro impersonate run tool search --input '{"query":"acme billing webhooks"}' archastro impersonate stop ``` ### Skills ```bash archastro list skills archastro describe skill <skill-id> archastro create skill -n "Incident Review" --file ./skills/incident-review/SKILL.md archastro describe skillfile incident-review SKILL.md ``` ### Memory and routine runs ```bash archastro list agentworkingmemory --agent <agent-id> archastro list agentroutineruns --routine <routine-id> archastro list automationruns --automation <automation-id> ``` ### Files and project setup ```bash archastro list files ``` Use the developer portal for domains, webhooks, and other project-level setup that does not need to live in your terminal workflow.
### Configs ```bash archastro configs init archastro configs kinds archastro configs sync archastro configs deploy archastro configs sample mkdir -p ./tmp archastro configs sample workflow --to-file ./tmp/workflow.sample.yaml archastro configs validate -k workflow -f ./tmp/workflow.sample.yaml ``` Use configs when the setup has graduated from exploration into something you want to keep in files and review like code. ### Scripts ```bash archastro script validate -f ./path/to/script.yaml archastro script run -f ./path/to/script.yaml --input '{"key": "value"}' archastro script docs ``` ### Organizations Organization setup is typically handled as part of operator-managed multi-company deployment work, not as part of the normal first-run CLI path. Use [Organizations](/docs/organizations) to understand the boundary model when your deployment includes company-specific spaces. ### Sandboxes ```bash archastro list sandboxes archastro describe sandbox <sandbox-id> archastro create sandbox -n "Staging" -s staging archastro activate sandbox archastro list sandboxmails --sandbox <sandbox-id> ``` Here `-s` sets the sandbox slug: the short unique key for that sandbox inside the app. `activate sandbox` re-authenticates with a sandbox-scoped token. Pass a sandbox ID directly (`activate sandbox <sandbox-id>`) or omit it to get an interactive selection flow. --- ## Scripting with JSON output All commands support `--json`, which makes the CLI easy to use from shell scripts and coding-agent workflows. ```bash archastro list agents --json | jq -r '.data[].id' USER_ID=$(archastro create user -e bot@example.com --system-user --json | jq -r '.id') archastro list teams --json | jq '.data[] | select(.name | contains("Eng"))' ``` Because `--json` is global, `archastro --json create user ...` works too. Use whichever placement you prefer, but keep it consistent inside a script.
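Because you rarely control the exact records in a live project, it helps to rehearse a jq filter against a canned payload before wiring it into a script. A hedged sketch — the `{"data": [...]}` envelope is inferred from the `.data[]` filters above, not a documented schema, and the records are invented:

```shell
# Try the .data[].id filter from the examples above against a canned response.
SAMPLE='{"data":[{"id":"agt_1","name":"Support Agent"},{"id":"agt_2","name":"Docs Agent"}]}'
printf '%s\n' "$SAMPLE" | jq -r '.data[].id'
# → agt_1
# → agt_2
```

Once the filter behaves on the sample, swap the canned payload for the real `archastro ... --json` pipeline.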
--- ## Shell completion ```bash eval "$(archastro completion bash)" eval "$(archastro completion zsh)" archastro completion fish | source ``` --- ## Project files | File | Purpose | |------|---------| | `./archastro.json` | Project mapping and local CLI settings | --- ### Configs URL: https://docs.archagents.com/docs/configs Summary: Move a setup you trust into files so the team can review and ship it cleanly. ## Overview Configs are the file-backed definition layer for an ArchAstro project. Use them when you have already proved a setup works and now want to: - keep it in version control - review changes before deployment - sync the live project into local files - redeploy the same shape without rebuilding it by hand Two rules of thumb: direct `create` commands are fast for exploration, and `configs/` is the right home once the shape is real and worth keeping. --- ## What a config actually is A config is a versioned project object stored as file content plus a virtual path. That means: - the platform still has live objects - the CLI can pull those objects into local files - your team can review and redeploy them from the repo This is what makes ArchAstro feel like a real developer platform instead of a sequence of one-off clicks or shell commands. The CLI resource and the local `configs/` workflow are related but different: - `archastro config ...` works with live config objects directly - `archastro configs ...` manages the local file-backed sync and deploy loop Teams use both. They inspect live objects when they need to debug, then use `configs/` when they want changes they can review and redeploy. --- ## The basic config loop Start by creating the local config directory: ```bash archastro configs init ``` Then inspect the kinds the project supports: ```bash archastro configs kinds archastro configs sample agent archastro configs sample workflow ``` Use those samples to understand the file shape before you edit anything.
When you want to pull the current project state into local files: ```bash archastro configs sync ``` When you are ready to push reviewed changes back: ```bash archastro configs deploy ``` If you need to inspect a single live config while you are debugging: ```bash archastro list configs --kind workflow archastro describe config archastro configs content ``` That is often the fastest way to answer "what is the platform actually holding right now?" before you sync anything locally. --- ## Validate before you deploy The safest pattern is: 1. generate or edit the config locally 2. validate the content 3. deploy only after it is readable and intentional For example: ```bash mkdir -p ./tmp archastro configs sample workflow --to-file ./tmp/workflow.sample.yaml archastro configs validate -k workflow -f ./tmp/workflow.sample.yaml ``` That is especially useful when a coding agent is generating config content and you want a quick sanity check before deployment. When the config already exists on the server, validate the file and then compare it to the live object before you deploy: ```bash archastro describe config archastro configs content ``` That keeps the local file and the live platform object in the same review loop. --- ## When to stay with direct commands Stay with direct commands when you are: - proving the first agent loop - testing one routine - poking at the data model - learning the CLI surface Move to configs when you are: - keeping an agent or workflow for the long term - collaborating through code review - deploying the same setup more than once - managing a project with several stable objects --- ## A realistic team pattern Teams follow this sequence: 1. create one agent directly 2. test it with an `agentsession` 3. attach the first routine or workflow 4. once the shape feels right, run `configs init` 5. sync the live setup into `configs/` 6. 
review future changes as files instead of recreating objects manually. That gives you fast learning first, then repeatability. ### A workflow-specific example Suppose the team builds a workflow from a config sample, then iterates on it. The sequence is: 1. generate a sample with `archastro configs sample workflow` 2. edit the workflow file locally 3. validate before deploy 4. run `archastro configs deploy` 5. review the result in the portal for a visual overview 6. iterate by editing the file and redeploying. That pattern keeps the CLI and source control as the primary creation path while the portal provides the visual review layer. --- ## Best practices Good config usage follows five rules: 1. prove the setup live before you freeze it into files 2. keep paths and kinds readable 3. validate generated content before deploy 4. prefer reviewed file changes over repeated ad hoc recreation 5. use sync to keep the local view honest --- ## Where to go next 1. Read [CLI](/docs/cli) for the full terminal workflow. 2. Read [Samples](/docs/samples) for end-to-end examples that move from direct commands into config files. 3. Read [Workflows](/docs/workflows) when the file-backed object you are managing is a process definition. --- ### For Coding Agents URL: https://docs.archagents.com/docs/for-coding-agents Summary: CLI-first setup instructions designed for Claude Code, Codex, Gemini CLI, and similar AI coding tools. > **Human?** This page is written for AI coding assistants. For the human getting-started guide, see [Getting Started](/docs/getting-started). ## Overview This page is for coding agents and the developers using them. Use it when the goal is to get ArchAstro working quickly and correctly inside a real codebase.
The default assumptions are: - get to an agent that can answer a test prompt quickly - use the CLI as the default setup path - use the same core agent model whether the user is deploying for a team or embedding inside a product - avoid inventing extra wrappers or scaffolding unless the user asks for them --- ## Quick setup prompt Paste this into the repo you are working in: ```text Set up ArchAstro in this repo so we can deploy an agent and test it. 1) Read: https://docs.archastro.ai/llms-full.txt 2) Ask me for any missing ArchAstro credentials or environment variables. 3) Install the ArchAstro CLI and run: archastro auth login && archastro init 4) Write an agent.yaml template (kind: AgentTemplate) with: - a clear identity/instructions - the participate preset routine (so it responds in conversations) - search and knowledge_search builtin tools - memory/long-term installation 5) Deploy it: archastro deploy agent agent.yaml --name "Support Agent" 6) Test it: - create a thread, user, and send a test message - OR create an agent session and exec a test prompt 7) When complete, summarize what was created and how to test it again. ``` --- ## Required environment variables Before running ArchAstro operations, check for these values: | Variable | Required | Purpose | |----------|----------|---------| | `ARCHASTRO_SECRET_KEY` | CI or non-interactive use | Server-side or automated authentication when browser login is not available | | `ARCHASTRO_APP_ID` | Existing project linkage only | Needed when the repo should link to a specific existing ArchAstro project | Ask the user for missing values instead of guessing them. --- ## Canonical URLs | Resource | URL | |----------|-----| | Documentation | `https://docs.archastro.ai` | | Developer portal | `https://developers.archastro.ai` | | LLM index | `https://docs.archastro.ai/llms.txt` | | Extended LLM index | `https://docs.archastro.ai/llms-full.txt` | Treat these as canonical. Do not invent alternate hosts or endpoint roots. 
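A minimal preflight sketch for the environment variable table above, assuming nothing about the CLI itself — it only checks that the named variables are present before any write action; the messages are illustrative, not CLI output:

```shell
# Hypothetical preflight check before non-interactive (CI) use.
# ARCHASTRO_SECRET_KEY and ARCHASTRO_APP_ID come from the table above;
# the wording of the messages is made up for this demo.
require_env() {
  name="$1"
  eval "value=\${$name:-}"
  if [ -z "$value" ]; then
    echo "missing: $name (ask the user; do not guess)"
    return 1
  fi
  echo "ok: $name"
}

ARCHASTRO_SECRET_KEY="sk_example_not_real"   # stand-in value for the demo
require_env ARCHASTRO_SECRET_KEY
require_env ARCHASTRO_APP_ID || true         # prints the "missing" line when unset
```

This matches the rule stated above: ask the user for missing values instead of guessing them.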
--- ## Install the CLI The CLI is the default starting point. ### macOS ```bash brew install ArchAstro/tools/archastro ``` ### Linux ```bash curl -fsSL https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.sh | bash ``` ### Windows ```powershell irm https://raw.githubusercontent.com/ArchAstro/archastro-cli/main/install.ps1 | iex ``` Verify the install: ```bash archastro --help ``` --- ## Minimal setup pattern ### 1. Authenticate ```bash archastro auth login archastro auth status ``` ### 2. Connect the project ```bash archastro init ``` ### 3. Write an agent template Create `agent.yaml` in your project: ```yaml kind: AgentTemplate key: support-agent name: Support Agent identity: | You help users resolve support and billing problems with short, concrete answers. tools: - kind: builtin builtin_tool_key: search status: active - kind: builtin builtin_tool_key: knowledge_search status: active routines: - name: Respond in conversations handler_type: preset preset_name: participate event_type: thread.session.join event_config: thread.session.join: {} status: active installations: - kind: memory/long-term config: {} ``` ### 4. Deploy it ```bash archastro deploy agent agent.yaml --name "Support Agent" ``` ### 5. Prove the agent can answer ```bash archastro create agentsession --agent \ --instructions "Help a user resolve support questions. Ask one clarifying question if needed." archastro exec agentsession \ -m "Can you help with invoice failures?" ``` ### 6. Test with a thread and message ```bash archastro create thread -t "Support test" --owner-type agent --owner-id archastro create user --system-user -n "Demo User" archastro create threadmember --thread --user-id archastro create threadmessage --thread --user-id \ -c "Can you help with invoice failures?" ``` Save the printed IDs as you go. The next command will need them. > For manual step-by-step creation without a YAML file, use `archastro create agent`, `archastro create agentroutine`, etc. 
See [CLI](/docs/cli) for all commands. --- ## Platform building blocks Use these plain meanings when explaining ArchAstro inside a repo: | Term | Meaning | |------|---------| | **Agent** | The long-lived AI identity you create and manage | | **Routine** | An event handler for the agent: when X happens, do Y | | **Tool** | An action the agent can take | | **Knowledge** | The information the agent can use | | **Thread** | The conversation where people and agents exchange messages | If the user needs a conceptual explanation, point them to [Agents](/docs/agents). --- ## Auth and safety guidance | Area | Guidance | |------|----------| | Auth | Use the published auth flows and key types only | | OAuth | Use the published device flow for CLI and non-browser flows | | Setup | Prefer CLI setup and CLI verification before reaching for APIs | If a coding agent needs exact request or response shapes after the CLI flow is already working, use [`/openapi.json`](/openapi.json) as the advanced reference for currently published operations. --- ## Optional helpers - **[/llms.txt](/llms.txt)** - lightweight page index for small context windows - **[/llms-full.txt](/llms-full.txt)** - extended index with more content --- ## Rules for coding agents 1. Check required environment variables before trying write actions. 2. Prefer the fastest path to an agent you can test over broad upfront scaffolding. 3. Use the CLI first for setup, verification, and repeatable workflows. 4. Use `llms-full.txt` before scraping rendered docs pages. 5. Reach for `openapi.json` only when exact request or response shapes are necessary. 6. Do not add extra setup the user did not ask for. 7. Do not expose secret keys or put them in client-side code. 8. Explain what was created in plain language after setup completes. --- ### Samples URL: https://docs.archagents.com/docs/samples Summary: End-to-end playbooks that combine the portal, Agent Network, and CLI into realistic developer workflows. 
## Overview This page is a set of complete playbooks, not a pile of isolated commands. These samples are CLI-first. If you want the shortest CLI setup path, start with [Getting Started](/docs/getting-started). Each sample walks through a full product workflow: - what you are building - what to set up (CLI, coding agent, or portal) - what you run in the CLI - what you should expect to see The CLI still does most of the work. The portal and Agent Network show up where the product actually expects them. Jump to: [Sample 1](#sample-1-create-one-working-support-agent) | [Sample 2](#sample-2-move-the-setup-into-reviewable-config) | [Sample 3](#sample-3-run-a-scheduled-workflow-with-a-script-in-the-middle) | [Sample 4](#sample-4-test-a-notification-flow-in-a-sandbox) | [Sample 5](#sample-5-coordinate-a-rollout-across-two-companies) | [Sample 6](#sample-6-debug-a-cross-company-integration-by-impersonating-the-support-agent) | [Sample 7](#sample-7-deploy-a-real-agent-from-a-template) --- ## How to use these samples A few practical notes before you start: 1. Every `create` command returns an ID. Save it before you move to the next step. 2. If you want to script the sequence, add `--json` and capture `.id` with `jq`. Example: ```bash agent_id=$(archastro --json create agent -n "Support Agent" -k support-agent \ -i "You help users solve billing and support problems clearly." | jq -r '.id') ``` If you prefer to work more manually, you can also run `archastro describe ...` or `archastro list ...` after each step and copy the ID you need. 
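Notes 1 and 2 combine into a capture-and-chain pattern: create an object, save its ID, feed it into the next command. The sketch below stubs the CLI calls (the ids and payloads are invented) so the control flow is runnable anywhere; substitute real `archastro --json create ...` calls in practice:

```shell
# Stubs standing in for `archastro --json create ...`; each prints a
# JSON payload with an id, like the real --json output described above.
create_agent()  { printf '{"id": "agent_1"}\n'; }
create_thread() { printf '{"id": "thread_9", "owner": "%s"}\n' "$1"; }

# Same capture as `| jq -r '.id'`, written with sed for portability.
extract_id() { sed -n 's/.*"id": *"\([^"]*\)".*/\1/p'; }

# Step 1: create, capture the id.
agent_id=$(create_agent | extract_id)
# Step 2: feed that id into the next create, exactly as Sample 1 does.
thread_id=$(create_thread "$agent_id" | extract_id)
echo "$agent_id -> $thread_id"
```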
These flags appear several times below: - `--skip-welcome-message` keeps the thread creation step quiet so your test begins with the message you send on purpose - `--wait` keeps the CLI attached long enough to show the result of the message or action you just triggered - `--json` is a global CLI flag, so these examples place it before the verb: `archastro --json create ...` --- ## Sample 1: Create one working support agent ### What you are building A single agent inside one company that can answer one test request, then pick up a routine for automatic follow-up behavior. This is the smallest slice of ArchAstro that still shows the full loop: - one project - one agent - one live session - one thread - one incoming message ### Prerequisites 1. The CLI is installed and authenticated (`archastro auth login`). 2. A project is linked (`archastro init`). ### Run in the CLI ```bash archastro auth login archastro init agent_id=$(archastro --json create agent -n "Support Agent" -k support-agent \ -i "You help users solve billing and support problems clearly." | jq -r '.id') session_id=$(archastro --json create agentsession --agent "$agent_id" \ --instructions "Answer support questions clearly, ask one clarifying question if needed, and summarize the next action." | jq -r '.id') archastro exec agentsession "$session_id" \ -m "A customer says their invoice failed and wants to know what to try next." archastro describe agentsession "$session_id" --follow user_id=$(archastro --json create user --system-user -n "Support Test User" | jq -r '.id') thread_id=$(archastro --json create thread -t "Support test thread" \ --owner-type agent --owner-id "$agent_id" --skip-welcome-message | jq -r '.id') archastro create threadmember --thread "$thread_id" --user-id "$user_id" archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \ -c "Can you help me figure out why my invoice keeps failing?" 
--wait routine_id=$(archastro --json create agentroutine --agent "$agent_id" \ -n "billing-triage" \ -e message.created \ -t script \ --script "{ handled: true }" | jq -r '.id') archastro activate agentroutine "$routine_id" ``` Here `-k support-agent` gives the agent a stable lookup key you can search for and reuse later. `--system-user` creates a bot-style non-login user for testing or automation. Use clear names for these identities so they are easy to recognize in thread history and operational review, and do not use them as a shortcut around the approvals or human checks your deployment expects. If you need that identity to call APIs directly later, create a dedicated system-user token for it and treat that token like any other service credential: name it, track it, and revoke it when the workflow is done. ### What to check - the session replies like an agent, not just a saved object - the thread now has a test conversation in it - the routine is active and ready to react to future thread events ### What this confirms - agents keep their own identity over time - sessions are the quickest way to prove the agent can think and respond - threads and messages are where that behavior shows up in the product - routines are the bridge from one-off testing to ongoing behavior --- ## Sample 2: Move the setup into reviewable config ### What you are building The same agent setup, but moved into project config so the team can review, sync, and redeploy it instead of recreating it by hand. This is where you move from exploration to something the team can keep in source control. ### Prerequisites Use the same project from Sample 1. 
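One way to make the review step concrete is to keep `configs/` under git, so each `configs sync` shows up as a reviewable diff. The sketch below uses a throwaway repo and stands in for the sync steps with plain file writes; the file name and contents are invented for the demo:

```shell
# Throwaway repo standing in for the project; the two file writes stand
# in for an initial `archastro configs sync` and a later one.
repo=$(mktemp -d)
cd "$repo"
git init -q
mkdir configs
echo 'kind: AgentTemplate' > configs/support-agent.yaml   # first sync
git add configs
git -c user.name=demo -c user.email=demo@example.com commit -qm "sync: baseline"
echo 'name: Support Agent' >> configs/support-agent.yaml  # a later sync changes the file
git diff --name-only -- configs                           # the reviewable delta
```

The `git diff` output is what a reviewer looks at before anyone runs `archastro configs deploy`.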
### Run in the CLI ```bash archastro configs init archastro configs kinds archastro configs sample agent archastro configs sync archastro configs deploy ``` ### What to check - a local `configs/` directory in the project - a pulled-down view of the config objects the project knows about - a clean `configs deploy` path for reviewable changes ### What this confirms - the CLI is not just for one-off object creation - ArchAstro has a config layer for repeatable setup - once a pattern works, move it out of ad hoc commands and into tracked config Good next links: - [CLI](/docs/cli) - [Agents](/docs/agents) - [Workflows](/docs/workflows) - [Configs](/docs/configs) --- ## Sample 3: Run a scheduled workflow with a script in the middle ### What you are building A project-wide job that runs on a schedule, calls a workflow, and uses a script node for the company-specific logic in the middle. This is the right pattern when the work belongs to the project, not to one named agent. ### Prerequisites Create a workflow config (use `archastro configs sample workflow` as a starting point) with a script node for the custom logic. Deploy it with `archastro configs deploy` and note the workflow config ID. 
The three pieces: - **workflow** = the visible process - **script** = the custom logic inside that process - **automation** = the schedule or trigger that starts it ### Run in the CLI ```bash automation_id=$(archastro --json create automation \ -n "Daily support summary" \ -t scheduled \ --schedule "0 9 * * 1-5" \ --config-id | jq -r '.id') archastro activate automation "$automation_id" archastro list automations archastro describe automation "$automation_id" archastro list automationruns --automation "$automation_id" ``` ### What to check - one named automation attached to your workflow config - an active project-wide job in the automation list - run history you can inspect after the schedule fires ### What this confirms - routines are for one agent's behavior - automations are for project-wide jobs - workflows and scripts become more useful when something repeatable starts them Good next links: - [Automations](/docs/automations) - [Workflows](/docs/workflows) - [Scripts](/docs/scripts) --- ## Sample 4: Test a notification flow in a sandbox ### What you are building A notification or email flow you can trigger safely without touching production users or production mail. This is the right place to test the parts of your app that need production-like behavior before they touch production. ### Prerequisites Deploy a workflow or automation that sends a notification. The sandbox will capture emails instead of delivering them, so you can test the full flow safely. 
### Run in the CLI ```bash sandbox_id=$(archastro --json create sandbox -n "Notification Test" -s notification-test | jq -r '.id') archastro activate sandbox archastro list sandboxes archastro describe sandbox "$sandbox_id" user_id=$(archastro --json create user --system-user -n "Sandbox Notification User" | jq -r '.id') thread_id=$(archastro --json create thread -t "Sandbox notification test" \ --user "$user_id" --skip-welcome-message | jq -r '.id') archastro create threadmember --thread "$thread_id" --user-id "$user_id" archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \ -c "Trigger the sandbox notification path." --wait archastro list sandboxmails --sandbox "$sandbox_id" ``` ### What to check - the CLI is operating in the sandbox context after `archastro activate sandbox` - the thread and message exist inside the test boundary - captured email appears in `sandboxmails` instead of touching production ### What this confirms - sandboxes are not a toy environment; they are where realistic testing becomes believable - the same CLI loop still works, but the boundary changes - notification flows are much easier to trust once you can inspect captured output safely Good next links: - [Sandboxes](/docs/sandboxes) - [Portal](/docs/portal) --- ## Sample 5: Coordinate a rollout across two companies ### What you are building A shared rollout room between two companies: each side keeps its own private agents, users, and knowledge, but both sides collaborate through one shared team and one shared thread. This is the Agent Network story in practical form. > Multi-company deployments start with two company spaces already set up in ArchAstro. The steps here begin once those company boundaries exist and the shared rollout work is ready to start. > If you want to enable this setup, work with the ArchAstro team first at hi@archastro.ai. ### Prerequisites 1. Both company spaces are provisioned (contact hi@archastro.ai for multi-company setup). 2. 
A shared team and shared thread exist for the rollout. 3. Each side has decided which agents and people participate. Each company keeps its private space. The shared team and thread are the only crossing point. ### Run in the CLI ```bash archastro list teams archastro describe team archastro list threads archastro describe thread archastro list threadmembers --thread operator_id=$(archastro --json create user --system-user -n "Rollout Operator" | jq -r '.id') archastro create threadmember --thread --user-id "$operator_id" archastro create threadmessage --thread --user-id "$operator_id" \ -c "Company A completed staging validation. Company B can start the rollout window review." --wait archastro list threadmessages --thread --full ``` ### What to check - one shared team and one shared thread you can inspect directly - one shared conversation that both companies can use without flattening everything into one tenant - visible participants and message history in the shared layer ### What this confirms - Agent Network is not abstract architecture; it becomes a concrete collaboration room - the collaboration surface is intentionally small - CLI still matters in cross-company work because it lets you inspect, join, and operate the shared thread directly Good next links: - [Agent Network](/docs/agent-network) - [Agent Network Getting Started](/docs/agent-network-getting-started) - [Organizations](/docs/organizations) --- ## Sample 6: Debug a cross-company integration by impersonating the support agent ### What you are building A realistic debugging loop where an engineer explicitly approved by Company A to work in its support app uses a shared rollout thread plus Company A's support agent to diagnose a broken `acme-billing-webhooks` integration. 
This is the kind of flow that makes ArchAstro feel different: - the companies stay separate - the rollout thread is shared - the support agent keeps its own private tools, skills, and knowledge - the developer can still debug from the same attached surface the live agent uses ### Prerequisites 1. A shared rollout team and thread exist (from Sample 5). 2. Company A's support agent is a participant in the shared thread. 3. Company A has granted operator access to their ArchAstro app for this rollout. 4. Troubleshooting knowledge is connected to the support agent. 5. The relevant skill and tool are linked to the agent. ### Run in the CLI ```bash archastro describe thread archastro list threadmembers --thread archastro impersonate start archastro impersonate status archastro impersonate list tools archastro impersonate list skills archastro list contextsources archastro list contextingestions --status failed archastro impersonate run tool search --input '{"query":"acme billing webhooks retry validation"}' archastro create threadmessage --thread --user-id \ -c "Search results point to webhook retry validation as the likely blocker. Please confirm the retry path before the rollout window." 
--wait archastro impersonate stop ``` ### What to check - the shared thread clearly shows who is collaborating - impersonation reflects the support agent's attached skills and tools - the search result comes from Company A's approved troubleshooting corpus - the thread gets a concrete next step instead of vague back-and-forth ### What this confirms - Agent Network is not just shared chat; it supports debugging work across company lines - impersonation connects the live agent surface to the local coding/debugging loop only after the owning company has deliberately authorized that workflow - knowledge, tools, and cross-company collaboration all meet in one operational flow Good next links: - [Impersonation](/docs/impersonation) - [Knowledge](/docs/knowledge) - [Tools](/docs/tools) - [Agent Network](/docs/agent-network) --- ## Sample 7: Deploy a real agent from a template ### What you are building A production-ready agent deployed from a single YAML file. This is the recommended workflow once you understand the basic model from Samples 1-2. One file defines everything: identity, tools, routines, and installations. One command deploys it. One test proves it works. ### Write the agent template Create `configs/agents/security-reviewer.yaml`: ```yaml kind: AgentTemplate key: security-reviewer name: Security Reviewer identity: | You are a security code reviewer for our engineering team. When asked to review code, check for: - hardcoded secrets or credentials - SQL injection or command injection risks - missing input validation - overly permissive access controls Be specific about file paths and line numbers. Suggest fixes, not just problems. 
tools: - kind: builtin builtin_tool_key: search status: active - kind: builtin builtin_tool_key: knowledge_search status: active - kind: builtin builtin_tool_key: integrations status: active routines: - name: Respond in conversations description: Join threads and respond to messages handler_type: preset preset_name: participate event_type: thread.session.join event_config: thread.session.join: {} status: active - name: Memory extraction (opt-in) description: Extracts and stores key facts after conversations when this routine is enabled handler_type: preset preset_name: auto_memory_capture event_type: thread.session.leave event_config: thread.session.leave: subject_is_agent: true status: active installations: - kind: memory/long-term config: {} - kind: archastro/thread config: {} ``` ### Validate and deploy ```bash archastro configs validate --kind AgentTemplate --file configs/agents/security-reviewer.yaml archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer" ``` One command creates the agent with all tools, routines, and installations provisioned. ### Test it ```bash # Quick direct test session_id=$(archastro --json create agentsession --agent \ --instructions "Review code for security issues." | jq -r '.id') archastro exec agentsession "$session_id" \ -m "Review this function: def login(user, password): query = f'SELECT * FROM users WHERE name={user}'" ``` ### Test in a real conversation ```bash thread_id=$(archastro --json create thread -t "Security review" \ --owner-type agent --owner-id --skip-welcome-message | jq -r '.id') user_id=$(archastro --json create user --system-user -n "Engineer" | jq -r '.id') archastro create threadmember --thread "$thread_id" --user-id "$user_id" archastro create threadmessage --thread "$thread_id" --user-id "$user_id" \ -c "Can you review our auth module for SQL injection risks?" 
--wait ``` ### Test in a sandbox first For production agents, deploy to a sandbox before going live: ```bash # Switch to sandbox, deploy, and test archastro activate sandbox staging archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer" # test in sandbox... # When ready, switch back to production and deploy archastro activate sandbox # (select production from the interactive prompt, or deactivate the sandbox) archastro deploy agent configs/agents/security-reviewer.yaml --name "Security Reviewer" ``` ### What to check - Agent responds with specific, actionable security feedback - Agent cites file paths and line numbers when reviewing code - Memory extraction routine (opt-in) stores key facts between conversations when enabled - The same YAML file deploys identically to sandbox and production --- ## Notes for coding agents - Treat docs URLs as canonical. - Prefer API reference and setup docs for implementation details. - Ask for missing environment variables before destructive operations.
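Rule 7 above ("do not expose secret keys or put them in client-side code") can be backed by a simple pre-commit style scan. The `sk_` key prefix below is an assumption made for the demo, not a documented ArchAstro key format:

```shell
# Hypothetical guard for rule 7. The sk_ prefix is an assumed key
# format for this sketch, not documented ArchAstro behavior.
scan_for_keys() {
  # Flag anything that looks like a secret key in files meant for clients.
  grep -n 'sk_[A-Za-z0-9_]\{8,\}' "$@" && return 1 || return 0
}

f=$(mktemp)
echo 'const key = "sk_live_abcdef123456";' > "$f"   # a planted leak
if ! scan_for_keys "$f"; then
  echo "refusing: possible secret key in client-side code"
fi
rm -f "$f"
```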