lm.chat
The lm.chat tool sends a full OpenAI-compatible chat-completions request to a language model via OpenRouter and returns the model's response. It supports native function-calling (tool use), making it the right choice for ReACT-style agents that need the model to select and invoke Scarab tools directly.
Capability Required
tool.invoke:lm.chat
Input Schema
{
"type": "object",
"required": ["messages"],
"properties": {
"messages": {
"type": "array",
"description": "Conversation history in OpenAI message format (role + content)."
},
"model": {
"type": "string",
"description": "OpenRouter model ID. Routed via ModelRouter if spec.model_policy is set."
},
"tools": {
"type": "array",
"description": "OpenAI function-calling tool definitions."
},
"tool_choice": {
"type": "string",
"description": "OpenAI tool_choice value: \"auto\", \"none\", or a specific function."
}
}
}
Output Schema
When finish_reason is "stop"
{
"finish_reason": "stop",
"text": "<model's final text response>",
"input_tokens": 1234,
"output_tokens": 56,
"cost": 0.00042
}
When finish_reason is "tool_calls"
{
"finish_reason": "tool_calls",
"tool_calls": [
{
"id": "<call-id>",
"name": "<tool-name>",
"arguments": { ... }
}
],
"input_tokens": 1234,
"output_tokens": 56,
"cost": 0.00042
}
Examples
Basic chat
#![allow(unused)] fn main() { let response = agent.invoke_tool("lm.chat", json!({ "model": "anthropic/claude-sonnet-4-6", "messages": [ { "role": "system", "content": "You are a helpful assistant." }, { "role": "user", "content": "What is 2 + 2?" } ] })).await?; println!("{}", response["text"]); // "4" }
With function calling (ReACT loop)
#![allow(unused)] fn main() { // Build the function-calling manifest from the agent's live tool list. let available_tools = agent.list_tools().await?; let tools = agents::tools_to_openai_json(&available_tools); let response = agent.invoke_tool("lm.chat", json!({ "model": "anthropic/claude-sonnet-4-6", "messages": messages, "tools": tools, "tool_choice": "auto" })).await?; if response["finish_reason"] == "tool_calls" { for tc in response["tool_calls"].as_array().unwrap() { let tool_name = tc["name"].as_str().unwrap(); let args = tc["arguments"].clone(); let result = agent.invoke_tool(tool_name, args).await?; // append result to messages … } } }
Tool Name Sanitisation
Some LLM providers (e.g. Amazon Bedrock via OpenRouter) reject tool names containing dots. The agents crate provides helpers to sanitise names for the LLM and map them back to canonical Scarab tool names:
#![allow(unused)] fn main() { use agents::{tools_to_openai_json, build_llm_name_map}; let available_tools = agent.list_tools().await?; let tools = tools_to_openai_json(&available_tools); // dots → underscores let llm_name_map = build_llm_name_map(&available_tools); // reverse lookup // When dispatching: map LLM name back before calling agentd. let canonical = llm_name_map.get("web_search").cloned().unwrap_or_else(|| "web_search".into()); agent.invoke_tool(&canonical, args).await?; }
Model Selection
Model selection follows the same priority order as lm.complete:
modelfield in the tool inputspec.model_policyrouting (ifspec.model_policy≠explicit): selects cheapest, fastest, or most capable model within the remaining cost budgetSCARAB_MODELenvironment variable- Fallback:
anthropic/claude-haiku-4-5
See Model Routing for details on spec.model_policy.
Prompt Injection Defence
When the calling agent is marked as tainted (it has invoked an Input-category tool), lm.chat automatically applies the injection_policy declared in the agent's manifest:
| Policy | Effect |
|---|---|
none | No protection. Suitable for fully-trusted agents on internal data. |
delimiter_only (default) | The last user message is wrapped in <external_content>…</external_content> tags and a taint notice is injected into the system message. |
dual_validate | Same as delimiter_only, plus a secondary classifier LLM call (configurable via SCARAB_CLASSIFIER_MODEL) rejects content classified as UNSAFE before the primary call is made. Recommended for untrusted/sandboxed agents consuming external data. |
Policy defaults by trust level when spec.injection_policy is not explicitly set:
| Trust level | Default policy |
|---|---|
untrusted, sandboxed | dual_validate |
trusted | delimiter_only |
privileged | none |
API Key
Requires OPENROUTER_API_KEY in agentd's environment:
export OPENROUTER_API_KEY=sk-or-...
Network Policy
Requires spec.network.policy: full or allowlist with openrouter.ai:443.
Reference Agent
See crates/agents/src/bin/react_agent.rs for a complete ReACT-loop implementation using lm.chat, including dynamic tool discovery, history trimming, and large-result condensation.