lm.complete
The lm.complete tool sends a prompt to a language model via OpenRouter and returns the completion text along with token usage and cost.
Capability Required
tool.invoke:lm.complete
Input Schema
{
"type": "object",
"required": ["prompt"],
"properties": {
"prompt": {
"type": "string",
"description": "The user message / prompt text."
},
"system": {
"type": "string",
"description": "Optional system message."
},
"model": {
"type": "string",
"description": "OpenRouter model ID. Overrides SCARAB_MODEL if set."
}
}
}
Output Schema
{
"type": "object",
"properties": {
"text": { "type": "string", "description": "The completion text." },
"input_tokens": { "type": "integer", "description": "Tokens in the prompt." },
"output_tokens": { "type": "integer", "description": "Tokens in the completion." },
"cost": { "type": "number", "description": "Approximate USD cost." }
}
}
Examples
#![allow(unused)] fn main() { // Basic completion let result = agent.invoke_tool("lm.complete", json!({ "prompt": "What is the capital of France?", "system": "Answer in one sentence.", "model": "anthropic/claude-opus-4-6" })).await?; println!("{}", result["text"]); // "The capital of France is Paris." println!("{}", result["output_tokens"]); // e.g. 8 println!("{}", result["cost"]); // e.g. 0.00024 }
ash tools invoke <agent-id> lm.complete '{"prompt": "Hello!", "model": "openai/gpt-4o-mini"}'
Model Selection
The model is selected in this priority order:
modelfield in the tool input (always respected regardless of policy)spec.model_policyrouting: if the policy ischeapest,fastest, ormost_capable, agentd'sModelRouterselects a model from the built-in registry within the agent's remaining cost budgetSCARAB_DEFAULT_MODELenvironment variable (when policy isexplicitand no model is provided)- Hardcoded fallback:
anthropic/claude-haiku-4-5
Configure routing via spec.model_policy in the manifest. See the Manifest Reference for details.
Prompt Injection Defence
When the calling agent is tainted (it has called an Input-category tool such as web.fetch or fs.read), lm.complete applies the injection_policy from the manifest:
| Policy | Effect |
|---|---|
none | No protection |
delimiter_only (default for trusted) | Prompt is wrapped in <external_content>…</external_content> and a taint notice is prepended to the system message |
dual_validate (default for untrusted/sandboxed) | Same as above, plus a secondary classifier LLM call checks the prompt for injection patterns before the primary call proceeds |
See lm.chat and Manifest Reference for configuration details.
API Key
The lm.complete tool requires OPENROUTER_API_KEY to be set in agentd's environment, or registered as a secret:
export OPENROUTER_API_KEY=sk-or-...
Cost
Estimated base cost: 1.0 (actual cost varies by model and token count). The cost field in the output reflects the actual charged cost from OpenRouter.
Network Policy
Requires spec.network.policy: full or allowlist with openrouter.ai:443.