`Predict` takes a signature and actually calls the LM. It's the bridge between your type definitions and real LLM inference. Under the hood, it uses an adapter to format prompts and parse responses.
Basic usage
`::<QA>` tells Rust which signature you're using. The macro generates `QAInput` from your `#[input]` fields.
Creating predictors
Simple
With instruction override
With demos (few-shot)
`Example<S>` — typed input/output pairs. They become few-shot examples in the prompt.
With tools
Calling predictors
`.call()` returns `Result<Predicted<Output>, PredictError>`.
`Predicted<O>` wraps the output with call metadata and implements `Deref<Target = O>`, so you access output fields directly:
Accessing metadata
For token usage, raw response text, or per-field parse details, use `.metadata()`:
CallMetadata fields
| Field | Type | Description |
|---|---|---|
| `raw_response` | `String` | Raw LLM response text |
| `lm_usage` | `LmUsage` | Token counts |
| `tool_calls` | `Vec<ToolCall>` | Tool calls the LM requested |
| `tool_executions` | `Vec<String>` | Results from tool execution |
| `node_id` | `Option<usize>` | Trace node ID if tracing |
| `field_meta` | `IndexMap<String, FieldMeta>` | Per-field raw text, flags, constraint results |
Error handling
Predict implements Module
`Predict<S>` implements the `Module` trait with typed associated types:
Multiple predictors in a pipeline
Prompting strategies
Instead of manually adding fields for chain-of-thought reasoning, use library modules that augment any signature:

- `ChainOfThought<S>` — adds a `reasoning` field, accessible via `result.reasoning`
- `ReAct<S>` — adds tool calling with an action/observation loop
