The Language Model (LM) struct is a configurable client for calling LLM providers, with built-in caching and history tracking. This page explains what an LM is in DSRs, how it’s structured in Rust terms, and how it cooperates with other building blocks.

What is an LM?

The LM struct is a thin wrapper around OpenAI-compatible API clients, with built-in support for multiple providers. It handles three core responsibilities:
  1. Configuration - Stores provider credentials, model selection, and inference parameters (e.g., temperature)
  2. API Execution - Takes pre-formatted Chat messages and executes HTTP calls to the LLM provider
  3. Response Caching - Optionally stores input/output pairs to avoid duplicate API calls

Structure

LM is built using the builder pattern and holds:
  • model - Model identifier (e.g., “gpt-4o-mini” or “openai:gpt-4o-mini”)
  • api_key - Provider API credentials (optional for local servers)
  • base_url - API endpoint URL (optional, inferred from model provider)
  • temperature - Sampling temperature (default: 0.7)
  • max_tokens - Maximum completion tokens (default: 512)
  • cache - Enable response caching (default: true)
  • client - Internal HTTP client (initialized during build)
  • cache_handler - Optional response cache (initialized during build if enabled)
Cloning an LM is cheap: clones share the same HTTP client and cache via Arc (and therefore see the same history), while each clone carries its own copy of the configuration fields, making them ideal for concurrent use.
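As a minimal sketch of what that enables, the snippet below fans work out across Tokio tasks, giving each task its own clone (the task body is a placeholder; in practice each task would drive a predictor backed by its clone):

use dspy_rs::LM;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let lm = LM::builder()
        .model("gpt-4o-mini".to_string())
        .build()
        .await?;

    // Each spawned task gets its own clone; all clones share the
    // underlying HTTP client and cache via Arc.
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let lm = lm.clone();
            tokio::spawn(async move {
                // Placeholder: hand `lm` to a predictor or other async work here.
                let _ = (&lm, i);
            })
        })
        .collect();

    for handle in handles {
        handle.await?;
    }
    Ok(())
}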

Where it fits

You rarely call LM directly; it's the lowest-level DSRs primitive. Instead, a Predictor uses an Adapter to format a Signature and call the LM. This keeps business logic (your task) separate from transport (the model client).

Construction and configuration

LM::builder() returns a builder that must be finalized with .build().await, because client initialization is async.

Basic usage

use dspy_rs::LM;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // OpenAI - API key automatically read from OPENAI_API_KEY env var
    let lm = LM::builder()
        .model("gpt-4o-mini".to_string())
        .temperature(0.7)
        .max_tokens(512)
        .build()
        .await?;

    // Or explicitly provide API key
    let lm = LM::builder()
        .model("gpt-4o-mini".to_string())
        .api_key("your-api-key".into())
        .build()
        .await?;

    Ok(())
}

Local server usage

For local OpenAI-compatible servers (vLLM, Ollama, etc.), provide base_url without an api_key:
let lm = LM::builder()
    .base_url("http://localhost:11434".to_string())
    .model("llama3".to_string())
    .build()
    .await?;

Custom OpenAI-compatible endpoints

For custom endpoints requiring authentication, provide both base_url and api_key:
let lm = LM::builder()
    .base_url("https://my-custom-api.com/v1".to_string())
    .api_key(my_api_key.into())
    .model("custom-model".to_string())
    .build()
    .await?;

API Reference

You can browse the full LM module reference on docs.rs.

Global vs explicit usage

  • Global: configure(lm, ChatAdapter) sets the process-wide default used by predictors.
  • Explicit: Wrap the model in an Arc when you want to override the global instance: let shared = Arc::new(lm); predictor.forward_with_config(inputs, Arc::clone(&shared)).await.
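A rough sketch of both styles, mirroring the calls in the bullets above. Here predictor and inputs are placeholders for a module and its typed inputs built elsewhere, and configure and ChatAdapter are assumed to be exported at the crate root like LM:

use std::sync::Arc;
use dspy_rs::{configure, ChatAdapter, LM};

// Global: set the process-wide default used by all predictors.
let default_lm = LM::builder()
    .model("gpt-4o-mini".to_string())
    .build()
    .await?;
configure(default_lm, ChatAdapter);

// Explicit: override the global for one call by passing an Arc-wrapped LM.
let override_lm = Arc::new(
    LM::builder()
        .model("gpt-4o".to_string())
        .build()
        .await?,
);
let prediction = predictor
    .forward_with_config(inputs, Arc::clone(&override_lm))
    .await?;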

Async execution and sync entry

  • Async: LM building and calls are async; prefer using an async runtime (Tokio).
  • Sync-style: If you need a plain fn main, create a runtime and block_on the async work (see the sketch after the async example below).
Async (Tokio):
#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let lm = LM::builder()
        .model("gpt-4o-mini".to_string())
        .build()
        .await?;
    Ok(())
}
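Sync: a minimal sketch that creates a Tokio runtime by hand and blocks on the async build; error handling relies on anyhow just like the async example above.

use dspy_rs::LM;

fn main() -> anyhow::Result<()> {
    // Build the runtime manually instead of using #[tokio::main].
    let runtime = tokio::runtime::Runtime::new()?;

    // Block on the async builder; the resulting LM can then be passed
    // into any async work driven via runtime.block_on(...).
    let lm = runtime.block_on(
        LM::builder()
            .model("gpt-4o-mini".to_string())
            .build(),
    )?;
    let _ = lm;
    Ok(())
}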

Inspecting history

// Pull recent calls from the LM's recorded history (requires caching; see note below)
let history = lm.inspect_history(3).await;
for entry in history {
    println!("Prompt: {}", entry.prompt);
    println!("Prediction: {:?}", entry.prediction.data);
}
inspect_history requires caching to be enabled on the LM; otherwise no history is recorded.

Configuration options

All LM builder parameters have sensible defaults, so you only need to set the ones you want to change.
  • model (String, default: “openai:gpt-4o-mini”) - Supports “provider:model” format or a bare model name (defaults to OpenAI)
  • api_key (Option<String>, default: None) - Provider API key; omit for local servers
  • base_url (Option<String>, default: None) - Custom endpoint URL; auto-detected from the model provider if not provided
  • temperature (f32, default: 0.7) - Higher values increase randomness
  • max_tokens (u32, default: 512) - Upper bound on completion tokens
  • cache (bool, default: true) - Enables response caching and inspect_history support

Example with custom settings

// API key automatically read from ANTHROPIC_API_KEY env var
let lm = LM::builder()
    .model("anthropic:claude-3-5-sonnet-20241022".to_string())
    .temperature(0.3)
    .max_tokens(1_024)
    .cache(true)
    .build()
    .await?;

Provider Support

DSRs supports multiple LLM providers through Rig. Use the provider:model format to specify which provider to use. Bare model names default to OpenAI. Supported providers:
  • openai - OpenAI models (requires OPENAI_API_KEY)
  • anthropic - Anthropic models (requires ANTHROPIC_API_KEY)
  • gemini - Google Gemini models (requires GEMINI_API_KEY)
  • groq - Groq models (requires GROQ_API_KEY)
  • openrouter - OpenRouter (requires OPENROUTER_API_KEY)
  • ollama - Local Ollama models (no API key required)
API keys are automatically read from environment variables. You only need to provide .api_key() if you want to override the default environment variable. You can also use base_url to connect to any OpenAI-compatible server (vLLM, LiteLLM, etc.).

Usage examples

// Anthropic - reads from ANTHROPIC_API_KEY env var
let lm = LM::builder()
    .model("anthropic:claude-3-5-sonnet-20241022".to_string())
    .build()
    .await?;

// Google Gemini - reads from GEMINI_API_KEY env var
let lm = LM::builder()
    .model("gemini:gemini-2.0-flash-exp".to_string())
    .build()
    .await?;

// Groq - reads from GROQ_API_KEY env var
let lm = LM::builder()
    .model("groq:mixtral-8x7b-32768".to_string())
    .build()
    .await?;

// OpenAI (or just use model name directly) - reads from OPENAI_API_KEY env var
let lm = LM::builder()
    .model("gpt-4o".to_string())  // defaults to OpenAI
    .build()
    .await?;

// Ollama (local, no API key needed)
let lm = LM::builder()
    .model("ollama:llama3".to_string())
    .build()
    .await?;

// OpenRouter - reads from OPENROUTER_API_KEY env var
let lm = LM::builder()
    .model("openrouter:anthropic/claude-3-opus".to_string())
    .build()
    .await?;
All provider integrations are powered by Rig, which handles the provider-specific API details.