DSRs lets you call language models with typed Rust structs: define your inputs and outputs as a struct, and the library handles prompt formatting and response parsing. This guide walks you through building your first typed LM pipeline. Call init_tracing() once at startup, as the examples below do.
1. Install DSRs

Add to your Cargo.toml:
[dependencies]
dspy-rs = "0.7"
tokio = { version = "1", features = ["full"] }
anyhow = "1"
Or via cargo:
cargo add dspy-rs anyhow
cargo add tokio --features full
2. Configure the LM

Tell DSRs which model to use. This sets a global default that all predictors will use:
use dspy_rs::{configure, init_tracing, ChatAdapter, LM};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    init_tracing()?;

    configure(
        LM::builder()
            .model("openai:gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,  // handles prompt formatting and response parsing
    );

    Ok(())
}
Set OPENAI_API_KEY in your environment. For other providers, use the appropriate prefix (e.g., anthropic:claude-3-haiku).
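For example, in your shell (placeholder value; substitute your real key):

```shell
# Placeholder value; replace with your actual OpenAI API key.
export OPENAI_API_KEY="sk-your-key-here"
```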
3. Define a signature

A signature declares your task’s inputs and outputs:
use dspy_rs::Signature;

/// Answer questions accurately and concisely.
#[derive(Signature, Clone, Debug)]
struct QA {
    /// The question to answer
    #[input]
    question: String,

    /// A clear, direct answer
    #[output]
    answer: String,
}
The doc comments become:
  • Struct docstring → instruction for the LM
  • Field docstrings → field descriptions in the prompt
4. Call the LM

Create a predictor and call it:
use dspy_rs::Predict;

let predict = Predict::<QA>::new();

// QAInput is auto-generated from your #[input] fields
let output = predict.call(QAInput {
    question: "What is the capital of France?".into(),
}).await?;

println!("Answer: {}", output.answer);
The #[derive(Signature)] macro generates QAInput from your #[input] fields. You get back a QA struct with both input and output fields filled in; output.answer is a typed String.
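Conceptually, the derive produces plain structs for each side of the signature. A hand-written approximation (illustrative only; the macro's real expansion may differ):

```rust
// Hand-written approximation of the types #[derive(Signature)] produces
// for QA. Illustrative only, not the macro's actual output.
#[derive(Clone, Debug)]
pub struct QAInput {
    pub question: String,
}

#[derive(Clone, Debug)]
pub struct QAOutput {
    pub answer: String,
}
```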

Complete example

use dspy_rs::{configure, init_tracing, ChatAdapter, LM, Predict, Signature};

/// Answer questions accurately and concisely.
#[derive(Signature, Clone, Debug)]
struct QA {
    #[input]
    question: String,

    #[output]
    answer: String,
}

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    init_tracing()?;

    configure(
        LM::builder()
            .model("openai:gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,
    );

    let predict = Predict::<QA>::new();

    let output = predict.call(QAInput {
        question: "What is the capital of France?".into(),
    }).await?;

    println!("Answer: {}", output.answer);

    Ok(())
}

Next steps

Adding complexity

Input formatting and rendering

Use #[format("json" | "yaml" | "toon")] for serialization, or #[render(jinja = "...")] for custom field text. See the full attribute reference in Signatures and runtime behavior in Adapter.

Custom types

When you need more than primitives, add #[BamlType]:
use dspy_rs::{Signature, BamlType};

#[BamlType]
#[derive(Clone, Debug)]
enum Sentiment {
    Positive,
    Negative,
    Neutral,
}

/// Analyze the sentiment of the text.
#[derive(Signature, Clone, Debug)]
struct SentimentAnalysis {
    #[input]
    text: String,

    #[output]
    sentiment: Sentiment,

    #[output]
    confidence: f64,
}
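Because the output is a real Rust enum, downstream code can match on it exhaustively. A minimal, self-contained sketch (the enum is repeated here without #[BamlType] so it compiles on its own):

```rust
// Same variants as the Sentiment enum above; #[BamlType] omitted so this
// sketch stands alone.
#[derive(Clone, Debug, PartialEq)]
enum Sentiment {
    Positive,
    Negative,
    Neutral,
}

// Exhaustive match: the compiler flags any unhandled variant.
fn describe(s: &Sentiment) -> &'static str {
    match s {
        Sentiment::Positive => "positive",
        Sentiment::Negative => "negative",
        Sentiment::Neutral => "neutral",
    }
}
```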

Few-shot demos

Add examples to guide the LM:
use dspy_rs::Example;

let predict = Predict::<QA>::builder()
    .demo(Example::<QA>::new(
        QAInput { question: "What is 2+2?".into() },
        QAOutput { answer: "4".into() },
    ))
    .build();

Constraints

Validate outputs with #[check] and #[assert]:
#[derive(Signature, Clone, Debug)]
struct Rating {
    #[input]
    text: String,

    #[output]
    #[assert("this >= 1 && this <= 5")]
    score: i32,
}
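The #[assert] predicate above is an expression over the parsed value (this). As plain Rust, the score check is equivalent to:

```rust
// Plain-Rust equivalent of the predicate "this >= 1 && this <= 5".
fn score_in_range(score: i32) -> bool {
    (1..=5).contains(&score)
}
```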

Multi-step pipelines

Chain predictors for complex workflows:
// Assumes `Summarize` and `Rate` signatures defined with #[derive(Signature)],
// with `text`/`summary` fields matching the calls below.
struct SummarizeAndRate {
    summarizer: Predict<Summarize>,
    rater: Predict<Rate>,
}

impl SummarizeAndRate {
    async fn run(&self, text: String) -> anyhow::Result<RateOutput> {
        let summary = self.summarizer.call(SummarizeInput { text }).await?;
        let rating = self.rater.call(RateInput {
            summary: summary.summary,
        }).await?;
        Ok(rating.into_inner())
    }
}
See Modules for how to make this optimizer-compatible.