This guide walks you through setting up your Rust project and building your first LM pipeline in DSRs. The entire process takes a few minutes and requires only basic Rust knowledge. Building the pipeline involves the following steps:
  1. Install DSRs
  2. Set up your language model
  3. Define a signature
  4. Create a module
  5. Execute the pipeline and see the results
1. Install DSRs

You can add DSRs to your project just like any other Rust crate, using either of the following two methods.

Option 1: Add via Cargo command
cargo add dspy-rs anyhow serde serde_json tokio
This creates an alias dsrs for the dspy-rs crate, which is the intended way to use it.

Option 2: Add to Cargo.toml
[dependencies]
anyhow = "1.0.99"
dspy-rs = "0.7.0"
serde = "1.0.221"
serde_json = "1.0.145"
tokio = "1.47.1"
We need to install DSRs using the name dspy-rs for now, because dsrs is an already-published crate. This may change in the future if the dsrs crate name is donated back or becomes available.
2. Set up your language model

The first step in DSRs is to configure your language model (LM). DSRs supports any LM available through the async-openai crate. You can define your LM configuration using the builder pattern, as shown below.

Once the LM instance is created, pass it to the configure function along with a chat adapter to set the global LM and adapter for your application. ChatAdapter is the default adapter in DSRs; it converts the instructions and structure from your signature (defined in the next step) into a prompt the LM can follow to complete the task.
The example below configures an OpenAI model; the same pattern applies to other providers supported through async-openai, such as Ollama.
use dspy_rs::{configure, ChatAdapter, LM};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // API key automatically read from OPENAI_API_KEY env var
    configure(
        LM::builder()
            .model("gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,
    );

    Ok(())
}
3. Define the task via a signature

A signature is a declarative definition of the inputs and outputs of your task. In defining a signature, you define a schema for your LM call.

You can create your signature in DSRs in one of two ways: with an inline macro or with an attribute macro. The inline macro is more concise and easier to get started with, while the attribute macro gives you more control over the signature. Let us define a signature using the inline macro:
let qa = sign! {
    (question: String) -> answer: String
};
The input fields are to the left of the -> arrow, and the output fields are to the right. Multiple fields can be comma-separated, e.g., (question: String, context: String) -> answer: String.
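For instance, a question-answering signature that also accepts a retrieved context could be sketched as follows, using the same sign! syntax (the rag_qa name and context field are illustrative):
// Sketch: a two-input signature; the answer should be grounded in the context
let rag_qa = sign! {
    (question: String, context: String) -> answer: String
};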
4. Create a Module

Modules in DSRs are the building blocks of your application. They encapsulate the logic for a specific task and can be composed together to create complex workflows.

Predict is the simplest module in DSRs. It takes a signature and, given input data, orchestrates the LM call to produce a prediction.
let predictor = Predict::new(qa);
5. Execute the Pipeline

LM calls in DSRs are asynchronous and return futures, so we need the tokio runtime to execute a function that uses a predictor. Putting the pieces together:
use dspy_rs::{
    ChatAdapter, Example, LM, Predict, Predictor, Signature, configure, hashmap, sign,
};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {

    // The signature we defined above
    let qa = sign! {
        (question: String) -> answer: String
    };

    // API key automatically read from OPENAI_API_KEY env var
    configure(
        LM::builder()
            .model("gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,
    );
    // Create a predictor from the signature defined above
    let predictor = Predict::new(qa);
    // Define the question
    let question = "What is gravity?";
    // Create an example input to the predictor
    let inputs = Example::new(
        hashmap! {
            "question".to_string() => question.to_string().into()
        },
        vec!["question".to_string()],
        vec!["answer".to_string()],
    );

    let result = predictor.forward(inputs).await?;
    println!("Answer: {:?}", result.get("answer", None).as_str().unwrap());
    Ok(())
}
The predictor takes an Example as input: a mapping from field names to values, plus lists naming which fields are inputs and which are outputs. The output will be similar to the following:
Answer: "Gravity is a fundamental force of nature that causes objects
with mass to attract each other. It is responsible for the Earth's pull that
keeps us grounded, the orbits of planets around the sun, and the overall
structure and behavior of the universe. According to Einstein's theory of
general relativity, gravity is the curvature of spacetime
caused by mass and energy."
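The same Example constructor extends to signatures with multiple input fields, such as the two-input signature sketched earlier. A minimal sketch (the context string here is just a stand-in):
// Sketch: inputs for a (question, context) -> answer signature
let inputs = Example::new(
    hashmap! {
        "question".to_string() => "What is gravity?".to_string().into(),
        "context".to_string() => "An excerpt from a physics text.".to_string().into()
    },
    vec!["question".to_string(), "context".to_string()],
    vec!["answer".to_string()],
);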

Going Further

Now that you have built your first pipeline using DSRs, you can explore some of the more advanced features of the library.

Create Signatures with Attribute Macros

If you want finer-grained control over the signature, you can define it using an attribute macro on a struct. In the example below, notice how
  1. we use the attribute macro #[Signature] to indicate that the underlying struct is a signature, and
  2. the doc comment specifically asks the LM to answer the question like a pirate.
/// Answer the question like a pirate.
#[Signature]
struct QASignature {
    #[input(desc = "Question to be answered.")]
    pub question: String,

    #[output]
    pub answer: String,
}
You can annotate each field with #[input] and #[output] attributes, which is useful when a signature has multiple input or output fields and when you want to attach a description to each field. The advantage of this approach is that you can use doc comments at the top of the struct to encode important domain information or give specific instructions to the LM.
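For example, a signature with two annotated input fields might look like the following sketch (the ContextQA struct, its field names, and its descriptions are illustrative, not part of the library):
// Sketch: a two-input signature defined with the attribute macro
/// Answer the question using only the provided context.
#[Signature]
struct ContextQA {
    #[input(desc = "Question to be answered.")]
    pub question: String,

    #[input(desc = "Passage the answer must be grounded in.")]
    pub context: String,

    #[output]
    pub answer: String,
}
Let us now use the QASignature defined above in our pipeline to answer questions like a pirate.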
#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // API key automatically read from OPENAI_API_KEY env var
    configure(
        LM::builder()
            .model("gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,
    );
    // Create a pirate question-answering signature instance
    let signature = QASignature::new();
    // Create a predictor
    let predictor = Predict::new(signature);

    // Define the question
    let question = "What is gravity?";

    // Create an example input to the predictor
    let inputs = Example::new(
        hashmap! {
            "question".to_string() => question.to_string().into()
        },
        vec!["question".to_string()],
        vec!["answer".to_string()],
    );

    let result = predictor.forward(inputs).await?;
    println!("Answer: {:?}", result.get("answer", None).as_str().unwrap());
    Ok(())
}
Running this pipeline might generate output like the following:
Arrr, gravity be the mighty force of the seas that keeps all the ships and
treasures close to the earth, matey! It be what pulls ye down when ye be
walkin' the plank or standin' on deck, keepin' everything from floatin'
off into the vast blue. Aye, 'tis the invisible hand of the universe,
guidin' all in its watery grip!
Notice that all it takes to change the pipeline's behavior is changing the signature used by the predictor.

Build Modules for Complex Pipelines

Predict is the simplest module in DSRs. You can also build more complex modules that encapsulate your own logic. Let us examine how to do that.
use dspy_rs::{
    ChatAdapter, Example, LM, Module, Predict, Prediction,
    Predictor, Signature, configure, hashmap,
};

struct AnswerQuestion {
    inner: Predict,
}

impl AnswerQuestion {
    fn new() -> Self {
        Self {
            inner: Predict::new(QASignature::new()),
        }
    }
}

impl Module for AnswerQuestion {
    async fn forward(&self, inputs: Example) -> anyhow::Result<Prediction> {
        self.inner.forward(inputs).await
    }
}
You can implement the Module trait for your own struct, composing one or more predictors with your own arbitrary logic. In this case, the AnswerQuestion module wraps a predictor that uses the QASignature defined earlier. The forward method is async because it wraps a predictor call. We can define the main function to use this module as follows:
  1. Create a new instance of the module.
  2. Call the forward method on the module with the example inputs.
#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // API key automatically read from OPENAI_API_KEY env var
    configure(
        LM::builder()
            .model("gpt-4o-mini".to_string())
            .build()
            .await?,
        ChatAdapter,
    );

    let module = AnswerQuestion::new();

    // Define the question
    let question = "What is gravity?";

    // Create an example input to the predictor
    let inputs = Example::new(
        hashmap! {
            "question".to_string() => question.to_string().into()
        },
        vec!["question".to_string()],
        vec!["answer".to_string()],
    );

    let result = module.forward(inputs).await?;
    println!("Answer: {:?}", result.get("answer", None).as_str().unwrap());
    Ok(())
}
This produces a similar result to the previous example with the attribute macro-based signature.
Arrr, matey! Gravity be the mighty force that keeps ye anchored to the deck
and the planets holdin' their orbits, pullin' all things toward the center of
the Earth or any other celestial body. 'Tis like an invisible hand, guidin'
the stars and the seas alike!
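Composing multiple predictors inside one module follows the same pattern. Below is a minimal sketch, assuming the same imports and APIs used above; the DraftThenPolish module, its two signatures, and their field names are illustrative and not part of the library:
struct DraftThenPolish {
    draft: Predict,
    polish: Predict,
}

impl DraftThenPolish {
    fn new() -> Self {
        Self {
            // First predictor drafts an answer; the second polishes the draft
            draft: Predict::new(sign! { (question: String) -> draft: String }),
            polish: Predict::new(sign! { (draft: String) -> answer: String }),
        }
    }
}

impl Module for DraftThenPolish {
    async fn forward(&self, inputs: Example) -> anyhow::Result<Prediction> {
        // Step 1: draft an answer from the question
        let drafted = self.draft.forward(inputs).await?;

        // Step 2: feed the draft to the polishing predictor
        let polish_inputs = Example::new(
            hashmap! {
                "draft".to_string() =>
                    drafted.get("draft", None).as_str().unwrap().to_string().into()
            },
            vec!["draft".to_string()],
            vec!["answer".to_string()],
        );
        self.polish.forward(polish_inputs).await
    }
}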
DSRs supports many more features, including an Optimizable trait that lets optimizers improve your modules. Continue exploring the rest of the documentation to learn more!