How it Works
COPRO uses a straightforward approach:

- Generate candidates: Create multiple prompt variations using an LLM
- Evaluate: Test each candidate on your training data
- Refine: Use the best candidates to generate improved versions
- Repeat: Continue for a fixed number of depth iterations
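The four steps above can be sketched as a plain loop. This is a minimal, self-contained sketch, not the optimizer's real implementation: `propose` and `score` are toy stand-ins for the LLM-backed candidate generation and the training-set metric.

```rust
// Toy stand-in for LLM-backed candidate generation.
fn propose(seed: &str, breadth: usize) -> Vec<String> {
    (0..breadth)
        .map(|i| format!("{seed} [variant {i}]"))
        .collect()
}

// Toy stand-in for a metric evaluated over training data.
fn score(candidate: &str) -> f64 {
    candidate.len() as f64
}

/// Generate `breadth` candidates per round, keep the best,
/// and refine from it for `depth` rounds.
fn copro(seed: &str, breadth: usize, depth: usize) -> String {
    let mut best = seed.to_string();
    for _ in 0..depth {
        let round_best = propose(&best, breadth)
            .into_iter()
            .max_by(|a, b| score(a).partial_cmp(&score(b)).unwrap())
            .expect("breadth must be > 0");
        if score(&round_best) >= score(&best) {
            // Refine from this round's winner in the next iteration.
            best = round_best;
        }
    }
    best
}

fn main() {
    let optimized = copro("Answer the question.", 5, 3);
    println!("{optimized}");
}
```

The key design point is visible even in the toy version: each iteration's proposals are seeded from the previous winner, so the search exploits what already works rather than exploring from scratch.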
Configuration
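An illustrative configuration shape, assuming fields that mirror the parameters documented under Configuration Details below. The struct name and the defaults (which follow Python DSPy's COPRO: breadth 10, depth 3, temperature 1.4) are assumptions, not this crate's actual API.

```rust
// Hypothetical configuration struct; field names mirror the parameters
// described under "Configuration Details". Defaults follow Python DSPy's
// COPRO and may differ in this implementation.
#[derive(Debug, Clone)]
pub struct CoproConfig {
    pub breadth: usize,        // candidate prompts generated per iteration
    pub depth: usize,          // number of refinement iterations
    pub init_temperature: f64, // sampling temperature for prompt generation
    pub track_stats: bool,     // record per-candidate scores over time
}

impl Default for CoproConfig {
    fn default() -> Self {
        Self {
            breadth: 10,
            depth: 3,
            init_temperature: 1.4,
            track_stats: false,
        }
    }
}

fn main() {
    // Override only what you need; keep the rest at the defaults.
    let cfg = CoproConfig { breadth: 5, ..Default::default() };
    println!("{cfg:?}");
}
```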
Usage Example
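A hypothetical end-to-end shape for a run: build a training set, define a metric, and hand both to the optimizer. `Example`, `exact_match`, and `optimize_prompt` are illustrative stand-ins, not this crate's real API.

```rust
// (question, gold answer) pairs; a stand-in for a real training set.
type Example = (&'static str, &'static str);

/// Toy metric: 1.0 for an exact match, 0.0 otherwise.
fn exact_match(prediction: &str, gold: &str) -> f64 {
    if prediction.trim().eq_ignore_ascii_case(gold.trim()) { 1.0 } else { 0.0 }
}

/// Stand-in for the optimizer: scores on the train set and returns the
/// seed unchanged (a real run would search prompt variants via an LLM).
fn optimize_prompt(seed: &str, trainset: &[Example]) -> (String, f64) {
    let total: f64 = trainset
        .iter()
        .map(|(_, gold)| exact_match(gold, gold)) // placeholder "predictions"
        .sum();
    (seed.to_string(), total / trainset.len() as f64)
}

fn main() {
    let trainset: Vec<Example> = vec![
        ("What is the capital of France?", "Paris"),
        ("2 + 2?", "4"),
    ];
    let (prompt, score) = optimize_prompt("Answer concisely.", &trainset);
    println!("{prompt} -> {score}");
}
```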
When to Use COPRO
Best for:

- Quick iteration cycles
- Simple tasks
- Limited compute budget
- When you need results fast

Avoid when:
- You need best possible quality (use MIPROv2 or GEPA)
- Task has complex failure modes (use GEPA)
- You want to leverage prompting best practices (use MIPROv2)
Comparison with Other Optimizers
| Feature | COPRO | MIPROv2 | GEPA |
|---|---|---|---|
| Speed | Fast | Slow | Medium |
| Quality | Good | Better | Best |
| Feedback | Score | Score | Score + Text |
| Diversity | Low | Medium | High |
| Setup | Simple | Moderate | Moderate |
Configuration Details
Breadth
Number of candidate prompts generated at each iteration. Higher breadth means more exploration but more compute. Recommended: 5-15

Depth
Number of refinement iterations. Each iteration builds on the best candidates from the previous one. Recommended: 2-5

Temperature
Controls randomness in prompt generation. Higher temperature means more diverse candidates. Recommended: 1.0-1.5

Track Stats
When enabled, COPRO tracks detailed statistics about all evaluated candidates and their scores over time.

Implementation Notes
COPRO maintains:

- All evaluated candidates with their scores
- Best candidates from each iteration
- History of improvements over iterations
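The bookkeeping listed above can be sketched with a small record store. `CandidateRecord` and `CoproStats` are illustrative names, not the optimizer's real types.

```rust
// One scored candidate from a particular refinement round.
#[derive(Debug, Clone)]
struct CandidateRecord {
    prompt: String,
    score: f64,
    iteration: usize,
}

// Illustrative stats store: every evaluated candidate, queryable for the
// per-iteration winners and the improvement history.
#[derive(Debug, Default)]
struct CoproStats {
    evaluated: Vec<CandidateRecord>,
}

impl CoproStats {
    fn record(&mut self, prompt: &str, score: f64, iteration: usize) {
        self.evaluated.push(CandidateRecord {
            prompt: prompt.to_string(),
            score,
            iteration,
        });
    }

    /// Best candidate from a given iteration, if any were recorded.
    fn best_of(&self, iteration: usize) -> Option<&CandidateRecord> {
        self.evaluated
            .iter()
            .filter(|r| r.iteration == iteration)
            .max_by(|a, b| a.score.partial_cmp(&b.score).unwrap())
    }

    /// History of improvements: best score seen so far after each round.
    fn improvement_history(&self) -> Vec<f64> {
        let mut best = f64::NEG_INFINITY;
        let last = self.evaluated.iter().map(|r| r.iteration).max().unwrap_or(0);
        (0..=last)
            .map(|it| {
                for r in self.evaluated.iter().filter(|r| r.iteration == it) {
                    best = best.max(r.score);
                }
                best
            })
            .collect()
    }
}

fn main() {
    let mut stats = CoproStats::default();
    stats.record("v0", 0.4, 0);
    stats.record("v1", 0.6, 0);
    stats.record("v2", 0.5, 1);
    println!("{:?}", stats.improvement_history());
}
```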
Examples

See examples `03-evaluate-hotpotqa.rs` and `04-optimize-hotpotqa.rs`.