💻 AI · March 5, 2026 · ✅ Tests passing

Prompt Optimization Helper

This CLI tool helps AI developers optimize and refine prompts for large language models (LLMs). It allows users to test variations of prompts, automatically benchmarking outputs based on defined quality metrics. This is useful for fine-tuning prompt phrasing for better AI responses in chatbot systems, content generation, or other LLM use cases.

What It Does

  • Test multiple prompt variations with a single command.
  • Automatically score outputs with a pluggable metric (the bundled default is response length, a simple proxy; swap in your own metric, e.g. relevance or conciseness).
  • Supports batch input for bulk prompt optimization.
  • Outputs results as a ranked and scored CSV file.

Installation

1. Clone the repository:

git clone <repository_url>
cd prompt_optimization_helper

2. Install dependencies:

pip install -r requirements.txt
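The contents of requirements.txt are not shown on this page; based on the script's imports, it presumably pins at least the OpenAI SDK and pandas, along the lines of:

```text
openai
pandas
```

If your requirements.txt differs, install whatever it lists; the script only imports these two third-party packages.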

Usage

python prompt_optimization_helper.py --base "Explain AI" --variations variations.txt --output results.csv --key <API_KEY>

Arguments

  • --base: The base prompt to use for testing variations.
  • --variations: Path to a text file containing prompt variations (one per line).
  • --output: Path to save the output CSV file.
  • --key: OpenAI API key.
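Passing an API key as a command-line flag leaks it into shell history. The script as shown requires --key, but a common pattern, sketched here as a possible modification (the `resolve_api_key` helper is hypothetical, not part of the tool), is to fall back to the `OPENAI_API_KEY` environment variable:

```python
import os

def resolve_api_key(cli_key=None):
    """Prefer an explicit --key value, else fall back to the
    OPENAI_API_KEY environment variable (a common convention)."""
    key = cli_key or os.environ.get("OPENAI_API_KEY")
    if not key:
        raise SystemExit("No API key: pass --key or set OPENAI_API_KEY.")
    return key

os.environ["OPENAI_API_KEY"] = "sk-demo"  # illustration only
print(resolve_api_key())          # falls back to the environment
print(resolve_api_key("sk-cli"))  # an explicit flag wins
```

With this in place, --key could be made optional in argparse (`required=False`).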

Example

1. Create a variations.txt file:

in simple terms
to a 5-year-old
for a technical audience

2. Run the tool:

python prompt_optimization_helper.py --base "Explain AI" --variations variations.txt --output results.csv --key <API_KEY>

3. Check the generated results.csv for the ranked and scored output.
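The results file is a plain CSV, so it can be inspected with pandas. A minimal sketch, using the column names the script produces (the rows and scores below are illustrative, not real output):

```python
import pandas as pd

# A frame shaped like the script's output (illustrative values).
df = pd.DataFrame({
    "Prompt Variation": ["in simple terms", "to a 5-year-old", "for a technical audience"],
    "Output": ["AI is...", "AI is like...", "Artificial intelligence refers to..."],
    "Score": [8, 13, 35],
})

# Rank the variations from highest to lowest score, as in the CSV.
ranked = df.sort_values("Score", ascending=False).reset_index(drop=True)
print(ranked.loc[0, "Prompt Variation"])  # best-scoring variation
```

In practice you would replace the hand-built frame with `pd.read_csv("results.csv")`.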

Source Code

import argparse
import os

import pandas as pd
from openai import OpenAI

def evaluate_prompts(base_prompt, variations, api_key):
    """
    Evaluate prompt variations using the OpenAI API and return scores.

    Args:
        base_prompt (str): The base prompt to evaluate.
        variations (list): A list of prompt variations.
        api_key (str): OpenAI API key.

    Returns:
        pd.DataFrame: Prompt variations and their scores, ranked by score.
    """
    client = OpenAI(api_key=api_key)

    results = []

    for variation in variations:
        prompt = f"{base_prompt} {variation}"
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # any current chat model works here
                messages=[{"role": "user", "content": prompt}],
                max_tokens=50,
            )
            output = response.choices[0].message.content.strip()

            # Example scoring: response length as a crude proxy for relevance.
            # Replace with your own metric (e.g., keyword overlap, an LLM judge).
            score = len(output)

            results.append({"Prompt Variation": variation, "Output": output, "Score": score})
        except Exception as e:
            # Record the error message instead of silently discarding it.
            results.append({"Prompt Variation": variation, "Output": f"Error: {e}", "Score": 0})

    # Rank variations from highest to lowest score.
    return pd.DataFrame(results).sort_values("Score", ascending=False).reset_index(drop=True)

def main():
    parser = argparse.ArgumentParser(
        description="Prompt Optimization Helper: Optimize and refine prompts for LLMs."
    )

    parser.add_argument(
        "--base", required=True, help="Base prompt to use for testing variations."
    )
    parser.add_argument(
        "--variations", required=True, help="Path to a text file containing prompt variations (one per line)."
    )
    parser.add_argument(
        "--output", required=True, help="Path to save the output CSV file."
    )
    parser.add_argument(
        "--key", required=True, help="OpenAI API key."
    )

    args = parser.parse_args()

    # Read variations from file
    if not os.path.exists(args.variations):
        print(f"Error: Variations file '{args.variations}' not found.")
        return

    with open(args.variations, "r", encoding="utf-8") as file:
        variations = [line.strip() for line in file if line.strip()]

    if not variations:
        print("Error: No variations found in the provided file.")
        return

    # Evaluate prompts
    try:
        results_df = evaluate_prompts(args.base, variations, args.key)
    except Exception as e:
        print(f"Error during prompt evaluation: {e}")
        return

    # Save results to CSV
    try:
        results_df.to_csv(args.output, index=False)
        print(f"Results saved to {args.output}")
    except Exception as e:
        print(f"Error saving results to CSV: {e}")

if __name__ == "__main__":
    main()
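The length-based score above is only a placeholder. A slightly more meaningful metric, sketched here under the assumption that you care about keyword coverage (`keyword_score` is a hypothetical helper, not part of the tool), scores each output by how many target terms it mentions:

```python
def keyword_score(output, keywords):
    """Fraction of target keywords appearing in the output (case-insensitive).

    A crude relevance proxy: 0.0 means no keywords matched, 1.0 means all did.
    """
    if not keywords:
        return 0.0
    text = output.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords)

print(keyword_score("AI systems learn patterns from data.", ["learn", "data", "robots"]))
```

To use it, replace the `score = len(output)` line in `evaluate_prompts` with a call to your metric of choice.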


Details

Tool Name
prompt_optimization_helper
Category
AI
Generated
March 5, 2026
Tests
Passing ✅

Quick Install

Clone just this tool:

git clone --depth 1 --filter=blob:none --sparse \
  https://github.com/ptulin/autoaiforge.git
cd autoaiforge
git sparse-checkout set generated_tools/2026-03-05/prompt_optimization_helper
cd generated_tools/2026-03-05/prompt_optimization_helper
pip install -r requirements.txt 2>/dev/null || true
python prompt_optimization_helper.py