
Promptfoo

Test your prompts, agents, and RAGs. Red teaming, pentesting, and vulnerability scanning for LLMs.

Pricing: Free

What is Promptfoo?

Promptfoo is a tool for testing and evaluating Large Language Model (LLM) output quality. With Promptfoo, you can test your prompts, agents, and RAG pipelines; run red teaming, pentesting, and vulnerability scanning for LLMs; and compare the performance of GPT, Claude, Gemini, Llama, and more. It uses simple declarative configs with command-line and CI/CD integration.
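
For example, here is a minimal sketch of what a declarative config can look like (promptfooconfig.yaml is promptfoo's default config filename; the prompt, provider model IDs, and assertion values below are illustrative, not prescriptive):

    # promptfooconfig.yaml - compare two providers on the same test case
    prompts:
      - "Summarize the following text in one sentence: {{text}}"
    providers:
      - openai:gpt-4o-mini
      - anthropic:messages:claude-3-5-sonnet-20241022
    tests:
      - vars:
          text: "Promptfoo is an open-source tool for evaluating LLM outputs."
        assert:
          # Case-insensitive substring check on the model output
          - type: icontains
            value: promptfoo

Running promptfoo eval against a file like this grades every prompt/provider pair with the assert rules, so regressions show up as failed tests.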

Key Features:

There are many different ways to evaluate prompts. Here are some reasons to consider promptfoo:

  • Developer friendly: promptfoo is fast, with quality-of-life features like live reloads and caching.
  • Battle-tested: Originally built for LLM apps serving over 10 million users in production. Our tooling is flexible and can be adapted to many setups.
  • Simple, declarative test cases: Define evals without writing code or working with heavy notebooks.
  • Language agnostic: Use Python, JavaScript, or any other language.
  • Share & collaborate: Built-in share functionality & web viewer for working with teammates.
  • Open-source: LLM evals are a commodity and should be served by 100% open-source projects with no strings attached.
  • Private: This software runs completely locally. The evals run on your machine and talk directly with the LLM.
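
The introduction above mentions CI/CD integration. One hedged way to wire that up is a plain CI job that runs the eval on every pull request; the workflow below is a generic GitHub Actions sketch (the filename, Node version, and secret name are assumptions, and promptfoo also publishes a dedicated GitHub Action not shown here):

    # .github/workflows/evals.yml (illustrative filename)
    name: LLM evals
    on: [pull_request]
    jobs:
      eval:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: actions/setup-node@v4
            with:
              node-version: 20
          # A failing assertion makes the eval exit nonzero, failing the job
          - run: npx promptfoo@latest eval -c promptfooconfig.yaml
            env:
              OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}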

How to use Promptfoo:

  • Start by establishing a handful of test cases - core use cases and failure cases that you want to ensure your prompt can handle.
  • As you explore modifications to the prompt, use promptfoo eval to rate all outputs (see the commands after this list). This ensures the prompt is actually improving overall.
  • As you collect more examples and establish a user feedback loop, continue to build the pool of test cases.
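
That loop maps onto a few standard promptfoo commands, run here via npx (which assumes Node.js is installed):

    # Scaffold a starter config and example prompts in the current directory
    npx promptfoo@latest init

    # Run every prompt x provider x test combination in promptfooconfig.yaml
    npx promptfoo@latest eval

    # Open the local web viewer to compare outputs side by side
    npx promptfoo@latest view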

