- Prompt Hub: A central store for managing prompts. This is an especially useful source of truth for your prompts when you are collaborating with a team to iterate on them.
- Playground: A sandbox for testing and experimenting with prompts and models, helping you refine their behavior before deploying.
Develop and iterate on prompts in the Playground
Prompt development is an iterative, experimental process. Using the Playground, you can:
- Test with different inputs and compare the results of different prompts and LLMs side-by-side.
- Refine your prompt using the embedded Prompt Canvas, which uses an LLM to help improve it.
- Prototype tools, set structured outputs, and test how they interact within the Playground.
- Run evaluations over your prompt in the Playground. Share experiments with teammates to get feedback and collaboratively optimize performance.
Manage prompts across environments
When managing prompts across your dev/staging/production environments, you want to balance speed of iteration across a team with the need for stability. We recommend using the Prompt Hub along with commit tags to manage prompts across dev/testing/staging environments and a separate workflow for managing prompts in production.
Update prompts in dev/testing/staging environments
After testing in the Playground, you’ll want to see how the prompt behaves within the context of your application. A recommended workflow is:
- Set up your dev/testing/staging environments to pull prompts from the Prompt Hub using defined commit tags (e.g., `dev`, `staging`) for each environment (see the example after this list). This creates a dynamic reference to the prompt version you want to use in a particular environment.
- After saving a new version of your prompt, move the commit tag for the desired environment to the new version of the prompt. Your application will automatically pull the new version on its next request to the Prompt Hub.
- Test the prompt across environments.
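For example, assuming a prompt named `my-prompt` exists in your workspace, a minimal sketch of this pull-by-tag pattern with the LangSmith Python SDK might look like the following (the prompt name, tag values, and `PROMPT_ENV_TAG` variable are illustrative):

```python
import os

from langsmith import Client

client = Client()  # reads the LangSmith API key from the environment

# The commit tag ("dev", "staging", etc.) selects which prompt version this
# environment pulls; moving the tag in the Prompt Hub changes what is served
# here without a code change or redeploy.
env_tag = os.environ.get("PROMPT_ENV_TAG", "dev")

# Pull the prompt version currently behind that tag.
prompt = client.pull_prompt(f"my-prompt:{env_tag}")
print(prompt)
```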
Update prompts in production
It is not recommended to put an API call to the Prompt Hub in the hot path of your application on every request. Instead, we recommend either pulling the prompt into your code directly or fetching the prompt once and reusing it. When a prompt version is ready for production, it can have the `prod` tag added to it, which can then be used as a trigger to pull the latest version of the prompt into your production environment.
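A minimal sketch of the fetch-once-and-reuse pattern (the prompt name is hypothetical): load the `prod`-tagged prompt once per process and keep it in memory instead of calling the Prompt Hub on every request.

```python
from functools import lru_cache

from langsmith import Client

client = Client()


@lru_cache(maxsize=1)
def get_prod_prompt():
    # Fetched once per process and reused by every request handler; pick up a
    # newly tagged prod version by redeploying or clearing this cache.
    return client.pull_prompt("my-prompt:prod")


# Request handlers call get_prod_prompt() without hitting the Prompt Hub again.
prompt = get_prod_prompt()
```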
Integrate prompt evaluations into CI/CD. Prompt webhooks can be used to trigger CI when a prompt has a new commit or when a commit tag is moved. Use pytest or jest/vitest to run LangSmith evaluations as part of your test suite, blocking promotion if quality thresholds are not met.
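As a sketch of such a gate (the dataset name, evaluator, threshold, and target stub are all illustrative; replace the stub with your real prompt-plus-model call), a pytest test might fail the pipeline when the average evaluation score drops below a threshold:

```python
from langsmith import evaluate

PASS_THRESHOLD = 0.8  # hypothetical quality bar


def run_prompt(inputs: dict) -> dict:
    # Stub target: replace with pulling your prod-tagged prompt and invoking
    # your model on the example inputs.
    return {"answer": inputs.get("question", "")}


def exact_match(outputs: dict, reference_outputs: dict) -> bool:
    # Illustrative evaluator; swap in whatever quality checks you rely on.
    return outputs.get("answer") == reference_outputs.get("answer")


def test_prompt_meets_quality_bar():
    results = evaluate(
        run_prompt,
        data="my-eval-dataset",  # assumed LangSmith dataset name
        evaluators=[exact_match],
        experiment_prefix="ci-prompt-gate",
    )
    scores = [
        row["evaluation_results"]["results"][0].score for row in results
    ]
    assert sum(scores) / len(scores) >= PASS_THRESHOLD
```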
Sync prompts with your code
A common approach is to sync prompts from the Prompt Hub with your source code, so they can be version-controlled and deployed alongside your application code. The workflow would look like:
- A LangSmith user moves the `prod` tag to a new prompt commit.
- The webhook triggers a CI/CD pipeline (e.g., GitHub Actions, GitLab CI) that pulls the commit with the `prod` tag into your repository.
- The pipeline commits the prompt change to your repo (a sketch of the pull step follows this list).
Store prompts in a database or cache prompts in your application
If you want your prompts versioned independently of your application code, you can sync them to a database or cache them in your application. This allows you to update or A/B test prompts without redeploying your application.
- Database: Store prompts in a database as a form of dynamic configuration, and update them with a CI/CD process when a webhook is triggered.
- Cache: Cache prompts in the application in memory, and update them by invalidating the cache on webhook receipt (a sketch follows this list).
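As a sketch of the cache-plus-invalidation option (the framework, endpoint path, and prompt name are assumptions, and the webhook payload is not inspected here), a small Flask app could keep the prompt in memory and drop it when the webhook arrives:

```python
from flask import Flask
from langsmith import Client

app = Flask(__name__)
client = Client()

_cached_prompt = None


def get_prompt():
    """Return the cached prompt, fetching it from the Prompt Hub if needed."""
    global _cached_prompt
    if _cached_prompt is None:
        _cached_prompt = client.pull_prompt("my-prompt:prod")
    return _cached_prompt


@app.post("/webhooks/prompt-updated")
def prompt_updated():
    # Invalidate the cache; the next call to get_prompt() pulls the new version.
    global _cached_prompt
    _cached_prompt = None
    return {"status": "ok"}
```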