DEV Community

shubhanshu for Exemplar Dev

Posted on • Edited on • Originally published at github.com

Exemplar Prompt Hub

🧠 API for Managing AI Prompts

I developed Exemplar Prompt Hub to streamline prompt management for my AI applications in production. It centralizes prompt storage, versioning, tagging, and retrieval behind a simple REST API, making it a good fit for chatbots, RAG systems, and prompt engineering workflows.


πŸš€ Core Features

  • RESTful API for prompt CRUD operations
  • Version control for prompt evolution
  • Tag-based organization and metadata support
  • Powerful search and filtering capabilities
  • Prompt Playground

πŸ› οΈ Quick Start

Prerequisites:

  • Python 3.8+
  • PostgreSQL
  • FastAPI

Clone the repo and follow the README for setup.


πŸ“¦ Example: Create a Greeting Prompt Template

curl -X POST "http://localhost:8000/api/v1/prompts/" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "greeting-template",
    "text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
    "description": "A greeting template with dynamic variables",
    "meta": {
      "template_variables": ["name", "platform", "role"],
      "author": "test-user"
    },
    "tags": ["template", "greeting"]
  }'
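The same request can be made from Python with `requests` — a sketch of the curl call above, assuming the server from the Quick Start is running on `localhost:8000`:

```python
import requests

# Payload matching the curl example above
payload = {
    "name": "greeting-template",
    "text": "Hello {{ name }}! Welcome to {{ platform }}. Your role is {{ role }}.",
    "description": "A greeting template with dynamic variables",
    "meta": {
        "template_variables": ["name", "platform", "role"],
        "author": "test-user",
    },
    "tags": ["template", "greeting"],
}

def create_prompt(base_url, payload):
    """POST the prompt to the hub and return the created record as JSON."""
    resp = requests.post(f"{base_url}/api/v1/prompts/", json=payload)
    resp.raise_for_status()
    return resp.json()

# create_prompt("http://localhost:8000", payload)
```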

🧩 Rendering Prompts with Jinja2 and Using Them with an LLM (OpenAI)

Fetch the prompt by name or ID, then render it dynamically by injecting variables:

import requests
from jinja2 import Template
from openai import OpenAI

client = OpenAI(api_key="<your-api-key>")

# Fetch the prompt template by name
response = requests.get(
    "http://localhost:8000/api/v1/prompts/",
    params={"skip": 0, "limit": 1, "search": "greeting-template"},
)
prompt_data = response.json()

# Build a Jinja2 template from the stored prompt text
template = Template(prompt_data[0]["text"])

# Render with variables (extras like `department` are silently ignored)
rendered_prompt = template.render(
    name="John",
    platform="Exemplar Prompt Hub",
    role="Developer",
    department="Engineering",
)
print("\nRendered Prompt:")
print(rendered_prompt)

# Send the rendered prompt to OpenAI's chat completions API
completion = client.chat.completions.create(
    model="o1-mini",
    messages=[{"role": "user", "content": rendered_prompt}],
)

print("\nGenerated Response:")
print(completion.choices[0].message.content)

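One thing to watch: the list endpoint returns a JSON array, so an empty search result makes `prompt_data[0]` raise an `IndexError`. A small guard (my own hypothetical helper, not part of the API) makes that failure explicit:

```python
def first_prompt_text(results):
    """Return the text of the first matched prompt, or fail loudly."""
    if not results:
        raise ValueError("no prompt matched the search query")
    return results[0]["text"]

# Example using the list shape returned by GET /api/v1/prompts/
sample = [{"name": "greeting-template", "text": "Hello {{ name }}!"}]
print(first_prompt_text(sample))  # Hello {{ name }}!
```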

Output

Rendered Prompt:
Hello John! Welcome to Exemplar Prompt Hub. Your role is Developer.

Generated Response:
Hello! Thank you for the warm welcome. I'm John, the Developer at Exemplar Prompt Hub. I'm here to help you with any development needs or questions you might have. How can I assist you today?
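Note that Jinja2's default behavior cuts both ways: the extra `department` variable in the script above is silently ignored, but a *missing* variable also renders silently as an empty string. If you would rather fail fast on incomplete variables, Jinja2's `StrictUndefined` (standard Jinja2, not specific to this project) raises an error instead:

```python
from jinja2 import Environment, StrictUndefined, UndefinedError

# An environment that raises on any undefined template variable
env = Environment(undefined=StrictUndefined)
template = env.from_string("Hello {{ name }}! Your role is {{ role }}.")

print(template.render(name="John", role="Developer"))

try:
    template.render(name="John")  # `role` is missing
except UndefinedError as e:
    print("Template error:", e)
```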

Refer to the full example here!

This enables seamless integration of your prompt management service with downstream AI models.

Try the Playground API via OpenRouter.ai.


πŸ” Use Cases

  • Chatbots with dynamic conversational prompts
  • Retrieval-Augmented Generation systems
  • Collaborative prompt engineering
  • Version-controlled prompt experimentation

For full details, visit the GitHub repository.

Happy Prompting!


Top comments (1)

Hayoung Lee(μ΄ν•˜μ˜)

Amazing job!