Recipe generation API with support for multiple LLM providers (OpenAI, Ollama).
## Quick Start

- Create `.env.docker`:

```env
# LLM Providers (at least one required)
OPENAI_API_KEY=YOUR_OPENAI_KEY
# Ollama (optional)
OLLAMA_ENDPOINT=http://127.0.0.1:11434
OLLAMA_MODEL=phi4
OLLAMA_CLASSIFIER_MODEL=phi4-mini
# Database
MONGO_DB_URL=mongodb://127.0.0.1:27018
MONGO_DB_NAME=restofrigo
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=change_me
DB_NAME=restofrigo
DB_PORT=27017
DB_USER=user
DB_PASSWORD=change_me
DB_ADMIN=admin
DB_ADMIN_PASSWORD=change_me
```

- Update `docker-compose.yml` to fit your configuration
- Start the application:

```bash
docker compose up -d
```

- Set up the database (see Database Setup below)
- Create API keys (see API Key Management below)
## Database Setup

Initialize MongoDB indexes before first use:

```bash
cd server
deno task setup
```
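As a point of reference, the constraints documented below (one API key per email, 30-day cache TTL) imply indexes along these lines. This is a sketch only; the collection and field names are assumptions, not the actual schema:

```ts
// Sketch only: collection and field names are assumptions.
// Uses the standard MongoDB driver, which runs under Deno via npm specifiers.
import { MongoClient } from "npm:mongodb";

const client = new MongoClient(
  Deno.env.get("MONGO_DB_URL") ?? "mongodb://127.0.0.1:27018",
);
await client.connect();
const db = client.db(Deno.env.get("MONGO_DB_NAME") ?? "restofrigo");

// One API key per email -> unique index (hypothetical "users" collection).
await db.collection("users").createIndex({ email: 1 }, { unique: true });

// Cached recipes expire after 30 days -> TTL index (hypothetical "cache" collection).
await db.collection("cache").createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 30 * 24 * 60 * 60 },
);

await client.close();
```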
## API Key Management

Create an API key:

```bash
cd server
deno task create --email=user@example.com --tokens=100
```

Parameters:
- `--email`: User email (unique identifier)
- `--tokens`: Daily token limit (number of recipe generations allowed per day)
Output:

```json
{
"success": true,
"apiKey": "48-character-hex-string"
}
```

Important:
- Each email can only have one API key
- API keys are 48-character hexadecimal strings
- Tokens reset automatically every 24 hours
- Keys must be created manually (no public registration endpoint)
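Taken together, the stored record behind each key plausibly looks like the following shape. This is a sketch; the field names are assumptions, and only the constraints listed above are documented:

```ts
// Hypothetical shape of a stored API-key record; field names are assumptions.
interface ApiKeyRecord {
  email: string; // unique identifier (one key per email)
  apiKey: string; // 48-character hexadecimal string
  dailyTokens: number; // daily limit passed via --tokens
  tokensUsed: number; // consumed since the last reset
  lastReset: Date; // tokens roll over 24 hours after this timestamp
}
```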
## Token System

Tokens are consumed on each recipe generation request:
- 1 token = 1 recipe generation (classification + generation)
- Cached responses do NOT consume tokens
- Tokens reset to the `dailyTokens` limit every 24 hours from the last reset (see the sketch below)
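A minimal sketch of these rules, using the hypothetical record shape above (illustrative only, not the server's actual implementation):

```ts
const DAY_MS = 24 * 60 * 60 * 1000;

// Illustrative: applies the documented consumption/reset rules to a record.
function consumeToken(
  rec: { dailyTokens: number; tokensUsed: number; lastReset: Date },
): boolean {
  const now = Date.now();
  // Reset to the full dailyTokens allowance 24 hours after the last reset.
  if (now - rec.lastReset.getTime() >= DAY_MS) {
    rec.tokensUsed = 0;
    rec.lastReset = new Date(now);
  }
  if (rec.tokensUsed >= rec.dailyTokens) return false; // daily limit reached
  rec.tokensUsed += 1; // 1 token = 1 recipe generation
  return true;
}
```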
Users can check their current token usage:
```bash
curl -X GET http://localhost:9992/usage \
-H "Authorization: YOUR_API_KEY"Response:
{
"success": true,
"usage": "95/100",
"nextReset": "in 18 hours 42 minutes"
}curl -X POST http://localhost:9992/ \
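The same check from TypeScript, for example under Deno (only the endpoint and header are documented; the rest is a sketch):

```ts
// Query current token usage; response fields match the example above.
const res = await fetch("http://localhost:9992/usage", {
  headers: { Authorization: "YOUR_API_KEY" },
});
const { usage, nextReset } = await res.json();
console.log(`Usage: ${usage}, next reset ${nextReset}`);
```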
-H "Authorization: YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"threadId": "session123",
"prompt": "Homemade vanilla ice cream recipe",
"revision": 0,
"provider": "ollama"
}'Request Parameters:
- `threadId` (string, required): Conversation identifier
- `prompt` (string, required): Recipe request (max 300 chars)
- `revision` (number, required): Conversation version
- `provider` (string, optional): LLM provider (`"openai"` or `"ollama"`). Defaults to OpenAI if available.
Response Fields:
- `prompt`: The original prompt
- `threadId`: The conversation identifier
- `recipeMD`: Recipe in markdown format
- `recipeJSON`: Parsed recipe as a JSON object
- `revision`: Conversation version
- `cached`: Whether the response came from cache
- `provider`: LLM provider used (`"openai"` or `"ollama"`)
- `model`: Model name used (e.g., `"gpt-4.1"`, `"phi4"`)
- `success`: Request success status
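Modeled as TypeScript types, derived directly from the field lists above (a convenience sketch, not an official SDK):

```ts
// Derived from the documented parameters/fields; not an official client type.
interface RecipeRequest {
  threadId: string; // conversation identifier
  prompt: string; // max 300 chars
  revision: number; // conversation version
  provider?: "openai" | "ollama"; // defaults to OpenAI if available
}

interface RecipeResponse {
  prompt: string;
  threadId: string;
  recipeMD: string; // recipe in markdown
  recipeJSON: unknown; // parsed recipe object
  revision: number;
  cached: boolean;
  provider: "openai" | "ollama";
  model: string; // e.g. "gpt-4.1", "phi4"
  success: boolean;
}
```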
Response:

```json
{
"prompt": "Homemade vanilla ice cream recipe",
"threadId": "session123",
"recipeMD": "# Vanilla Ice Cream\n\n...",
"recipeJSON": {...},
"revision": 0,
"cached": false,
"provider": "ollama",
"model": "phi4",
"success": true
}
```

## LLM Providers

The API supports multiple LLM providers. At least one provider must be configured.
- **OpenAI:** Set `OPENAI_API_KEY` in your environment.
- **Ollama:** Set `OLLAMA_ENDPOINT` and optionally `OLLAMA_MODEL` (defaults to `phi4`).

Specify the provider in the request body with the `provider` field.
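For example, selecting Ollama from a TypeScript client (a sketch built on the documented request parameters):

```ts
// Generate a recipe via Ollama; body fields follow the documented parameters.
const res = await fetch("http://localhost:9992/", {
  method: "POST",
  headers: {
    Authorization: "YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    threadId: "session123",
    prompt: "Homemade vanilla ice cream recipe",
    revision: 0,
    provider: "ollama",
  }),
});
const recipe = await res.json();
console.log(recipe.model, recipe.cached);
```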
## Caching

The API implements per-user, per-conversation caching:

Cache Key: `{promptHash, threadId, revision}`

- `promptHash`: SHA-1 hash of the prompt content
- `threadId`: Unique conversation identifier (includes the user token for isolation)
- `revision`: Conversation version number
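A sketch of how such a key could be derived with the Web Crypto API (the server's exact serialization is not documented; this is illustrative):

```ts
// Illustrative cache-key derivation: SHA-1 of the prompt plus the other two parts.
async function cacheKey(
  prompt: string,
  threadId: string,
  revision: number,
): Promise<string> {
  const digest = await crypto.subtle.digest(
    "SHA-1",
    new TextEncoder().encode(prompt),
  );
  const promptHash = Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
  return `${promptHash}:${threadId}:${revision}`;
}
```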
Behavior:
- Cache hits return instantly and do NOT consume tokens
- Each user's cache is isolated
- Same prompt in different conversations creates separate cache entries
- Cached recipes expire after 30 days (TTL)
Example: User A asking "pasta recipe" in conversation 1 gets cached separately from:
- User A asking "pasta recipe" in conversation 2 (different threadId)
- User B asking "pasta recipe" (different token)
## Development Commands

From the `server/` directory:

```bash
# Start the server (development mode with watch)
deno task start
# Setup database indexes
deno task setup
# Create new API key
deno task create --email=user@example.com --tokens=100
```