# Rest-O-Frigo Backend

Backend for the Rest-O-Frigo mobile application: a recipe generation API with support for multiple LLM providers (OpenAI, Ollama).

## Setup (Docker)

1. Create `.env.docker`:

   ```env
   # LLM Providers (at least one required)
   OPENAI_API_KEY=YOUR_OPENAI_KEY

   # Ollama (optional)
   OLLAMA_ENDPOINT=http://127.0.0.1:11434
   OLLAMA_MODEL=phi4
   OLLAMA_CLASSIFIER_MODEL=phi4-mini

   # Database
   MONGO_DB_URL=mongodb://127.0.0.1:27018
   MONGO_DB_NAME=restofrigo
   MONGO_INITDB_ROOT_USERNAME=root
   MONGO_INITDB_ROOT_PASSWORD=change_me
   DB_NAME=restofrigo
   DB_PORT=27017
   DB_USER=user
   DB_PASSWORD=change_me
   DB_ADMIN=admin
   DB_ADMIN_PASSWORD=change_me
   ```

2. Update `docker-compose.yml` to fit your configuration.
3. Start the application: `docker compose up -d`
4. Set up the database (see Database Setup below).
5. Create API keys (see API Key Management below).

## Database Setup

Initialize MongoDB indexes before first use:

```sh
cd server
deno task setup
```
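For reference, the setup task creates MongoDB indexes along these lines. This is a minimal sketch, not the repository's actual code; the collection and field names (`users`, `cache`, `createdAt`) are assumptions:

```ts
// Hypothetical sketch of index creation; collection/field names are assumed.
import { MongoClient } from "npm:mongodb";

const client = new MongoClient(
  Deno.env.get("MONGO_DB_URL") ?? "mongodb://127.0.0.1:27018",
);
await client.connect();
const db = client.db(Deno.env.get("MONGO_DB_NAME") ?? "restofrigo");

// One API key per email (see API Key Management below).
await db.collection("users").createIndex({ email: 1 }, { unique: true });

// TTL index so cached recipes expire after 30 days (see Caching Strategy).
await db.collection("cache").createIndex(
  { createdAt: 1 },
  { expireAfterSeconds: 30 * 24 * 60 * 60 },
);

await client.close();
```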

## API Key Management

### Create New API Key

```sh
cd server
deno task create --email=user@example.com --tokens=100
```

Parameters:

- `--email`: User email (unique identifier)
- `--tokens`: Daily token limit (number of recipe generations allowed per day)

Output:

```json
{
  "success": true,
  "apiKey": "48-character-hex-string"
}
```

Important:

- Each email can only have one API key
- API keys are 48-character hexadecimal strings
- Tokens reset automatically every 24 hours
- Keys must be created manually (no public registration endpoint)
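A 48-character hex key corresponds to 24 random bytes. A generator could look like the following sketch (illustrating the format, not necessarily how the `create` task produces keys):

```ts
// Sketch: 24 cryptographically random bytes rendered as 48 hex characters.
function generateApiKey(): string {
  const bytes = crypto.getRandomValues(new Uint8Array(24));
  return Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");
}

console.log(generateApiKey()); // 48-character hexadecimal string
```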

## Token Management

Tokens are consumed on each recipe generation request:

- 1 token = 1 recipe generation (classification + generation)
- Cached responses do NOT consume tokens
- Tokens reset to the `dailyTokens` limit every 24 hours from the last reset
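These rules amount to roughly the following accounting. The `User` shape and its field names (`tokens`, `dailyTokens`, `lastReset`) are assumptions for illustration:

```ts
// Sketch of the token rules above; the User shape is an assumption.
interface User {
  tokens: number;      // tokens remaining in the current window
  dailyTokens: number; // daily limit set at key creation
  lastReset: Date;     // start of the current 24-hour window
}

const DAY_MS = 24 * 60 * 60 * 1000;

function consumeToken(user: User, cached: boolean, now = new Date()): boolean {
  // Reset to the daily limit 24 hours after the last reset.
  if (now.getTime() - user.lastReset.getTime() >= DAY_MS) {
    user.tokens = user.dailyTokens;
    user.lastReset = now;
  }
  if (cached) return true;            // cached responses are free
  if (user.tokens <= 0) return false; // daily quota exhausted
  user.tokens -= 1;                   // one generation costs one token
  return true;
}
```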

### Check Usage

Users can check their current token usage:

```sh
curl -X GET http://localhost:9992/usage \
  -H "Authorization: YOUR_API_KEY"
```

Response:

```json
{
  "success": true,
  "usage": "95/100",
  "nextReset": "in 18 hours 42 minutes"
}
```
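The `usage` and `nextReset` fields can be derived from the same assumed fields as in the token-accounting sketch above (again, a sketch rather than the server's actual code):

```ts
// Sketch: derive the /usage payload from the assumed User fields.
function usageReport(user: {
  tokens: number;
  dailyTokens: number;
  lastReset: Date;
}) {
  const msLeft = Math.max(
    0,
    user.lastReset.getTime() + 24 * 60 * 60 * 1000 - Date.now(),
  );
  const hours = Math.floor(msLeft / 3_600_000);
  const minutes = Math.floor((msLeft % 3_600_000) / 60_000);
  return {
    success: true,
    usage: `${user.tokens}/${user.dailyTokens}`,
    nextReset: `in ${hours} hours ${minutes} minutes`,
  };
}
```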

## API Endpoints

### Generate Recipe

```sh
curl -X POST http://localhost:9992/ \
  -H "Authorization: YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "threadId": "session123",
    "prompt": "Homemade vanilla ice cream recipe",
    "revision": 0,
    "provider": "ollama"
  }'
```

Request Parameters:

- `threadId` (string, required): Conversation identifier
- `prompt` (string, required): Recipe request (max 300 chars)
- `revision` (number, required): Conversation version
- `provider` (string, optional): LLM provider (`"openai"` or `"ollama"`). Defaults to OpenAI if available.

Response Fields:

- `prompt`: The original prompt
- `threadId`: The conversation identifier
- `recipeMD`: Recipe in Markdown format
- `recipeJSON`: Parsed recipe as a JSON object
- `revision`: Conversation version
- `cached`: Whether the response came from cache
- `provider`: LLM provider used (`"openai"` or `"ollama"`)
- `model`: Model name used (e.g., `"gpt-4.1"`, `"phi4"`)
- `success`: Request success status

Response:

```json
{
  "prompt": "Homemade vanilla ice cream recipe",
  "threadId": "session123",
  "recipeMD": "# Vanilla Ice Cream\n\n...",
  "recipeJSON": {...},
  "revision": 0,
  "cached": false,
  "provider": "ollama",
  "model": "phi4",
  "success": true
}
```
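The same request from Deno/TypeScript, for clients that prefer `fetch` over curl. The endpoint and payload come from the curl example above; the `RESTOFRIGO_API_KEY` environment variable is an assumption for this example:

```ts
// Example client call; assumes the API key is in RESTOFRIGO_API_KEY.
const res = await fetch("http://localhost:9992/", {
  method: "POST",
  headers: {
    "Authorization": Deno.env.get("RESTOFRIGO_API_KEY") ?? "YOUR_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    threadId: "session123",
    prompt: "Homemade vanilla ice cream recipe",
    revision: 0,
    provider: "ollama",
  }),
});

const data = await res.json();
if (data.success) {
  console.log(`provider=${data.provider} model=${data.model} cached=${data.cached}`);
  console.log(data.recipeMD);
}
```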

## Provider Configuration

The API supports multiple LLM providers. At least one provider must be configured.

### OpenAI

Set `OPENAI_API_KEY` in your environment.

### Ollama

Set `OLLAMA_ENDPOINT` and optionally `OLLAMA_MODEL` (defaults to `phi4`).

### Dynamic Selection

Specify the provider in the request body with the `provider` field.
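The fallback described above boils down to something like the following sketch (the repository's actual selection logic may differ):

```ts
// Sketch: honor the request's provider, else prefer OpenAI when configured.
type Provider = "openai" | "ollama";

function selectProvider(requested?: Provider): Provider {
  if (requested) return requested;
  if (Deno.env.get("OPENAI_API_KEY")) return "openai";
  if (Deno.env.get("OLLAMA_ENDPOINT")) return "ollama";
  throw new Error("No LLM provider configured");
}
```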

## Caching Strategy

The API implements per-user, per-conversation caching:

Cache Key: `{promptHash, threadId, revision}`

- `promptHash`: SHA-1 hash of the prompt content
- `threadId`: Unique conversation identifier (includes the user token for isolation)
- `revision`: Conversation version number
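In code, building that key looks roughly like this, using the Web Crypto API available in Deno (a sketch; the repository's exact serialization may differ):

```ts
// Sketch: SHA-1 the prompt, then combine it with threadId and revision.
async function cacheKey(prompt: string, threadId: string, revision: number) {
  const digest = await crypto.subtle.digest(
    "SHA-1",
    new TextEncoder().encode(prompt),
  );
  const promptHash = Array.from(new Uint8Array(digest), (b) =>
    b.toString(16).padStart(2, "0"),
  ).join("");
  return { promptHash, threadId, revision };
}
```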

Behavior:

- Cache hits return instantly and do NOT consume tokens
- Each user's cache is isolated
- The same prompt in different conversations creates separate cache entries
- Cached recipes expire after 30 days (TTL)

Example: User A asking "pasta recipe" in conversation 1 is cached separately from:

- User A asking "pasta recipe" in conversation 2 (different `threadId`)
- User B asking "pasta recipe" (different token)

## Available Commands

From the `server/` directory:

```sh
# Start the server (development mode with watch)
deno task start

# Set up database indexes
deno task setup

# Create a new API key
deno task create --email=user@example.com --tokens=100
```
