Sprint2/prompt design v1 2.3 #8
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Changes from all commits: a55402c, a8863d5, 39ed2b2, 10937ad
`backend/main.py`
```diff
@@ -6,6 +6,12 @@
 from services import GeminiService
 
+# Import the new prompt system
+from backend.prompts.study_gen_v1 import (
+    build_study_generation_prompt,
+    validate_quiz_quality
+)
+
 app = FastAPI(title="Socrato")
```
```diff
@@ -86,26 +92,32 @@ async def generate_study_materials(request: GenerateRequest):
     - quiz (QuizQuestion[]): Array of quiz questions
     """
     # Call Gemini to generate study materials
-    prompt = f"""You are a study assistant. Based on the following notes, generate:
-1. A summary as a list of bullet points (3-5 key points)
-2. A quiz with 3 multiple choice questions
-
-Notes:
-{request.text}
-
-Respond in this exact JSON format:
-{{
-  "summary": ["point 1", "point 2", "point 3"],
-  "quiz": [
-    {{
-      "question": "Question text?",
-      "options": ["A", "B", "C", "D"],
-      "answer": "A"
-    }}
-  ]
-}}
-
-Return ONLY valid JSON, no markdown or extra text."""
+    # prompt = f"""You are a study assistant. Based on the following notes, generate:
+    # 1. A summary as a list of bullet points (3-5 key points)
+    # 2. A quiz with 3 multiple choice questions
+
+    # Notes:
+    # {request.text}
+
+    # Respond in this exact JSON format:
+    # {{
+    #   "summary": ["point 1", "point 2", "point 3"],
+    #   "quiz": [
+    #     {{
+    #       "question": "Question text?",
+    #       "options": ["A", "B", "C", "D"],
+    #       "answer": "A"
+    #     }}
+    #   ]
+    # }}
+
+    # Return ONLY valid JSON, no markdown or extra text."""
+
+    # Build prompt using the centralized prompt system
+    prompt = build_study_generation_prompt(
+        user_notes=request.text,
+        include_examples=True  # Include few-shot examples for better quality
+    )
 
     response = await gemini_service.call_gemini(prompt)
```
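The `study_gen_v1` module itself is not part of this diff. As a rough sketch of the pattern the endpoint now relies on (only the imported names come from the diff; the function body and few-shot text below are assumptions):

```python
# Hypothetical sketch of backend/prompts/study_gen_v1.py -- the real module
# is not shown in this PR; names match the import, content is assumed.

_FEW_SHOT_EXAMPLE = (
    'Example notes: "Photosynthesis converts light to chemical energy."\n'
    'Example output: {"summary": ["Light energy becomes chemical energy"], "quiz": [...]}'
)

def build_study_generation_prompt(user_notes: str, include_examples: bool = False) -> str:
    """Assemble the study-generation prompt from reusable, versioned parts."""
    sections = [
        "You are a study assistant. Based on the following notes, generate:",
        "1. A summary as a list of bullet points (3-5 key points)",
        "2. A quiz with 3 multiple choice questions",
    ]
    if include_examples:
        # Few-shot examples tend to improve output-format compliance
        sections.append(_FEW_SHOT_EXAMPLE)
    sections.append(f"Notes:\n{user_notes}")
    sections.append("Return ONLY valid JSON, no markdown or extra text.")
    return "\n\n".join(sections)
```

The design benefit is that prompt wording, output schema, and few-shot examples are versioned and testable in one place rather than living inline in the endpoint.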
```diff
@@ -161,6 +173,13 @@ async def generate_study_materials(request: GenerateRequest):
             answer=q["answer"]
         ))
 
+    # Optional: Run quality checks on the quiz
+    quality_warnings = validate_quiz_quality(data.get("quiz", []))
+    if quality_warnings:
+        print(f"[generate] Quality warnings: {quality_warnings}")
+        # Can log these or return them to the frontend in the future
+
```
Comment on lines +176 to +180

Contributor

**Sensitive data logged to stdout**

On invalid/failed responses, this endpoint prints `Raw response: {response}` and also prints `quality_warnings` unconditionally. Gemini output can contain user-provided notes verbatim, so this will leak user content into server logs. Since this PR adds additional logging paths, it should be gated/removed or switched to structured logging with redaction (and avoid printing raw model output).
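A concise way to resolve this, as a sketch: route these messages through the standard `logging` module and log metadata rather than content (the helper names below are hypothetical, not existing code in this repo):

```python
import logging

logger = logging.getLogger("socrato.generate")

def log_quality_warnings(warnings: list[str]) -> None:
    """Record that quality checks fired without echoing their content,
    since warning strings may quote user notes or model output verbatim."""
    if warnings:
        logger.warning("quiz quality checks produced %d warning(s)", len(warnings))

def log_invalid_response(response: str) -> None:
    """On invalid/failed Gemini responses, log metadata only, never the raw body."""
    logger.error("Gemini response failed validation (length=%d chars)", len(response))
```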
```diff
     return GenerateResponse(
         summary=data.get("summary", []),
         quiz=quiz_questions
```
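`validate_quiz_quality` is also not shown in this diff. As a rough sketch of what a checker with that signature might do, given the quiz schema above (every specific check here is an assumption):

```python
def validate_quiz_quality(quiz: list[dict]) -> list[str]:
    """Return human-readable warnings for common quiz defects.
    Sketch only: the actual checks in study_gen_v1 are not shown in the diff."""
    warnings: list[str] = []
    for i, q in enumerate(quiz):
        options = q.get("options", [])
        if len(options) != 4:
            warnings.append(f"question {i}: expected 4 options, got {len(options)}")
        if len(set(options)) != len(options):
            warnings.append(f"question {i}: duplicate options")
        if q.get("answer") not in options:
            warnings.append(f"question {i}: answer not among the options")
        if not q.get("question", "").strip():
            warnings.append(f"question {i}: empty question text")
    return warnings
```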
**Broken import when run in `backend/`**

`from backend.prompts.study_gen_v1 import ...` will fail when starting the app from within `backend/` (as documented via `uvicorn main:app --reload` in `backend/README.md`), because `backend` won't be a top-level package in that execution context. This makes the server crash on startup in the common local/dev invocation; use an import that works from `backend/` (e.g. `from prompts.study_gen_v1 ...`) or adjust the run command to `uvicorn backend.main:app` so the package import is valid.
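One concise fix, as a sketch: a fallback import that works under both invocations (the alternative is standardizing the documented run command on `uvicorn backend.main:app` and keeping the absolute import):

```python
# Works whether uvicorn is launched from the repo root (backend.main:app)
# or from inside backend/ (main:app), where `backend` is not importable.
try:
    from backend.prompts.study_gen_v1 import (
        build_study_generation_prompt,
        validate_quiz_quality,
    )
except ImportError:
    from prompts.study_gen_v1 import (  # fallback when run from backend/
        build_study_generation_prompt,
        validate_quiz_quality,
    )
```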
from backend.prompts.study_gen_v1 import ...will fail when starting the app from withinbackend/(as documented viauvicorn main:app --reloadinbackend/README.md), becausebackendwon’t be a top-level package in that execution context. This makes the server crash on startup in the common local/dev invocation; use an import that works frombackend/(e.g.from prompts.study_gen_v1 ...) or adjust the run command touvicorn backend.main:appso the package import is valid.Prompt To Fix With AI