Conversation
I haven't tested it very thoroughly, but it should work ;-)
Code Review
This pull request integrates Google Gemini as a new AI provider for LinkedIn post generation, updating the documentation, backend routes, and AI service logic. Feedback focuses on refactoring the Gemini implementation to use the official Python SDK for better maintainability and reliability, as well as consolidating duplicated API key retrieval logic within the backend routes. A minor documentation error in the README was also identified where a provider was omitted.
```python
def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:
    """
    Generate post using Google Gemini API.

    This is a PRO tier provider.
    """
    key = api_key or GEMINI_API_KEY
    if not key:
        logger.warning("No Gemini API key available")
        return None

    try:
        url = f"https://generativelanguage.googleapis.com/v1beta/models/{GEMINI_MODEL}:generateContent"
        payload = {
            "system_instruction": {
                "parts": [{"text": system_prompt}]
            },
            "contents": [
                {
                    "role": "user",
                    "parts": [{"text": user_prompt}],
                }
            ],
            "generationConfig": {
                "temperature": temperature,
                "maxOutputTokens": 1000,
            },
        }

        response = requests.post(
            url,
            params={"key": key},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()

        data = response.json()
        candidates = data.get("candidates") or []
        if not candidates:
            logger.warning("gemini_empty_candidates", model=GEMINI_MODEL)
            return None

        parts = ((candidates[0].get("content") or {}).get("parts")) or []
        if not parts:
            logger.warning("gemini_empty_parts", model=GEMINI_MODEL)
            return None

        text = parts[0].get("text")
        if not text:
            logger.warning("gemini_empty_text", model=GEMINI_MODEL)
            return None

        return text

    except Exception as e:
        logger.error("gemini_generation_failed", error=str(e))
        return None
```
The Gemini integration is implemented using raw `requests` calls to a v1beta API endpoint. This has a few drawbacks:
- Maintainability: Using the official `google-generativeai` Python SDK is more robust. The SDK provides a cleaner interface and handles details like authentication, error handling, and API evolution more gracefully.
- Reliability: Relying on a `v1beta` endpoint in production is risky, as it may have breaking changes. The official SDK typically uses stable API versions.
- Verbosity: The response parsing logic is verbose and can be significantly simplified by using the SDK.

I suggest refactoring this function to use the official SDK. You'll need to add `google-generativeai` to your requirements.txt file and import it at the top of this file, similar to the other AI provider SDKs:
```python
try:
    import google.generativeai as genai
    GEMINI_AVAILABLE = True
except ImportError:
    genai = None
    GEMINI_AVAILABLE = False


def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:
    """
    Generate post using Google Gemini API via the official Python SDK.

    This is a PRO tier provider.
    """
    if not GEMINI_AVAILABLE:
        logger.error("Google Generative AI package not installed")
        return None

    key = api_key or GEMINI_API_KEY
    if not key:
        logger.warning("No Gemini API key available")
        return None

    try:
        genai.configure(api_key=key)
        model = genai.GenerativeModel(
            model_name=GEMINI_MODEL,
            system_instruction=system_prompt,
            generation_config=genai.types.GenerationConfig(
                temperature=temperature,
                max_output_tokens=1000,
            ),
        )
        response = model.generate_content(user_prompt)
        # The SDK handles response parsing and provides a clean text attribute
        if response.text:
            return response.text
        logger.warning("gemini_empty_response", model=GEMINI_MODEL, parts=response.parts)
        return None
    except Exception as e:
        logger.error("gemini_generation_failed", error=str(e))
        return None
```

```python
    groq_api_key = None
    openai_api_key = None
    anthropic_api_key = None
    gemini_api_key = None
```
The logic for fetching API keys from user settings is duplicated across three endpoints: generate_preview, repurpose_url, and generate_batch. This makes the code harder to maintain, as any change needs to be applied in three places.
To adhere to the DRY (Don't Repeat Yourself) principle, consider refactoring this logic into a single helper function. For example:
```python
async def _get_user_api_keys(user_id: str) -> dict:
    if not user_id or not get_user_settings:
        return {}
    try:
        settings = await get_user_settings(user_id)
        if not settings:
            return {}
        return {
            'groq_api_key': settings.get('groq_api_key'),
            'openai_api_key': settings.get('openai_api_key'),
            'anthropic_api_key': settings.get('anthropic_api_key'),
            'gemini_api_key': settings.get('gemini_api_key'),
        }
    except Exception as e:
        logger.warning("failed_to_get_user_settings", error=str(e))
        return {}
```

You could then call this helper in each endpoint to retrieve the keys.
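For illustration, here is a self-contained sketch of wiring such a helper into one endpoint. The stub `get_user_settings` and the endpoint name `generate_preview` are stand-ins for the real implementations, not the project's actual code:

```python
import asyncio

# Stand-in for the settings service; in the real code this is imported
# from the backend and queries the user's stored settings.
async def get_user_settings(user_id):
    return {"groq_api_key": "gsk_demo", "gemini_api_key": "AIza_demo"}

async def _get_user_api_keys(user_id: str) -> dict:
    """Fetch all per-user provider keys in one place (DRY)."""
    if not user_id or not get_user_settings:
        return {}
    try:
        settings = await get_user_settings(user_id)
        if not settings:
            return {}
        return {
            "groq_api_key": settings.get("groq_api_key"),
            "openai_api_key": settings.get("openai_api_key"),
            "anthropic_api_key": settings.get("anthropic_api_key"),
            "gemini_api_key": settings.get("gemini_api_key"),
        }
    except Exception:
        return {}

async def generate_preview(user_id: str):
    # One helper call replaces the duplicated per-key lookups
    # that previously appeared in each of the three endpoints.
    keys = await _get_user_api_keys(user_id)
    return keys.get("gemini_api_key")

print(asyncio.run(generate_preview("user-123")))  # → AIza_demo
```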
Pull request overview
Adds Google Gemini as an additional Pro-tier AI provider for LinkedIn post generation, and updates routing, tests, and documentation accordingly.
Changes:
- Added `ModelProvider.GEMINI` plus tier allowlisting for Pro/Enterprise users.
- Implemented Gemini generation via the Google Generative Language REST API and wired it into provider routing.
- Updated FastAPI routes, tests, and docs to include Gemini in provider lists and setup instructions.
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| services/ai_service.py | Adds Gemini provider constants, tier allowlisting, REST-based generation, and provider availability reporting. |
| backend/tests/test_bug_fixes.py | Extends tier enforcement and provider listing tests to include Gemini. |
| backend/routes/posts.py | Passes per-user gemini_api_key through request handling and updates provider docs/fallbacks. |
| README.md | Updates high-level docs to mention Gemini (but contains a small inconsistency). |
| Guides/SETUP_GEMINI.md | Removes the standalone guide file. |
| Guides/SETUP_AI.md | Updates the main AI setup guide to include Gemini API key/configuration. |
services/ai_service.py
Outdated
```python
url = f"https://generativelanguage.googleapis.com/v1beta/models/{GEMINI_MODEL}:generateContent"
payload = {
    "system_instruction": {
        "parts": [{"text": system_prompt}]
    },
    "contents": [
        {
            "role": "user",
            "parts": [{"text": user_prompt}],
        }
    ],
    "generationConfig": {
        "temperature": temperature,
        "maxOutputTokens": 1000,
    },
}
```
The Google Generative Language `generateContent` REST API expects camelCase field names (e.g., `systemInstruction`), not `system_instruction`. As written, the system prompt may be ignored or the request may be rejected due to unknown fields. Rename the request JSON fields to the API's expected schema (and keep naming consistent across the payload).
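As a sketch of the suggested fix, the payload with consistently camelCased field names might look as follows. The prompt strings and temperature are placeholder values for illustration:

```python
# Placeholder values; the real ones come from the service's arguments.
system_prompt = "You write LinkedIn posts."
user_prompt = "Draft a post about code review."
temperature = 0.8

payload = {
    "systemInstruction": {  # camelCase, consistent with generationConfig below
        "parts": [{"text": system_prompt}],
    },
    "contents": [
        {"role": "user", "parts": [{"text": user_prompt}]},
    ],
    "generationConfig": {  # was already camelCase in the original payload
        "temperature": temperature,
        "maxOutputTokens": 1000,
    },
}

# The snake_case key is gone and all top-level fields are camelCase.
assert "system_instruction" not in payload
print(sorted(payload))  # → ['contents', 'generationConfig', 'systemInstruction']
```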
```python
response = requests.post(
    url,
    params={"key": key},
    json=payload,
    timeout=30,
)
```
Creating a new requests.post call without a shared requests.Session prevents connection pooling and can significantly increase latency and resource usage under load. Reuse a module-level/session-cached Session (similar to how other providers are cached) so TLS connections can be kept alive and performance is more predictable.
```python
def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:
```
The new Gemini integration introduces request/response parsing and multiple error branches (empty candidates/parts/text, HTTP errors). Add unit tests that mock the HTTP call (e.g., via responses/requests-mock) to validate payload construction and response parsing behavior, including failure modes.
```diff
 │   └── dependencies.py   # DI helpers
 ├── services/             # Core Business Logic
-│   ├── ai_service.py     # Multi-provider AI (Groq, OpenAI, Anthropic, Mistral)
+│   ├── ai_service.py     # Multi-provider AI (Groq, OpenAI, Anthropic,  or Gemini)
```
This line appears to have an extra space before `or`, and it omits Mistral even though the service supports it elsewhere in the README. Update it to list all supported providers consistently (Groq, OpenAI, Anthropic, Mistral, Gemini) and fix the spacing.
```diff
-│   ├── ai_service.py     # Multi-provider AI (Groq, OpenAI, Anthropic,  or Gemini)
+│   ├── ai_service.py     # Multi-provider AI (Groq, OpenAI, Anthropic, Mistral or Gemini)
```
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Hello there, I added Gemini support via the Google API. I haven't tested it thoroughly, but it works on my test account.