Added full Gemini support#27

Open
Joosibaeri wants to merge 6 commits into cliff-de-tech:main from Joosibaeri:main

Conversation

@Joosibaeri

Hello there, I added Gemini support via the Google API. I haven't tested it thoroughly, but it works on my test account.
Copilot AI review requested due to automatic review settings March 27, 2026 19:09

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request integrates Google Gemini as a new AI provider for LinkedIn post generation, updating the documentation, backend routes, and AI service logic. Feedback focuses on refactoring the Gemini implementation to use the official Python SDK for better maintainability and reliability, as well as consolidating duplicated API key retrieval logic within the backend routes. A minor documentation error in the README was also identified where a provider was omitted.

Comment on lines +1002 to +1064
def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:
    """
    Generate post using Google Gemini API.

    This is a PRO tier provider.
    """
    key = api_key or GEMINI_API_KEY
    if not key:
        logger.warning("No Gemini API key available")
        return None

    try:
        url = f"https://generativelanguage.googleapis.com/v1beta/models/{GEMINI_MODEL}:generateContent"
        payload = {
            "system_instruction": {
                "parts": [{"text": system_prompt}]
            },
            "contents": [
                {
                    "role": "user",
                    "parts": [{"text": user_prompt}],
                }
            ],
            "generationConfig": {
                "temperature": temperature,
                "maxOutputTokens": 1000,
            },
        }

        response = requests.post(
            url,
            params={"key": key},
            json=payload,
            timeout=30,
        )
        response.raise_for_status()

        data = response.json()
        candidates = data.get("candidates") or []
        if not candidates:
            logger.warning("gemini_empty_candidates", model=GEMINI_MODEL)
            return None

        parts = ((candidates[0].get("content") or {}).get("parts")) or []
        if not parts:
            logger.warning("gemini_empty_parts", model=GEMINI_MODEL)
            return None

        text = parts[0].get("text")
        if not text:
            logger.warning("gemini_empty_text", model=GEMINI_MODEL)
            return None

        return text

    except Exception as e:
        logger.error("gemini_generation_failed", error=str(e))
        return None

high

The Gemini integration is implemented using raw requests calls to a v1beta API endpoint. This has a few drawbacks:

  • Maintainability: Using the official google-generativeai Python SDK is more robust. The SDK provides a cleaner interface and handles details like authentication, error handling, and API evolution more gracefully.
  • Reliability: Relying on a v1beta endpoint in production is risky, as it may have breaking changes. The official SDK typically uses stable API versions.
  • Verbosity: The response parsing logic is verbose and can be significantly simplified by using the SDK.

I suggest refactoring this function to use the official SDK. You'll need to add google-generativeai to your requirements.txt file and import it at the top of this file, similar to the other AI provider SDKs:

try:
    import google.generativeai as genai
    GEMINI_AVAILABLE = True
except ImportError:
    genai = None
    GEMINI_AVAILABLE = False
def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:
    """
    Generate post using Google Gemini API via the official Python SDK.

    This is a PRO tier provider.
    """
    if not GEMINI_AVAILABLE:
        logger.error("Google Generative AI package not installed")
        return None

    key = api_key or GEMINI_API_KEY
    if not key:
        logger.warning("No Gemini API key available")
        return None

    try:
        genai.configure(api_key=key)
        
        model = genai.GenerativeModel(
            model_name=GEMINI_MODEL,
            system_instruction=system_prompt,
            generation_config=genai.types.GenerationConfig(
                temperature=temperature,
                max_output_tokens=1000,
            )
        )
        
        response = model.generate_content(user_prompt)
        
        # The SDK handles response parsing and provides a clean text attribute
        if response.text:
            return response.text
        
        logger.warning("gemini_empty_response", model=GEMINI_MODEL, parts=response.parts)
        return None

    except Exception as e:
        logger.error("gemini_generation_failed", error=str(e))
        return None

Author


I’ll go through that again

groq_api_key = None
openai_api_key = None
anthropic_api_key = None
gemini_api_key = None

medium

The logic for fetching API keys from user settings is duplicated across three endpoints: generate_preview, repurpose_url, and generate_batch. This makes the code harder to maintain, as any change needs to be applied in three places.

To adhere to the DRY (Don't Repeat Yourself) principle, consider refactoring this logic into a single helper function. For example:

async def _get_user_api_keys(user_id: str) -> dict:
    if not user_id or not get_user_settings:
        return {}
    try:
        settings = await get_user_settings(user_id)
        if not settings:
            return {}
        return {
            'groq_api_key': settings.get('groq_api_key'),
            'openai_api_key': settings.get('openai_api_key'),
            'anthropic_api_key': settings.get('anthropic_api_key'),
            'gemini_api_key': settings.get('gemini_api_key'),
        }
    except Exception as e:
        logger.warning("failed_to_get_user_settings", error=str(e))
        return {}

You could then call this helper in each endpoint to retrieve the keys.


Copilot AI left a comment


Pull request overview

Adds Google Gemini as an additional Pro-tier AI provider for LinkedIn post generation, and updates routing, tests, and documentation accordingly.

Changes:

  • Added ModelProvider.GEMINI plus tier allowlisting for Pro/Enterprise users.
  • Implemented Gemini generation via Google Generative Language REST API and wired it into provider routing.
  • Updated FastAPI routes, tests, and docs to include Gemini in provider lists and setup instructions.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
services/ai_service.py Adds Gemini provider constants, tier allowlisting, REST-based generation, and provider availability reporting.
backend/tests/test_bug_fixes.py Extends tier enforcement and provider listing tests to include Gemini.
backend/routes/posts.py Passes per-user gemini_api_key through request handling and updates provider docs/fallbacks.
README.md Updates high-level docs to mention Gemini (but contains a small inconsistency).
Guides/SETUP_GEMINI.md Removes the standalone guide file.
Guides/SETUP_AI.md Updates the main AI setup guide to include Gemini API key/configuration.


Comment on lines +1019 to +1034
url = f"https://generativelanguage.googleapis.com/v1beta/models/{GEMINI_MODEL}:generateContent"
payload = {
    "system_instruction": {
        "parts": [{"text": system_prompt}]
    },
    "contents": [
        {
            "role": "user",
            "parts": [{"text": user_prompt}],
        }
    ],
    "generationConfig": {
        "temperature": temperature,
        "maxOutputTokens": 1000,
    },
}

Copilot AI Mar 27, 2026


The Google Generative Language generateContent REST API expects camelCase field names (e.g., systemInstruction), not system_instruction. As written, the system prompt may be ignored or the request may be rejected due to unknown fields. Rename the request JSON fields to the API’s expected schema (and keep naming consistent across the payload).
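Following the reviewer's suggestion, a payload using camelCase field names throughout might look like the sketch below. The prompt strings and temperature are illustrative placeholders, not values from the codebase.

```python
# Illustrative inputs; in the real function these are parameters.
system_prompt = "You write LinkedIn posts."
user_prompt = "Draft a post about Gemini support."
temperature = 0.8

payload = {
    "systemInstruction": {          # camelCase, as the review suggests
        "parts": [{"text": system_prompt}]
    },
    "contents": [
        {"role": "user", "parts": [{"text": user_prompt}]}
    ],
    "generationConfig": {           # already camelCase in the original
        "temperature": temperature,
        "maxOutputTokens": 1000,
    },
}
```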

Comment on lines +1036 to +1041
response = requests.post(
    url,
    params={"key": key},
    json=payload,
    timeout=30,
)

Copilot AI Mar 27, 2026


Creating a new requests.post call without a shared requests.Session prevents connection pooling and can significantly increase latency and resource usage under load. Reuse a module-level/session-cached Session (similar to how other providers are cached) so TLS connections can be kept alive and performance is more predictable.

Comment on lines +1002 to +1007
def _generate_with_gemini(
    system_prompt: str,
    user_prompt: str,
    api_key: Optional[str] = None,
    temperature: float = 0.8,
) -> Optional[str]:

Copilot AI Mar 27, 2026


The new Gemini integration introduces request/response parsing and multiple error branches (empty candidates/parts/text, HTTP errors). Add unit tests that mock the HTTP call (e.g., via responses/requests-mock) to validate payload construction and response parsing behavior, including failure modes.
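A test along these lines can mock the HTTP layer with the standard library's `unittest.mock.patch`, so no network traffic occurs. The parsing helper below is a self-contained stand-in mirroring the branches in `_generate_with_gemini`; a real test would instead import the function from `services.ai_service` and patch `requests.post` in that module's namespace.

```python
from typing import Optional
from unittest.mock import MagicMock, patch

import requests

# Stand-in mirroring the candidates/parts/text parsing of _generate_with_gemini.
def _parse_gemini_response(data: dict) -> Optional[str]:
    candidates = data.get("candidates") or []
    if not candidates:
        return None
    parts = ((candidates[0].get("content") or {}).get("parts")) or []
    if not parts:
        return None
    return parts[0].get("text") or None

def fetch(url: str, payload: dict) -> Optional[str]:
    resp = requests.post(url, json=payload, timeout=30)
    resp.raise_for_status()
    return _parse_gemini_response(resp.json())

# Fake response object: deterministic, no network involved.
fake = MagicMock()
fake.json.return_value = {
    "candidates": [{"content": {"parts": [{"text": "Hello from Gemini"}]}}]
}
fake.raise_for_status.return_value = None

with patch("requests.post", return_value=fake):
    result = fetch("https://example.invalid/generateContent", {"contents": []})

# Failure modes: empty candidates and a wholly empty body.
empty = _parse_gemini_response({"candidates": []})
missing = _parse_gemini_response({})
```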

│ └── dependencies.py # DI helpers
├── services/ # Core Business Logic
│ ├── ai_service.py # Multi-provider AI (Groq, OpenAI, Anthropic, Mistral)
│ ├── ai_service.py # Multi-provider AI (Groq, OpenAI, Anthropic, or Gemini)

Copilot AI Mar 27, 2026


This line appears to have an extra space before or, and it omits Mistral even though the service supports it elsewhere in the README. Update it to list all supported providers consistently (Groq, OpenAI, Anthropic, Mistral, Gemini) and fix the spacing.

Suggested change
│ ├── ai_service.py # Multi-provider AI (Groq, OpenAI, Anthropic, or Gemini)
│ ├── ai_service.py # Multi-provider AI (Groq, OpenAI, Anthropic, Mistral or Gemini)

Joosibaeri and others added 2 commits March 28, 2026 05:35
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@Joosibaeri changed the title from "Added Gemini support" to "Added full Gemini support" Mar 28, 2026