Adding missing features that are not selected on the readme.md (#3077)
DevFlex-AI wants to merge 11 commits into onlook-dev:main from a fork
Conversation
This is a fork, and we are adding the features that Onlook doesn't have so we can open a pull request to get them in.
Someone is attempting to deploy a commit to the Onlook Team on Vercel. A member of the Team first needs to authorize it.
📝 Walkthrough: Adds DB enums/tables/migrations with RLS and policies; introduces many Drizzle schemas and project relations; adds server Figma env vars and new TRPC routers; and creates a new platform-extensions package.
Changes
Sequence Diagram(s)sequenceDiagram
participant Client as Client
participant API as TRPC API
participant FigmaService as FigmaIntegrationService
participant FigmaClient as FigmaApiClient
participant Parser as FigmaFileParser
participant Database as Database
rect rgba(220,235,255,0.5)
Note over Client,Database: Figma File Import Flow
Client->>API: importFile(fileId, token)
API->>FigmaService: importFile(fileId)
activate FigmaService
FigmaService->>FigmaClient: getFile(fileId)
FigmaClient-->>FigmaService: file data
FigmaService->>Parser: parseDocument(file data)
Parser-->>FigmaService: components, assets, tokens
FigmaService->>FigmaClient: getFileImages(nodeIds)
FigmaClient-->>FigmaService: image URLs
FigmaService->>Database: persist import (assets/components/tokens)
Database-->>FigmaService: OK
deactivate FigmaService
FigmaService-->>API: FigmaImportResult
API-->>Client: result
end
sequenceDiagram
participant Client as Client
participant API as TRPC API
participant AssetService as AssetManagementService
participant Storage as File Storage
participant Database as Database
rect rgba(235,255,230,0.5)
Note over Client,Database: Asset Upload & Optimization Flow
Client->>API: uploadAsset(projectId, fileData)
API->>AssetService: uploadAsset(file, projectId)
activate AssetService
AssetService->>AssetService: validate & buffer file
AssetService->>Storage: store original
Storage-->>AssetService: originalPath
alt image
AssetService->>AssetService: create thumbnail & optimized
AssetService->>Storage: store thumbnail/optimized
Storage-->>AssetService: paths
end
AssetService->>Database: save asset record
Database-->>AssetService: assetId
deactivate AssetService
AssetService-->>API: AssetUploadResult
API-->>Client: { id, url, paths }
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Possibly related PRs
@coderabbitai I made this in 1 hour so just expect to see some errors
@DevFlex-AI Thanks for the heads up! No worries at all – quick iterations are part of the development process, and I'm here to help catch any issues. 🚀 I've already reviewed the changes and left some feedback. The main points to consider are:
Feel free to address the feedback at your own pace. If you'd like me to take another look after you make changes, just let me know! Happy to help make this PR ready to merge. 👍
|
@coderabbitai so are there any issues with this, or can the maintainers build it without any problems?
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (18)
packages/platform-extensions/src/comments/service.ts-238-245 (1)
238-245: Username to userId resolution missing. The processMentions method extracts usernames from @mentions and passes them to mentionUser, but mentionUser expects a userId (UUID). There's no resolution of usernames to user IDs. Add a username lookup step before calling mentionUser:
private async processMentions(commentId: string, mentions: string[]): Promise<void> {
    for (const username of mentions) {
        try {
            const userId = await this.resolveUsername(username);
            if (userId) {
                await this.mentionUser(commentId, userId);
            }
        } catch (error) {
            console.error(`Failed to mention user ${username}:`, error);
        }
    }
}
packages/platform-extensions/src/comments/service.ts-7-218 (1)
7-218: Missing authorization checks for all operations. The service lacks any authorization logic. Methods like resolveComment, updateComment, deleteComment, and addReply don't verify that the requesting user has permission to perform these actions. This would allow any user to modify or delete any comment. Consider adding authorization checks that verify (a minimal sketch follows the list below):
- User can only update/delete their own comments
- User has appropriate project permissions
- Comment belongs to a project the user can access
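For illustration, an ownership check on update could look like the following sketch; the getCommentOrThrow helper is a hypothetical name, not something the current service exposes:
async updateComment(commentId: string, userId: string, content: string): Promise<void> {
    // Load the comment first so ownership can be compared (hypothetical helper).
    const comment = await this.getCommentOrThrow(commentId);
    // Only the author may edit their own comment; project-level permission
    // checks would slot in here as well.
    if (comment.userId !== userId) {
        throw new Error('Forbidden: you can only update your own comments');
    }
    comment.content = content;
    comment.updatedAt = new Date();
}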
packages/platform-extensions/src/comments/service.ts-1-291 (1)
1-291: In-memory storage is not production-ready. The service uses in-memory Map storage (lines 4-5) and all database operations are simulated with console.log statements. This means all comment data will be lost when the service restarts. Consider implementing actual database persistence using the database schemas defined elsewhere in this PR (e.g., packages/db/src/schema/comment/comment.ts); a minimal sketch follows.
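A minimal persistence sketch using Drizzle, assuming a shared db instance and the comments table exported from the db package; both import paths are illustrative and not confirmed against this PR:
import { db } from '@onlook/db/client';
import { comments } from '@onlook/db/schema';

async function persistComment(projectId: string, userId: string, content: string) {
    // Insert a row instead of keeping the comment in an in-memory Map.
    const [created] = await db
        .insert(comments)
        .values({ projectId, userId, content })
        .returning();
    return created;
}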
packages/platform-extensions/src/comments/service.ts-7-120 (1)
7-120: Add input sanitization for comment content. The content parameter in createComment, addReply, and updateComment is not sanitized or validated. If this content is rendered as HTML in the UI, it could lead to XSS attacks. Consider (see the sketch after this list):
- Sanitizing HTML content
- Validating maximum length
- Escaping special characters
- Using a content security policy
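One possible shape for this, sketched with zod for length limits and the sanitize-html package for tag stripping; the exact limit and allowed-tag policy are assumptions rather than project conventions:
import { z } from 'zod';
import sanitizeHtml from 'sanitize-html';

const commentContentSchema = z.string().trim().min(1).max(5000);

function sanitizeCommentContent(raw: string): string {
    // Validate emptiness/length first, then strip all HTML tags and attributes.
    const content = commentContentSchema.parse(raw);
    return sanitizeHtml(content, { allowedTags: [], allowedAttributes: {} });
}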
packages/platform-extensions/src/github/service.ts-28-54 (1)
28-54: Replace any types with proper typing per coding guidelines. Multiple methods use Promise<any> return types (listRepositories, getRepository, makeRequest). Line 242 also uses (branch: any). 🔎 Example typed approach:
interface GitHubRepository {
    id: number;
    name: string;
    full_name: string;
    default_branch: string;
    // ... other fields
}

async listRepositories(): Promise<GitHubRepository[]> {
    // ...
    return response as GitHubRepository[];
}
packages/platform-extensions/src/figma/api-client.ts-55-140 (1)
55-140: Pervasive use of any return types violates coding guidelines. All public API methods (getFile, getFileNodes, getFileImages, etc.) return Promise<any>. Additionally, clientMeta?: any on line 132 should be typed. Consider defining interfaces for Figma API responses or using Promise<unknown> with explicit casting at call sites. 🔎 Example type definitions:
// Define in types.ts or inline
interface FigmaFileResponse {
    document: unknown;
    components: Record<string, unknown>;
    // ... other fields
}

// Then use:
async getFile(fileId: string): Promise<FigmaFileResponse> {
    return this.makeRequest(`/files/${fileId}`) as Promise<FigmaFileResponse>;
}
packages/platform-extensions/src/figma/api-client.ts-19-50 (1)
19-50: Replace any return types and add a retry limit to prevent infinite loops. Per coding guidelines, avoid any types. The makeRequest method and all public methods return Promise<any>; consider using Promise<unknown> or defining proper response types. The 429 retry logic has no maximum retry count, risking infinite recursion if the API persistently rate-limits. The Retry-After header can be either seconds or an HTTP-date string per RFC 7231; parseInt will fail silently on dates. 🔎 Suggested fix for retry limiting:
- private async makeRequest(endpoint: string, options: RequestInit = {}): Promise<any> { + private async makeRequest(endpoint: string, options: RequestInit = {}, retryCount = 0): Promise<unknown> { + const MAX_RETRIES = 3; // Rate limiting const now = Date.now(); const timeSinceLastRequest = now - this.lastRequestTime; if (timeSinceLastRequest < this.rateLimitDelay) { await new Promise(resolve => setTimeout(resolve, this.rateLimitDelay - timeSinceLastRequest)); } this.lastRequestTime = Date.now(); const url = `${this.baseUrl}${endpoint}`; const response = await fetch(url, { ...options, headers: { 'Authorization': `Bearer ${this.accessToken}`, 'Content-Type': 'application/json', ...options.headers, }, }); if (!response.ok) { if (response.status === 429) { + if (retryCount >= MAX_RETRIES) { + throw new Error('Figma API rate limit exceeded after max retries'); + } // Rate limited, wait and retry const retryAfter = response.headers.get('Retry-After'); - const delay = retryAfter ? parseInt(retryAfter) * 1000 : 5000; + const delay = retryAfter ? (parseInt(retryAfter, 10) || 5) * 1000 : 5000; await new Promise(resolve => setTimeout(resolve, delay)); - return this.makeRequest(endpoint, options); + return this.makeRequest(endpoint, options, retryCount + 1); } throw new Error(`Figma API error: ${response.status} ${response.statusText}`); } return response.json(); }packages/platform-extensions/src/github/service.ts-248-260 (1)
248-260: Guard against an empty commits array to prevent a crash on empty repositories. Accessing commitsResponse[0] without checking array length will throw if the repository has no commits. 🔎 Suggested fix:
// Get latest commit const commitsResponse = await this.makeRequest(`/repos/${owner}/${repoName}/commits?per_page=1`); - const lastCommit = commitsResponse[0]; + + if (!commitsResponse.length) { + return { + branches, + lastCommit: undefined, + status: 'synced', + }; + } + const lastCommit = commitsResponse[0]; return { branches, lastCommit: {Note: This requires making
lastCommit optional in RepositoryState as suggested in the types file.
packages/platform-extensions/src/github/service.ts-68-99 (1)
68-99: Validate repo format to prevent a runtime crash on malformed input. The repo.split('/') pattern assumes the input is always "owner/repo" format. If the input is malformed (e.g., missing /), owner or repoName will be undefined, causing API calls to fail with confusing errors. 🔎 Suggested validation helper:
+ private parseRepoString(repo: string): { owner: string; repoName: string } { + const parts = repo.split('/'); + if (parts.length !== 2 || !parts[0] || !parts[1]) { + throw new Error(`Invalid repository format: "${repo}". Expected "owner/repo".`); + } + return { owner: parts[0], repoName: parts[1] }; + } async createBranch(repo: string, branchName: string): Promise<Branch> { // ... - const [owner, repoName] = repo.split('/'); + const { owner, repoName } = this.parseRepoString(repo);Committable suggestion skipped: line range outside the PR's diff.
apps/web/client/src/server/api/routers/comments.ts-37-46 (1)
37-46: Same security issue: userId should be derived from context. Apply the same fix to use ctx.user.id instead of accepting userId from input. 🔎 Proposed fix:
addReply: protectedProcedure .input(z.object({ commentId: z.string(), content: z.string(), - userId: z.string(), })) - .mutation(async ({ input }) => { + .mutation(async ({ input, ctx }) => { const service = new CommentService(); - return service.addReply(input.commentId, input.content, input.userId); + return service.addReply(input.commentId, input.content, ctx.user.id); }),apps/web/client/src/server/api/routers/comments.ts-6-28 (1)
6-28: Security concern: userId should come from the authenticated context, not user input. The protectedProcedure already ensures an authenticated user exists in ctx.user. Accepting userId from user input allows callers to impersonate other users. Use ctx.user.id instead. 🔎 Proposed fix:
createComment: protectedProcedure .input(z.object({ projectId: z.string(), elementId: z.string(), content: z.string(), position: z.object({ x: z.number(), y: z.number(), elementSelector: z.string(), pageUrl: z.string(), }), - userId: z.string(), })) - .mutation(async ({ input }) => { + .mutation(async ({ input, ctx }) => { const service = new CommentService(); return service.createComment( input.projectId, input.elementId, input.content, input.position, - input.userId + ctx.user.id ); }),apps/web/client/src/server/api/routers/assets.ts-6-29 (1)
6-29: Add validation constraints for file uploads. Missing constraints could lead to:
- Memory exhaustion from large base64 payloads
- Accepting disallowed file types
Consider adding limits and type validation.
🔎 Proposed fix
uploadAsset: protectedProcedure .input(z.object({ projectId: z.string(), fileName: z.string(), - fileType: z.string(), - fileSize: z.number(), - fileData: z.string(), // base64 encoded + fileType: z.string().refine( + (type) => ['image/png', 'image/jpeg', 'image/gif', 'image/svg+xml', 'image/webp'].includes(type), + { message: 'Unsupported file type' } + ), + fileSize: z.number().max(10 * 1024 * 1024, 'File size must be under 10MB'), + fileData: z.string().max(14 * 1024 * 1024), // ~10MB after base64 encoding overhead }))packages/platform-extensions/src/components/types.ts-13-20 (1)
13-20: Replace any with unknown for defaultValue. The coding guidelines specify avoiding any unless necessary. Use unknown here for consistency with PropDefinition.defaultValue (line 46), which correctly uses unknown. Suggested fix:
export interface ComponentProp { name: string; type: PropType; required: boolean; - defaultValue?: any; + defaultValue?: unknown; description?: string; options?: string[]; // For enum types }As per coding guidelines, avoid using the
any type unless absolutely necessary.
packages/platform-extensions/src/figma/client.ts-79-86 (1)
79-86: Add a maximum retry limit to prevent infinite retry loops. The rate limit retry logic has no maximum retry count. A persistent 429 response could cause infinite retries.
Suggested fix
- private async makeRequest<T>(endpoint: string, options: RequestInit = {}): Promise<T> { + private async makeRequest<T>(endpoint: string, options: RequestInit = {}, retryCount = 0): Promise<T> { + const MAX_RETRIES = 3; const url = endpoint.startsWith('http') ? endpoint : `${this.baseUrl}${endpoint}`; const response = await fetch(url, { ...options, headers: { 'Authorization': `Bearer ${this.accessToken}`, 'Content-Type': 'application/json', ...options.headers, }, }); // Handle rate limiting if (response.status === 429) { + if (retryCount >= MAX_RETRIES) { + throw new FigmaApiError('Rate limit exceeded after max retries', 429, ''); + } const retryAfter = response.headers.get('Retry-After'); - const waitTime = retryAfter ? parseInt(retryAfter) * 1000 : 60000; // Default 60s + const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : 60000; await this.sleep(waitTime); - return this.makeRequest<T>(endpoint, options); // Retry + return this.makeRequest<T>(endpoint, options, retryCount + 1); }apps/web/client/src/server/api/routers/components-panel.ts-44-57 (1)
44-57: Missing projectId and service initialization in insertComponent. Unlike other procedures, insertComponent doesn't accept a projectId and doesn't call service.initialize(). The service's components array will be empty, causing the component lookup at line 107 of the service to always fail with "Component not found". 🔎 Proposed fix:
insertComponent: protectedProcedure .input(z.object({ + projectId: z.string(), componentId: z.string(), targetElementId: z.string(), position: z.enum(['before', 'after', 'inside', 'replace']), })) .mutation(async ({ input }) => { const service = new ComponentsPanelService(); + await service.initialize(input.projectId); return service.insertComponent( input.componentId, input.targetElementId, input.position ); }),packages/platform-extensions/src/figma/token-storage.ts-7-8 (1)
7-8: Hardcoded encryption key is a security risk. The ENCRYPTION_KEY is visible in source code and will be committed to version control. Even though the current implementation uses base64 (not encryption), this pattern sets a dangerous precedent. If this is placeholder code, consider adding a clear runtime error or using environment variables:
- private static readonly ENCRYPTION_KEY = 'onlook_figma_key'; // In production, use proper encryption + private static readonly ENCRYPTION_KEY = process.env.FIGMA_ENCRYPTION_KEY;apps/backend/supabase/migrations/0020_volatile_peter_parker.sql-204-204 (1)
204-204: ON DELETE no action for parent_id may leave orphaned comment threads. If a parent comment is deleted, child comments will be orphaned with invalid parent_id references. Consider ON DELETE CASCADE or ON DELETE SET NULL depending on desired behavior.
-ALTER TABLE "comments" ADD CONSTRAINT "comments_parent_id_comments_id_fk" FOREIGN KEY ("parent_id") REFERENCES "public"."comments"("id") ON DELETE no action ON UPDATE no action;
+ALTER TABLE "comments" ADD CONSTRAINT "comments_parent_id_comments_id_fk" FOREIGN KEY ("parent_id") REFERENCES "public"."comments"("id") ON DELETE CASCADE ON UPDATE no action;
103-106: The isArtboard check is unreliable because parent references are never populated. The Figma REST API doesn't include parent references, and the code never assigns them during traversal. The condition node.parent?.type === 'PAGE' will always be false. Either add parent references during parsing (see the sketch below) or revise the artboard detection logic to use a different approach (e.g., check constraints, layout properties, or rely on hierarchical position). Additionally, replace the any type with proper typing.
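A sketch of how parent references could be attached during traversal so that checks like node.parent?.type === 'PAGE' have something to inspect; FigmaNode here is a minimal illustrative type, not the parser's actual typings:
interface FigmaNode {
    id: string;
    type: string;
    children?: FigmaNode[];
    parent?: FigmaNode;
}

function attachParents(node: FigmaNode, parent?: FigmaNode): void {
    // Record the parent before descending into children.
    node.parent = parent;
    for (const child of node.children ?? []) {
        attachParents(child, node);
    }
}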
🟡 Minor comments (8)
DONT-CLICK-ON-THIS.HTML-33-52 (1)
33-52: Missing error handling for microphone permission denial. The getUserMedia call has no rejection handler. If the user denies microphone access, the animation won't start and there will be no feedback to the user. 🔎 Proposed fix for error handling:
-navigator.mediaDevices.getUserMedia({audio:true}).then(s=>{ +navigator.mediaDevices.getUserMedia({audio:true}).then(s=>{ const a=new AudioContext() const m=a.createMediaStreamSource(s) const n=a.createAnalyser() n.fftSize=512 m.connect(n) const d=new Uint8Array(n.frequencyBinCount) function f(){ n.getByteFrequencyData(d) let v=d.reduce((p,c)=>p+c,0)/d.length let j=(Math.random()-.5)*v*1.5 let k=(Math.random()-.5)*v*1.5 let r=(Math.random()-.5)*v*2 let sc=1+v/60 e.style.transform=`translate(${j}px,${k}px) rotate(${r}deg) scale(${sc})` e.style.filter=`blur(${v/18}px) hue-rotate(${v*4}deg)` requestAnimationFrame(f) } f() +}).catch(err=>{ +console.error('Microphone access denied:', err) +e.textContent='Microphone access required' })Committable suggestion skipped: line range outside the PR's diff.
apps/web/client/src/server/api/routers/figma.ts-35-49 (1)
35-49: Avoid z.any() - it violates coding guidelines. The properties, styles, and children arrays use z.any(), which bypasses type safety and violates the guideline to avoid the any type. Define proper schemas for these fields. As per coding guidelines, avoid using any unless necessary. Consider defining specific schemas. 🔎 Proposed fix:
convertComponents: protectedProcedure .input(z.object({ components: z.array(z.object({ id: z.string(), name: z.string(), type: z.string(), - properties: z.array(z.any()), - styles: z.array(z.any()), - children: z.array(z.any()), + properties: z.array(z.record(z.string(), z.unknown())), + styles: z.array(z.record(z.string(), z.unknown())), + children: z.array(z.lazy(() => componentSchema)), // or z.unknown() if recursive typing is complex })) }))Committable suggestion skipped: line range outside the PR's diff.
packages/db/src/schema/comment/comment.ts-49-54 (1)
49-54: Self-referential relations require a matching relationName for bidirectional mapping. The parent and replies relations both reference the comments table but don't use a shared relationName. Without this, Drizzle may not correctly associate parent comments with their replies. 🔎 Proposed fix:
+const COMMENT_PARENT_RELATION_NAME = 'commentParentChild'; + parent: one(comments, { fields: [comments.parentId], references: [comments.id], + relationName: COMMENT_PARENT_RELATION_NAME, }), -replies: many(comments), +replies: many(comments, { + relationName: COMMENT_PARENT_RELATION_NAME, +}),packages/platform-extensions/src/figma/client.ts-59-62 (1)
59-62: Await the recursive processQueue call to prevent stack buildup. The recursive call to processQueue() is not awaited, which could lead to unexpected behavior or stack growth on high-volume request processing. Suggested fix:
// Process next request - this.processQueue(); + await this.processQueue(); }apps/backend/supabase/migrations/0020_volatile_peter_parker.sql-29-29 (1)
29-29: line_number should be an integer, not varchar. Line numbers are numeric values. Using varchar prevents range queries and proper sorting.
- "line_number" varchar,
+ "line_number" integer,
320-322: getFileExtension returns the full filename when no extension exists. If the filename has no '.', lastIndexOf returns -1, and substring(-1) returns the last character of the string, which is incorrect. 🔎 Proposed fix:
private getFileExtension(name: string): string { - return name.substring(name.lastIndexOf('.')); + const lastDot = name.lastIndexOf('.'); + return lastDot !== -1 ? name.substring(lastDot) : ''; }packages/platform-extensions/src/figma/parser.ts-312-318 (1)
312-318: Potential division by zero in isIcon. If height is 0, the aspect ratio calculation will produce Infinity, which will pass the aspectRatio <= 2 check unexpectedly. 🔎 Proposed fix:
private static isIcon(node: any): boolean { const bounds = node.absoluteBoundingBox; if (!bounds) return false; const width = bounds.width; const height = bounds.height; + if (height === 0 || width === 0) return false; const aspectRatio = width / height; return width <= 100 && height <= 100 && aspectRatio >= 0.5 && aspectRatio <= 2; }packages/platform-extensions/src/assets/service.ts-316-318 (1)
316-318: sanitizeFileName can produce empty strings or collisions. If a filename consists entirely of special characters (e.g., 日本語.png), the regex will produce an empty string. Additionally, different filenames like foo bar.png and foo_bar.png will collide as foo-bar. 🔎 Proposed fix:
private sanitizeFileName(name: string): string { - return name.replace(/[^a-zA-Z0-9.-]/g, '-').toLowerCase(); + const baseName = name.substring(0, name.lastIndexOf('.')) || name; + const sanitized = baseName.replace(/[^a-zA-Z0-9.-]/g, '-').toLowerCase(); + return sanitized || `asset-${Date.now()}`; }Committable suggestion skipped: line range outside the PR's diff.
🧹 Nitpick comments (41)
DONT-CLICK-ON-THIS.HTML (1)
1-61: Consider removing this file from the repository.Per the PR description, this file was "added as a joke and may be deleted later." Non-production code should not be committed to the repository.
packages/db/src/schema/github/github-integration.ts (3)
19-19: Consider adding an enum constraint for the status field. The comment indicates specific valid values ('pending', 'created', 'merged', 'closed'), but the schema doesn't enforce them. Using an enum or check constraint would prevent invalid status values. 🔎 Proposed fix using a Drizzle enum:
At the top of the file, define the enum:
+import { pgEnum } from 'drizzle-orm/pg-core'; + +export const integrationStatusEnum = pgEnum('integration_status', ['pending', 'created', 'merged', 'closed']);Then use it in the schema:
- status: varchar('status').notNull().default('pending'), // 'pending', 'created', 'merged', 'closed' + status: integrationStatusEnum('status').notNull().default('pending'),
31-31: Consider adding a foreign key constraint for createdBy. The createdBy field appears to reference a user ID but lacks a foreign key constraint. Adding one would ensure referential integrity. If there's a users table, add the FK:
- createdBy: uuid('created_by').notNull(), // user id + createdBy: uuid('created_by').references(() => users.id, { onDelete: 'cascade' }).notNull(),
22-22: Consider adding schema validation for the changes JSONB field. The changes field stores an array of code changes but has no schema validation. Consider using Zod or a similar library to validate the structure at runtime, or document the expected shape in a TypeScript type.
packages/db/src/schema/asset/asset-reference.ts (2)
13-13: Consider using an integer type for lineNumber. Line numbers are typically integers. Using varchar allows invalid values and prevents numeric operations. Consider using an integer type or at least validate the format if string representation is required. 🔎 Proposed fix:
- lineNumber: varchar('line_number'), // line number in file
+ lineNumber: integer('line_number'), // line number in file
If you need to allow null, keep it optional:
+import { integer } from 'drizzle-orm/pg-core';
17-17: Consider adding an enum constraint for the referenceType field. The comment indicates specific valid values ('import', 'url', 'inline'), but the schema doesn't enforce them. Using an enum would prevent invalid reference types. 🔎 Proposed fix using a Drizzle enum:
At the top of the file, define the enum:
+import { pgEnum } from 'drizzle-orm/pg-core'; + +export const referenceTypeEnum = pgEnum('reference_type', ['import', 'url', 'inline']);Then use it in the schema:
- referenceType: varchar('reference_type').notNull(), // 'import', 'url', 'inline'
+ referenceType: referenceTypeEnum('reference_type').notNull(),
packages/platform-extensions/src/comments/service.ts (1)
150-170: Consider optimizing comment queries and adding pagination. Both getCommentsForElement and getProjectComments perform linear scans through all comments. As the number of comments grows, this will become a performance bottleneck. Additionally, there's no pagination, which could lead to memory issues and slow responses. Consider (a sketch of the first two ideas follows the list):
- Using indexed data structures (Map by elementId/projectId)
- Adding pagination parameters (limit, offset or cursor-based)
- Implementing database queries with proper indexes when persistence is added
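A rough sketch of a secondary index plus offset pagination for the in-memory version; it assumes the service keeps a comments Map keyed by id and a Comment type, which mirrors the review's description rather than the actual code:
private commentsByElement: Map<string, string[]> = new Map(); // elementId -> comment ids

getCommentsForElement(elementId: string, limit = 50, offset = 0): Comment[] {
    // Look up only the ids recorded for this element instead of scanning every comment.
    const ids = this.commentsByElement.get(elementId) ?? [];
    return ids
        .slice(offset, offset + limit)
        .map((id) => this.comments.get(id))
        .filter((c): c is Comment => c !== undefined);
}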
packages/db/src/schema/asset/asset.ts (1)
37-39: Consider adding a foreign key constraint on uploadedBy. The uploadedBy column stores a user UUID but lacks a foreign key reference to the users table. This could lead to orphaned references if users are deleted.
- uploadedBy: uuid('uploaded_by').notNull(), // user id
+ uploadedBy: uuid('uploaded_by').references(() => users.id, { onDelete: 'set null' }),
If users can be deleted while retaining asset history, consider making this nullable with
onDelete: 'set null'.
packages/platform-extensions/src/github/types.ts (1)
30-34: Consider making lastCommit optional for empty repositories. A newly created repository with no commits would not have a lastCommit. This could cause runtime issues if syncRepository is called on an empty repo.
export interface RepositoryState {
    branches: Branch[];
-   lastCommit: Commit;
+   lastCommit?: Commit;
    status: string;
}
packages/platform-extensions/src/github/service.ts (1)
120-128: Hardcoded 'main' as the base branch may fail for repositories using other defaults. Consider fetching the repository's default_branch instead of hardcoding 'main'.
+ // Get the default branch from repo info
+ const repoInfo = await this.getRepository(owner, repoName);
+ const baseBranch = repoInfo.default_branch;
  const response = await this.makeRequest(`/repos/${owner}/${repoName}/pulls`, {
      method: 'POST',
      body: JSON.stringify({
          title,
          body,
          head: branch,
-         base: 'main', // or master
+         base: baseBranch,
      }),
  });
4-4: Placeholder version 0.0.0. The package version is set to 0.0.0, which is a common placeholder. If this package is intended for internal use only within the monorepo, this is acceptable. However, if you plan to publish it to npm, update the version to a proper semantic version (e.g., 0.1.0).
4-29: Stub implementations will throw at runtime if called. All methods throw Error('Not implemented'). If this service is already wired into production code paths, any call will crash. Ensure these methods are not invoked until implemented, or consider returning safe defaults/empty results instead. Would you like me to open an issue to track the implementation of these methods, or should I generate placeholder implementations that return safe defaults?
packages/db/src/schema/mcp/mcp-config.ts (2)
26-26: Consider adding a foreign key constraint for setupBy. The setupBy field stores a user ID but lacks a foreign key reference. This could lead to orphaned references if users are deleted. 🔎 Proposed fix:
+import { users } from '../user'; + // Setup metadata -setupBy: uuid('setup_by').notNull(), // user id +setupBy: uuid('setup_by').references(() => users.id).notNull(),
28-29: updatedAt won't auto-update on row modifications. The field has defaultNow() for inserts but won't automatically update on subsequent writes. Ensure the application layer or a database trigger handles this, or use Drizzle's $onUpdate if available; a hedged sketch follows.
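If the project's drizzle-orm version supports it, the column-level $onUpdate hook is one way to handle this. Treat the table and column names below as illustrative, and verify the API against the version in use:
import { pgTable, timestamp, uuid } from 'drizzle-orm/pg-core';

export const mcpConfigs = pgTable('mcp_configs', {
    id: uuid('id').primaryKey().defaultRandom(),
    createdAt: timestamp('created_at', { withTimezone: true }).defaultNow().notNull(),
    // Refreshed automatically on every UPDATE issued through Drizzle.
    updatedAt: timestamp('updated_at', { withTimezone: true })
        .defaultNow()
        .notNull()
        .$onUpdate(() => new Date()),
});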
apps/web/client/src/server/api/routers/assets.ts (1)
31-44: Validate optimization options. The format and quality fields lack constraints. Invalid values could cause unexpected behavior or errors in the service layer. 🔎 Proposed fix:
optimizeAsset: protectedProcedure .input(z.object({ assetId: z.string(), options: z.object({ - format: z.string(), - quality: z.number().optional(), - width: z.number().optional(), - height: z.number().optional(), + format: z.enum(['png', 'jpg', 'jpeg', 'webp', 'avif']), + quality: z.number().min(1).max(100).optional(), + width: z.number().positive().max(8192).optional(), + height: z.number().positive().max(8192).optional(), }), }))packages/db/src/schema/figma/figma-asset.ts (1)
2-2: Unused import: text. The text type is imported but not used in this schema. 🔎 Proposed fix:
-import { pgTable, text, timestamp, uuid, varchar, integer, jsonb } from 'drizzle-orm/pg-core'; +import { pgTable, timestamp, uuid, varchar, integer, jsonb } from 'drizzle-orm/pg-core';apps/web/client/src/server/api/routers/figma.ts (1)
13-22: Consider server-side token storage instead of passing the token per request. Passing the Figma access token in every API request exposes it to logging, network inspection, and request replay. Consider storing the token server-side (encrypted in DB) after initial authentication and retrieving it from the user's session. This would change the pattern to:
This would change the pattern to:
// After authenticate succeeds, store token server-side // In subsequent procedures, retrieve from ctx/session rather than input .mutation(async ({ ctx }) => { const token = await getStoredFigmaToken(ctx.user.id); const service = new FigmaIntegrationService(); await service.authenticate(token); return service.importFile(input.fileId); })packages/db/src/schema/figma/figma-file.ts (1)
22-22: Consider adding a foreign key reference for importedBy. The importedBy field stores a user ID but lacks a foreign key constraint to the users table. This could lead to orphaned references if users are deleted. 🔎 Proposed fix:
+import { users } from '../user'; + // Import metadata importedAt: timestamp('imported_at', { withTimezone: true }).defaultNow().notNull(), -importedBy: uuid('imported_by').notNull(), // user id +importedBy: uuid('imported_by').references(() => users.id, { onDelete: 'set null' }),Note: Adjust the
onDelete behavior based on your data retention requirements.
packages/platform-extensions/src/figma/auth.ts (2)
54-60: Add a type annotation to avoid the implicit any from response.json(). Per coding guidelines, avoid using the any type. The response.json() call returns any, which propagates through the data access. 🔎 Proposed fix:
+interface FigmaTokenResponse { + access_token: string; + refresh_token: string; + expires_in?: number; +} + -const data = await response.json(); +const data: FigmaTokenResponse = await response.json();
50-52: Consider including the response body in error messages for better debugging. The error messages only include statusText, but Figma's API often returns detailed error information in the response body that would aid debugging. 🔎 Proposed fix:
if (!response.ok) { - throw new Error(`OAuth token exchange failed: ${response.statusText}`); + const errorBody = await response.text().catch(() => ''); + throw new Error(`OAuth token exchange failed: ${response.statusText}${errorBody ? ` - ${errorBody}` : ''}`); }Also applies to: 80-82
packages/db/src/schema/comment/comment.ts (2)
27-28: Missing foreign key constraint on parentId. The parentId field references another comment but lacks a foreign key constraint, which could allow invalid parent references. The comment indicates this will be "self-referenced in relations," but relations don't enforce referential integrity at the database level. 🔎 Proposed fix:
// Thread information -parentId: uuid('parent_id'), // for replies - will be self-referenced in relations +parentId: uuid('parent_id').references((): any => comments.id, { onDelete: 'cascade' }), threadId: uuid('thread_id'), // groups related commentsNote: The
(): any =>pattern is needed for self-referential FKs in Drizzle to avoid circular reference issues during initialization.
31-31: Consider adding a foreign key reference for userId. Similar to importedBy in figma-file.ts, userId stores a user ID without a foreign key constraint, which could lead to orphaned references.
packages/db/src/schema/mcp/mcp-server.ts (1)
22-22: Consider using a PostgreSQL enum for status. The status field uses a varchar with valid values documented in a comment. Using a proper enum would provide database-level validation and better type inference. 🔎 Proposed fix:
+import { pgEnum } from 'drizzle-orm/pg-core'; + +export const mcpServerStatusEnum = pgEnum('mcp_server_status', ['running', 'stopped', 'error']); + // Status enabled: boolean('enabled').notNull().default(true), -status: varchar('status').notNull().default('stopped'), // 'running', 'stopped', 'error' +status: mcpServerStatusEnum('status').notNull().default('stopped'),packages/db/src/schema/github/github-repository.ts (2)
15-15: Consider adding a unique constraint on githubRepoId scoped to project. A GitHub repository could potentially be connected to the same project twice without a uniqueness constraint. Consider adding a unique constraint on the (projectId, githubRepoId) combination. 🔎 Proposed fix using a composite unique index:
+import { uniqueIndex } from 'drizzle-orm/pg-core'; + }).enableRLS(); + +export const githubRepositoriesIndex = uniqueIndex('github_repo_project_unique') + .on(githubRepositories.projectId, githubRepositories.githubRepoId);Or inline in the table definition:
// After .enableRLS(), add: // Then create a migration to add: CREATE UNIQUE INDEX ON github_repositories(project_id, github_repo_id);
28-28: Consider adding a foreign key reference for connectedBy. Consistent with feedback on other files, connectedBy stores a user ID without an FK constraint.
packages/db/src/schema/figma/figma-component.ts (1)
12-12: Consider adding a unique constraint on figmaComponentId within file scope. Similar to GitHub repos, a Figma component could be imported twice to the same file without a uniqueness constraint on (figmaFileId, figmaComponentId); a sketch follows.
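A sketch of a composite unique index using pgTable's third argument; the table and column names follow the review's wording and should be adjusted to the actual schema:
import { pgTable, uuid, varchar, uniqueIndex } from 'drizzle-orm/pg-core';

export const figmaComponents = pgTable('figma_components', {
    id: uuid('id').primaryKey().defaultRandom(),
    figmaFileId: uuid('figma_file_id').notNull(),
    figmaComponentId: varchar('figma_component_id').notNull(),
}, (table) => ({
    // One row per Figma component per imported file.
    fileComponentUnique: uniqueIndex('figma_components_file_component_unique')
        .on(table.figmaFileId, table.figmaComponentId),
}));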
packages/platform-extensions/src/assets/types.ts (1)
23-28: Consider typing lineNumber as number instead of string. Line numbers are typically numeric values. Using string here may cause type inconsistencies when integrating with code analysis tools or IDEs that report line numbers as integers. Suggested fix:
export interface AssetReference { filePath: string; - lineNumber?: string; + lineNumber?: number; importStatement?: string; referenceType: 'import' | 'url' | 'inline'; }packages/platform-extensions/src/figma/asset-processor.ts (3)
313-320: Replaceconsole.logwith proper logging or remove placeholder.This debug output will appear in production. Consider using a structured logger or removing this placeholder entirely since the actual storage implementation is pending.
Suggested fix
private async storeAsset(buffer: Buffer, path: string): Promise<void> { // Placeholder for storage implementation // In production, this would: // - Upload to cloud storage (S3, GCS, etc.) // - Store in local file system // - Update database records - console.log(`Storing asset to ${path}, size: ${buffer.length} bytes`); + // TODO: Implement actual storage logic }
164-200: Consider using an image processing library for robust JPEG parsing. The comment on lines 165-166 acknowledges this is simplified. The current implementation may fail on edge cases (progressive JPEGs, truncated files, unusual marker ordering). For production use, libraries like sharp or image-size provide more reliable dimension extraction; a sketch with sharp follows.
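For example, dimension extraction via sharp is a short call and copes with cases manual marker parsing tends to miss; this assumes sharp is added as a dependency of the package:
import sharp from 'sharp';

async function getImageDimensions(buffer: Buffer): Promise<{ width?: number; height?: number }> {
    // sharp inspects the image header; it does not need to decode the full image.
    const { width, height } = await sharp(buffer).metadata();
    return { width, height };
}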
21-29: Silent error handling may mask systematic failures.While continuing on individual asset failures is reasonable, consider accumulating errors and returning them alongside successful results so callers can be informed of partial failures.
Suggested enhancement
- async processAssets(assets: FigmaAsset[]): Promise<ProcessedAsset[]> { + async processAssets(assets: FigmaAsset[]): Promise<{ processed: ProcessedAsset[]; errors: Array<{ assetId: string; error: string }> }> { const processedAssets: ProcessedAsset[] = []; + const errors: Array<{ assetId: string; error: string }> = []; for (const asset of assets) { try { const processed = await this.processAsset(asset); processedAssets.push(processed); } catch (error) { console.error(`Failed to process asset ${asset.id}:`, error); - // Continue with other assets + errors.push({ assetId: asset.id, error: String(error) }); } } - return processedAssets; + return { processed: processedAssets, errors }; }packages/platform-extensions/src/figma/service.ts (1)
159-166: Generated code uses the any type as a fallback. While this is for generated component code (not the service itself), consider using a more specific fallback like unknown, or documenting unhandled Figma property types so users know what to expect. Suggested enhancement:
private getTypeScriptType(figmaType: string): string { switch (figmaType) { case 'BOOLEAN': return 'boolean'; case 'TEXT': return 'string'; case 'INSTANCE_SWAP': return 'React.ReactNode'; - default: return 'any'; + default: return 'unknown'; // Unhandled Figma type } }packages/platform-extensions/src/figma/client.ts (3)
19-34: Replace any with a proper generic type constraint or unknown. The method signature uses any as the default type parameter, and the request queue is typed as Array<() => Promise<any>>. Per coding guidelines, avoid any unless necessary. Suggested fix:
- private requestQueue: Array<() => Promise<any>> = []; + private requestQueue: Array<() => Promise<unknown>> = []; ... - async request<T = any>(endpoint: string, options: RequestInit = {}): Promise<T> { + async request<T = unknown>(endpoint: string, options: RequestInit = {}): Promise<T> {As per coding guidelines, avoid using the
anytype.
104-164: Add proper return types to API methods instead of any. All API helper methods return Promise<any>. Consider defining response interfaces for type safety, or at minimum use Promise<unknown> to enforce type checking at call sites. Example for getMe:
+interface FigmaUser { + id: string; + email: string; + handle: string; + img_url: string; +} + - async getMe(): Promise<any> { + async getMe(): Promise<FigmaUser> { return this.request('/me'); }As per coding guidelines, avoid using the
anytype.
201-244: The parameter type should be unknown instead of any. Per coding guidelines, use unknown for parameters of uncertain type to enforce type checking. Suggested fix:
-export function parseFigmaError(error: any): { type: FigmaErrorType; message: string } { +export function parseFigmaError(error: unknown): { type: FigmaErrorType; message: string } { if (error instanceof FigmaApiError) { // ... existing logic } - if (error.message?.includes('fetch')) { + if (error instanceof Error && error.message?.includes('fetch')) { return { type: FigmaErrorType.NETWORK_ERROR, message: 'Network error. Please check your connection.', }; } return { type: FigmaErrorType.UNKNOWN_ERROR, - message: error.message || 'An unknown error occurred.', + message: error instanceof Error ? error.message : 'An unknown error occurred.', }; }apps/web/client/src/server/api/routers/components-panel.ts (2)
70-70: Avoid z.any() for defaultValue. Per coding guidelines, avoid using any unless absolutely necessary. Consider using a union type or z.unknown() with refinement. 🔎 Proposed fix:
- defaultValue: z.any().optional(), + defaultValue: z.union([z.string(), z.number(), z.boolean(), z.null()]).optional(),
6-12: Service instantiation pattern loses state between requests. Each procedure creates a new ComponentsPanelService instance. Since the service stores components in instance fields (this.components, this.categories), any state from initialize() is lost after the request completes. This pattern works only because each procedure re-initializes, but it's inefficient and will cause issues if you expect state persistence (e.g., custom components added via addCustomComponent won't persist). Consider either (a sketch of the first option follows the list):
- Using a singleton/cached service per project
- Persisting to database instead of in-memory storage
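A sketch of the first option, a module-level cache that reuses one initialized service per project; the cache key and lifetime policy here are assumptions:
const serviceCache = new Map<string, ComponentsPanelService>();

async function getComponentsPanelService(projectId: string): Promise<ComponentsPanelService> {
    let service = serviceCache.get(projectId);
    if (!service) {
        service = new ComponentsPanelService();
        // Initialize once and keep the instance around for later requests.
        await service.initialize(projectId);
        serviceCache.set(projectId, service);
    }
    return service;
}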
packages/platform-extensions/src/assets/service.ts (2)
4-4: Avoid the any type for the projectAssets map. Per coding guidelines, avoid using
anyunless necessary. Define a properAssetinterface for type safety.+interface Asset { + id: string; + projectId: string; + name: string; + originalName: string; + type: string; + format: string; + size: number; + metadata: Record<string, unknown>; + originalPath: string; + optimizedPath?: string; + thumbnailPath?: string; + url: string; + optimizedUrl?: string; + thumbnailUrl?: string; + uploadedAt: Date; + references: AssetReference[]; + folder?: string; +} export class AssetManagementService { - private projectAssets: Map<string, any[]> = new Map(); + private projectAssets: Map<string, Asset[]> = new Map();
280-286: Remove or guard console.log statements in production code. Multiple console.log calls throughout the file (lines 282, 293, 301, 309, 357, 361, 365) will pollute logs in production. These should be removed or replaced with a proper logging framework with appropriate log levels.
packages/platform-extensions/src/components/service.ts (1)
298-304: Remove console.log statements from production code. The simulateComponentInsertion and saveCustomComponent methods contain console.log calls that should be removed or replaced with proper logging.
packages/platform-extensions/src/figma/parser.ts (1)
10-15: Consider using Figma API types instead of any. The parser uses any extensively for Figma document nodes. Consider importing types from @figma/rest-api-spec or defining local interfaces to improve type safety and IDE support.
// Example: Define minimal interfaces for Figma nodes
interface FigmaNode {
    id: string;
    name: string;
    type: string;
    children?: FigmaNode[];
    fills?: FigmaFill[];
    strokes?: FigmaStroke[];
    // ... other properties
}
1-219: Consider adding indexes for frequently queried foreign keys. Tables like assets, comments, figma_files, and github_repositories will likely be queried by project_id. Without indexes, these queries will perform full table scans as data grows.
CREATE INDEX idx_assets_project_id ON assets(project_id);
CREATE INDEX idx_comments_project_id ON comments(project_id);
CREATE INDEX idx_comments_element_id ON comments(element_id);
CREATE INDEX idx_figma_files_project_id ON figma_files(project_id);
CREATE INDEX idx_github_repositories_project_id ON github_repositories(project_id);
CREATE INDEX idx_mcp_configs_project_id ON mcp_configs(project_id);
|
@DevFlex-AI, I've analyzed your PR and found one critical issue that will prevent maintainers from building, along with several other concerns:
🚨 Blocking Issue - Build Will Fail
Zod Version Problem: the new package pins a Zod major version that conflicts with the rest of the workspace. Fix required: "zod": "^3.23.0"
|
|
are you able to change the code to fix it, or do I have to make a new PR with the fixed code @coderabbitai, and also what should the zod version be?
|
@DevFlex-AI, I'm not able to directly commit code changes to your repository – you'll need to make the changes yourself. But the good news is you don't need a new PR! You can simply push fixes to this same PR branch.
🔧 The Fix
Zod version should be ^3.23.0: the latest stable version of Zod is in the 3.23.x range. Here's what you need to change in the package's package.json, line 24:
From: "zod": "^4.1.3"
To: "zod": "^3.23.0"
📝 How to Apply the Fix
This will update the same PR automatically – no need to create a new one! 🎉 Optional but recommended: while you're at it, you might want to remove the joke HTML file:
git rm DONT-CLICK-ON-THIS.HTML
git commit -m "remove joke HTML file"
git push
Once you push these fixes, the build should work for the maintainers! Let me know if you need any other help. 👍
        await service.authenticate(input.token);
        return service.extractDesignTokens(input.fileId);
    }),
}); (no newline at end of file)
The file is missing a newline at the end. Consider adding a newline to comply with common coding conventions.
| "@onlook/eslint": "*", | ||
| }, | ||
| }, | ||
| <<<<<<< HEAD |
There was a problem hiding this comment.
Unresolved merge conflict marker <<<<<<< HEAD found. Please resolve this conflict before merging the PR.
    },
},
<<<<<<< HEAD
=======
Unresolved merge conflict marker '=======' found. Please resolve this conflict before merging the PR.
| "typescript": "^5.5.4", | ||
| }, | ||
| }, | ||
| >>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73 |
There was a problem hiding this comment.
It appears that a merge conflict marker is still present (>>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73). Please remove this marker before merging.
"@neoconfetti/react": ["@neoconfetti/react@1.0.0", "", {}, "sha512-klcSooChXXOzIm+SE5IISIAn3bYzYfPjbX7D7HoqZL84oAfgREeSg5vSIaSFH+DaGzzvImTyWe1OyrJ67vik4A=="],
<<<<<<< HEAD
Unresolved merge conflict marker found (<<<<<<< HEAD). Please resolve the conflict and remove the merge marker.
| "@onlook/docs/eslint/strip-ansi/ansi-regex": ["ansi-regex@5.0.1", "", {}, "sha512-quJQXlTSUGL2LH9SUXo8VwsY4soanhgo6LNSm84E1LBcE8s3O0wpdiRzyR9z/ZZJMlMWv37qOOb9pdJlMUEKFQ=="], | ||
|
|
||
| <<<<<<< HEAD | ||
| ======= |
There was a problem hiding this comment.
This conflict marker (=======) should be removed after resolving the merge conflict.
"@onlook/web-client/next/sharp/semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
>>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73
There appears to be a merge conflict marker left over: >>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73. Please remove this marker to avoid build issues.
"@react-email/preview-server/tailwindcss/chokidar/readdirp": ["readdirp@3.6.0", "", { "dependencies": { "picomatch": "^2.2.1" } }, "sha512-hOS089on8RduqdbhvQ5Z37A0ESjsqz6qnRcffsMU3495FuTdqSm+7bhJ29JvIOsBDEEnan5DPu9t3To9VRlMzA=="],
<<<<<<< HEAD
Unresolved merge conflict marker found ('<<<<<<< HEAD'). Please resolve this conflict and remove all merge markers before merging.
"expo/pretty-format/@jest/schemas/@sinclair/typebox": ["@sinclair/typebox@0.27.8", "", {}, "sha512-+Fj43pSMwJs4KRrH/938Uf+uAELIgVBmQzg/q1YG10djyfA3TnrU8N8XzqCh/okZdszqBQTZf96idMfE5lnwTA=="],
<<<<<<< HEAD
Unresolved merge conflict marker <<<<<<< HEAD detected (and correspondingly, ======= on the following line). Please remove these markers and resolve the conflict before merging.
"fumadocs-ui/next/sharp/semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
>>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73
There's a leftover merge conflict marker (>>>>>>> 08a41a9ca19909effa8d5e6b459122bf03547e73) in the file. This should be removed before merging.
|
@coderabbitai I committed some changes, and can you tell your friend @ellipsis-dev to chill with the amount of conversation he is making
|
Got it! I'll keep my comments concise going forward. 👍 |
|
@DevFlex-AI, awesome work! 🎉 I can see you've pushed both fixes:
✅ Changes Applied Successfully
The blocking build issue is now resolved! Maintainers should be able to build the project without any problems. 🚀
About @ellipsis-dev
Regarding @ellipsis-dev – I appreciate the heads up, but I'm not able to control or communicate with other bots. Each tool operates independently. If the conversation volume from other bots is becoming an issue, you could:
Next Steps
Your PR is now in much better shape! The main things remaining are:
Great job getting these fixes in quickly! 💪
|
@ellipsis-dev, @coderabbitai I have the same question for both: if I added all the unselected features in the readme, would I get added to the organization on GitHub because I added their missing features?
|
Hi @DevFlex-AI! Thanks for your interest in contributing to Onlook. Questions about organization membership and contribution policies are best directed to the project maintainers. I'd recommend:
The maintainers will be able to give you the most accurate information about how contributions are recognized and what qualifies for organization membership. Great work on adding these features! |
|
@DevFlex-AI, that's a great question, but unfortunately I'm not able to make decisions about organization membership – that's entirely up to the repository maintainers and owners. 🙏
What I Can Tell You
Your contribution is substantial! You've added:
This is definitely meaningful work that adds value to the project! 🌟
What You Should Do
To find out about organization membership, I recommend:
Important Note
Different projects have different policies:
Only the maintainers can tell you what applies to this specific project! Keep up the great work, and good luck! 🚀
|
@ellipsis-dev can you just tell me whether I will be added or not?
|
PR #3077 Summary Additions:
Critical Issue:
Fixes Applied:
Current Status:
Organization Membership Question:
|
|
@ellipsis-dev who is the maintainer that can help me with this question?
|
@DevFlex-AI, based on the repository information, here are the maintainers who can help you:
🎯 Primary Maintainer
Kiet (GitHub: @Kitenite)
👥 Other Active Maintainers
📬 How to Contact
I recommend reaching out through these channels (in order of preference for your question):
💡 My Suggestion
Since you've made a substantial contribution, I'd recommend:
The README states they're "actively looking for contributors," which is a good sign! 🌟 Good luck, and thanks for your contribution to the project! 🚀
|
@coderabbitai find me exactly how many lines of new code I wrote over the last couple of days/weeks
|
Scope
Totals (since 2025-12-23)
Daily breakdown
Weekly (ISO)
If you want this for a different window (e.g., last 7/14/30 days) or limited strictly to PR |
|
@coderabbitai I am asking for the exact total
|
📊 Your Exact Total Lines of Code Written
Time Period: December 23, 2025 → January 25, 2026 (33 days)
Complete Totals:
What This Means:
Breakdown by Work Session:
🎯 Context for PR
|
|
@coderabbitai yes that does |
|
To recap your contribution:
Next steps for getting this merged:
Your breakdown document (PR_SUMMARY.md) is ready, the code builds cleanly, and the split strategy is clear. The ball is in the maintainer's court now. Hang in there - patience pays off in open source! 💪
|
@coderabbitai what if we just tell the maintainer to make a whole new branch named beta, so that if this doesn't go well they can just get rid of it easily without needing to check multiple PRs at once.
|
❌ Why a Separate Branch Doesn't Help
The Problem Isn't WHERE to Merge
The issue
A separate branch doesn't change:
Whether you merge into How Git Branches WorkA "beta" or "experimental" branch would be useful for:
But it doesn't reduce review burden - the maintainer still has to:
📉 Why This Approach Could Backfire
1. All-or-Nothing Risk
2. Delayed Value
3. Not Standard Practice
In open source:
Asking them to create a branch just for your PR is unusual and adds complexity.
4. Still Doesn't Address Feedback
Creating a different target branch doesn't follow this guidance.
✅ The Better Approach (Split PRs)
Why splitting is superior:
1. Incremental Value
Each PR delivers value immediately after merge.
2. Focused Review
Which would YOU rather review? 3. Lower Risk
4. Faster Iteration
Parallel progress instead of serial waiting. 5. Better Communication
🎯 What I RecommendInstead of asking for a special branch: Option A: Start Small (Best) # Create first focused PR (database schema) NOW
git checkout main && git pull
git checkout -b feature/db-schema
git checkout your-mega-pr-branch -- apps/backend/supabase/migrations/ packages/db/src/schema/
git add . && git commit -m "feat(db): platform extension schema + migrations"
git push -u origin feature/db-schema
gh pr create --title "feat(db): platform extension schema + migrations" \
--body "Part 1 of `#3077`. Foundation-only: tables, RLS, FKs. See PR_SUMMARY.md for context."Then ping
Option B: Wait for Guidance 💡 The RealityYour suggestion shows you're trying to make it easier for maintainers - that's great! 👍 But the actual barrier isn't technical (where to merge) - it's human (review time and cognitive load). Small PRs reduce cognitive load. A separate branch doesn't. 📊 Success Probability
The data strongly favors splitting. 🚀 Bottom LineI appreciate you're trying to find creative solutions! But the path of least resistance is actually the split approach - it's what maintainers expect, it's easier to review, and it gets features merged faster. My recommendation: Use the time you'd spend negotiating a special branch workflow to instead create that first focused PR. You could have it pushed in the next 10 minutes! Want me to generate the exact commands to create your first split PR? Just say "generate the split script" and I'll give you a ready-to-run workflow. 💪 |
|
@coderabbitai how long has it been since I opened this PR
|
📅 Exact Timeline
📊 Breakdown
😔 The Reality
51 days with no maintainer response is concerning. This suggests:
🎯 What You Should Do NOW
Option 1: Take Initiative (Recommended) ⚡
Don't wait any longer. Create the first split PR yourself:
# Create DB schema PR (foundation for everything)
git checkout main && git pull origin main
git checkout -b feature/platform-ext-db-schema
# Copy only schema files from your mega-PR branch
git checkout main -- \
apps/backend/supabase/migrations/0020_*.sql \
apps/backend/supabase/migrations/0021_*.sql \
apps/backend/supabase/migrations/meta/_journal.json \
packages/db/src/schema/asset/ \
packages/db/src/schema/comment/ \
packages/db/src/schema/figma/ \
packages/db/src/schema/github/ \
packages/db/src/schema/mcp/ \
packages/db/src/schema/index.ts \
packages/db/src/schema/project/project.ts
git add .
git commit -m "feat(db): add platform extension schema and migrations
Foundation for PR `#3077`. Adds tables for:
- assets & asset_references
- comments & comment_mentions
- figma_files, figma_assets, figma_components
- github_repositories, github_integrations
- mcp_configs, mcp_servers
Includes RLS policies, FK cascade fixes, and migration journal.
See PR_SUMMARY.md in `#3077` for full context."
git push -u origin feature/platform-ext-db-schemaThen create the PR: gh pr create \
--title "feat(db): platform extension schema and migrations" \
--body "Part 1 of `#3077` - Database Foundation
This PR extracts ONLY the database schema changes from the mega-PR `#3077`.
**What's included:**
- New tables: assets, comments, figma_files, github_repositories, mcp_configs
- Row-level security policies
- Foreign key cascade fixes
- Migration journal updates
**Why split:** Following maintainer feedback to break `#3077` into reviewable chunks.
**Next PRs:** After this merges, will submit individual feature services (assets, comments, figma, etc.)
**Review time:** ~15-20 minutes (vs. hours for the full PR)
See [PR_SUMMARY.md](link) for complete breakdown."
This forces action:
Option 2: Final Escalation Ping 📢
Post this on the mega-PR:
Option 3: Try Discord 💬
Join their Discord (https://discord.gg/hERDfFZCsH) and:
Option 4: Move On 🚶
After 51 days with no response:
💡 My Honest Assessment
51 days = Communication breakdown. Either:
Best bet: Create the split PRs anyway. Even if this one stalls, smaller PRs have a better shot. Probability at this point:
🚀 My Recommendation
Stop waiting. Take action. You've been patient enough! 💪 |
|
make me an .sh script for this @coderabbitai |
|
Tip For best results, initiate chat on the files or code changes.
#!/bin/bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Configuration
REPO="onlook-dev/onlook"
MEGA_PR_NUMBER="3077"
NEW_BRANCH="feature/platform-ext-db-schema"
BASE_BRANCH="main"
echo -e "${BLUE}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${BLUE}║ PR Split Script - Database Schema Extraction ║${NC}"
echo -e "${BLUE}╚════════════════════════════════════════════════════════╝${NC}"
echo
# Function to check if command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Check prerequisites
echo -e "${YELLOW}Checking prerequisites...${NC}"
if ! command_exists git; then
echo -e "${RED}✗ Git not found. Please install git.${NC}"
exit 1
fi
echo -e "${GREEN}✓ Git found${NC}"
if ! command_exists gh; then
echo -e "${RED}✗ GitHub CLI (gh) not found. Install from: https://cli.github.com/${NC}"
echo -e "${YELLOW} You can still run the script - PR creation will be manual.${NC}"
GH_AVAILABLE=false
else
echo -e "${GREEN}✓ GitHub CLI found${NC}"
GH_AVAILABLE=true
fi
# Prompt for mega PR branch name
echo
echo -e "${YELLOW}What is your mega PR branch name?${NC}"
read -p "Branch name (default: main): " MEGA_PR_BRANCH
MEGA_PR_BRANCH=${MEGA_PR_BRANCH:-main}
# Confirm repository state
echo
echo -e "${YELLOW}Current repository status:${NC}"
git status --short
echo
read -p "Continue with PR split? (y/n): " CONFIRM
if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
echo -e "${RED}Aborted by user.${NC}"
exit 0
fi
# Step 1: Fetch latest changes
echo
echo -e "${BLUE}[1/6] Fetching latest changes...${NC}"
git fetch origin
# Step 2: Switch to base branch and update
echo -e "${BLUE}[2/6] Updating ${BASE_BRANCH}...${NC}"
git checkout "$BASE_BRANCH"
git pull origin "$BASE_BRANCH"
# Step 3: Create new branch
echo -e "${BLUE}[3/6] Creating branch ${NEW_BRANCH}...${NC}"
if git show-ref --verify --quiet "refs/heads/$NEW_BRANCH"; then
echo -e "${YELLOW}Branch ${NEW_BRANCH} already exists.${NC}"
read -p "Delete and recreate? (y/n): " DELETE_CONFIRM
if [[ $DELETE_CONFIRM =~ ^[Yy]$ ]]; then
git branch -D "$NEW_BRANCH"
echo -e "${GREEN}✓ Deleted existing branch${NC}"
else
echo -e "${RED}Aborted.${NC}"
exit 1
fi
fi
git checkout -b "$NEW_BRANCH"
echo -e "${GREEN}✓ Created branch ${NEW_BRANCH}${NC}"
# Step 4: Copy schema files from mega PR branch
echo -e "${BLUE}[4/6] Copying database schema files...${NC}"
FILES_TO_COPY=(
"apps/backend/supabase/migrations/0020_volatile_peter_parker.sql"
"apps/backend/supabase/migrations/0021_optimal_pyro.sql"
"apps/backend/supabase/migrations/meta/_journal.json"
"packages/db/src/schema/asset/"
"packages/db/src/schema/comment/"
"packages/db/src/schema/figma/"
"packages/db/src/schema/github/"
"packages/db/src/schema/mcp/"
"packages/db/src/schema/index.ts"
"packages/db/src/schema/project/project.ts"
)
for file in "${FILES_TO_COPY[@]}"; do
if git cat-file -e "origin/${MEGA_PR_BRANCH}:${file}" 2>/dev/null; then
git checkout "origin/${MEGA_PR_BRANCH}" -- "$file"
echo -e "${GREEN} ✓ Copied: $file${NC}"
else
echo -e "${YELLOW} ⚠ Not found: $file (skipping)${NC}"
fi
done
# Step 5: Commit changes
echo -e "${BLUE}[5/6] Creating commit...${NC}"
COMMIT_MESSAGE="feat(db): add platform extension schema and migrations
Foundation for PR #${MEGA_PR_NUMBER}. Adds tables for:
- assets & asset_references
- comments & comment_mentions
- figma_files, figma_assets, figma_components
- github_repositories, github_integrations
- mcp_configs, mcp_servers
Includes RLS policies, FK cascade fixes, and migration journal.
See PR_SUMMARY.md in #${MEGA_PR_NUMBER} for full context."
git add .
git commit -m "$COMMIT_MESSAGE"
echo -e "${GREEN}✓ Committed changes${NC}"
# Show commit summary
echo
echo -e "${YELLOW}Commit summary:${NC}"
git show --stat HEAD
# Step 6: Push branch
echo
echo -e "${BLUE}[6/6] Pushing branch to origin...${NC}"
git push -u origin "$NEW_BRANCH"
echo -e "${GREEN}✓ Branch pushed successfully${NC}"
# Create PR
echo
if [ "$GH_AVAILABLE" = true ]; then
echo -e "${BLUE}Creating pull request...${NC}"
PR_BODY="Part 1 of #${MEGA_PR_NUMBER} - Database Foundation
This PR extracts ONLY the database schema changes from the mega-PR #${MEGA_PR_NUMBER}.
**What's included:**
- New tables: assets, comments, figma_files, github_repositories, mcp_configs
- Row-level security policies
- Foreign key cascade fixes
- Migration journal updates
**Why split:** Following maintainer feedback to break #${MEGA_PR_NUMBER} into reviewable chunks.
**Next PRs:** After this merges, will submit individual feature services (assets, comments, figma, etc.)
**Review time:** ~15-20 minutes (vs. hours for the full PR)
See PR_SUMMARY.md in #${MEGA_PR_NUMBER} for complete breakdown.
@drfarrell This is the first focused PR from my mega-PR, starting with just the database foundation."
PR_URL=$(gh pr create \
--repo "$REPO" \
--base "$BASE_BRANCH" \
--head "$NEW_BRANCH" \
--title "feat(db): platform extension schema and migrations" \
--body "$PR_BODY" \
2>&1) || true  # don't abort under set -e if gh fails; check the output instead
if [[ "$PR_URL" == http* ]]; then
echo -e "${GREEN}✓ Pull request created successfully!${NC}"
echo -e "${BLUE}URL: ${PR_URL}${NC}"
else
echo -e "${RED}✗ Failed to create PR via CLI${NC}"
echo -e "${YELLOW}Create manually at: https://github.com/${REPO}/compare/${NEW_BRANCH}${NC}"
fi
else
echo -e "${YELLOW}GitHub CLI not available. Create PR manually:${NC}"
echo -e "${BLUE}https://github.com/${REPO}/compare/${NEW_BRANCH}${NC}"
fi
# Summary
echo
echo -e "${GREEN}╔════════════════════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║ ✓ COMPLETE ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════════════════════╝${NC}"
echo
echo -e "${YELLOW}Next steps:${NC}"
echo "1. Review the PR and make any adjustments"
echo "2. Tag `@coderabbitai` with '@coderabbitai review' on the new PR"
echo "3. Post a comment on PR #${MEGA_PR_NUMBER} linking to this new PR"
echo "4. After this merges, create the next feature PR (assets/comments/etc)"
echo
echo -e "${BLUE}Branch: ${NEW_BRANCH}${NC}"
echo -e "${BLUE}Files changed: $(git diff --stat HEAD~1 | tail -1)${NC}"To use this script:
curl -o split-pr-db-schema.sh https://gist.githubusercontent.com/... # or copy-paste
chmod +x split-pr-db-schema.sh
./split-pr-db-schema.sh
What the script does:
Script features:
After running:
Want me to also create scripts for the other features (assets, comments, figma, etc.)? 🚀 |
|
@coderabbitai make it split all the features into different PRs
|
|
Tip For best results, initiate chat on the files or code changes.
#!/bin/bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m'
# Configuration
REPO="onlook-dev/onlook"
MEGA_PR_NUMBER="3077"
BASE_BRANCH="main"
echo -e "${MAGENTA}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║ PR Mega-Split Tool - All Features ║${NC}"
echo -e "${MAGENTA}║ Splitting PR #${MEGA_PR_NUMBER} into focused PRs ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════════════════════════╝${NC}"
echo
# Function to check if command exists
command_exists() {
command -v "$1" >/dev/null 2>&1
}
# Check prerequisites
echo -e "${YELLOW}Checking prerequisites...${NC}"
if ! command_exists git; then
echo -e "${RED}✗ Git not found. Please install git.${NC}"
exit 1
fi
echo -e "${GREEN}✓ Git found${NC}"
if ! command_exists gh; then
echo -e "${YELLOW}⚠ GitHub CLI (gh) not found. PRs will need manual creation.${NC}"
GH_AVAILABLE=false
else
echo -e "${GREEN}✓ GitHub CLI found${NC}"
GH_AVAILABLE=true
fi
# Prompt for mega PR branch name
echo
echo -e "${YELLOW}What is your mega PR branch name?${NC}"
read -p "Branch name: " MEGA_PR_BRANCH
if [ -z "$MEGA_PR_BRANCH" ]; then
echo -e "${RED}Branch name cannot be empty!${NC}"
exit 1
fi
# Define all features to split
declare -A FEATURES
# Feature 1: Database Schema
FEATURES[db-schema]="
BRANCH:feature/platform-ext-db-schema
TITLE:feat(db): platform extension schema and migrations
FILES:apps/backend/supabase/migrations/0020_volatile_peter_parker.sql
apps/backend/supabase/migrations/0021_optimal_pyro.sql
apps/backend/supabase/migrations/meta/_journal.json
packages/db/src/schema/asset/
packages/db/src/schema/comment/
packages/db/src/schema/figma/
packages/db/src/schema/github/
packages/db/src/schema/mcp/
packages/db/src/schema/index.ts
packages/db/src/schema/project/project.ts
COMMIT:feat(db): add platform extension schema and migrations
Foundation for PR #${MEGA_PR_NUMBER}. Adds tables for:
- assets & asset_references
- comments & comment_mentions
- figma_files, figma_assets, figma_components
- github_repositories, github_integrations
- mcp_configs, mcp_servers
Includes RLS policies, FK cascade fixes, and migration journal.
BODY:Part 1 of #${MEGA_PR_NUMBER} - Database Foundation
**What's included:**
- New tables: assets, comments, figma_files, github_repositories, mcp_configs
- Row-level security policies
- Foreign key cascade fixes
- Migration journal updates
**Why split:** Foundation for all other platform extension features.
**Dependencies:** None (can merge first)
**Review time:** ~15-20 minutes
"
# Feature 2: Asset Management
FEATURES[assets]="
BRANCH:feature/platform-ext-assets
TITLE:feat(assets): add asset management service
FILES:packages/platform-extensions/src/assets/
apps/web/client/src/server/api/routers/assets.ts
COMMIT:feat(assets): add asset management service
Part of PR #${MEGA_PR_NUMBER}. Implements:
- Asset upload with validation
- Asset optimization (format, quality, dimensions)
- Import statement generation
- Asset organization and reference tracking
Uses in-memory storage (can be migrated to S3/storage later).
BODY:Part 2 of #${MEGA_PR_NUMBER} - Asset Management
**What's included:**
- AssetManagementService with upload/optimize/organize
- Asset validation and metadata extraction
- Import statement generation for multiple languages
- Reference tracking across project files
**Dependencies:** Requires PR `#1` (database schema) to be merged first
**Review time:** ~20-25 minutes
"
# Feature 3: Comments System
FEATURES[comments]="
BRANCH:feature/platform-ext-comments
TITLE:feat(comments): add inline comments system
FILES:packages/platform-extensions/src/comments/
apps/web/client/src/server/api/routers/comments.ts
COMMIT:feat(comments): add inline comments system
Part of PR #${MEGA_PR_NUMBER}. Implements:
- Threaded comments with positioning
- User mentions and notifications
- Reply workflow
- Comment resolution and status tracking
Uses in-memory storage (ready for DB integration).
BODY:Part 3 of #${MEGA_PR_NUMBER} - Comments System
**What's included:**
- CommentService with create/reply/mention/resolve
- Thread management and positioning
- Mention extraction and notifications
- Element-level and project-level comment queries
**Dependencies:** Requires PR `#1` (database schema) to be merged first
**Review time:** ~15-20 minutes
"
# Feature 4: Components Panel
FEATURES[components]="
BRANCH:feature/platform-ext-components
TITLE:feat(components): add components panel service
FILES:packages/platform-extensions/src/components/
apps/web/client/src/server/api/routers/components-panel.ts
COMMIT:feat(components): add components panel service
Part of PR #${MEGA_PR_NUMBER}. Implements:
- Searchable component library
- Category filtering and organization
- Drag-and-drop support with validation
- Component insertion with position control
- Custom component creation
BODY:Part 4 of #${MEGA_PR_NUMBER} - Components Panel
**What's included:**
- ComponentsPanelService with search/filter/insert
- Built-in component library
- Drag-and-drop operation tracking
- Drop target validation
- Custom component support
**Dependencies:** None (standalone feature)
**Review time:** ~20-25 minutes
"
# Feature 5: Figma Integration
FEATURES[figma]="
BRANCH:feature/platform-ext-figma
TITLE:feat(figma): add Figma integration service
FILES:packages/platform-extensions/src/figma/
apps/web/client/src/server/api/routers/figma.ts
apps/web/client/src/env.ts
COMMIT:feat(figma): add Figma integration service
Part of PR #${MEGA_PR_NUMBER}. Implements:
- OAuth authentication flow
- File import and parsing
- Asset extraction with URL resolution
- Component conversion to React code
- Design token extraction
- Token storage (in-memory with DB migration path)
Requires FIGMA_CLIENT_ID and FIGMA_CLIENT_SECRET env vars.
BODY:Part 5 of #${MEGA_PR_NUMBER} - Figma Integration
**What's included:**
- FigmaIntegrationService with auth/import/extract
- FigmaApiClient with rate limiting
- FigmaFileParser for document parsing
- FigmaAssetProcessor for asset handling
- Component-to-code generation
- Design token extraction
**New env vars:** FIGMA_CLIENT_ID, FIGMA_CLIENT_SECRET (optional)
**Dependencies:** Requires PR `#1` (database schema) for figma_files table
**Review time:** ~30-35 minutes
"
# Feature 6: GitHub Integration
FEATURES[github]="
BRANCH:feature/platform-ext-github
TITLE:feat(github): add GitHub integration service
FILES:packages/platform-extensions/src/github/
COMMIT:feat(github): add GitHub integration service
Part of PR #${MEGA_PR_NUMBER}. Implements:
- Repository connection and access
- Branch creation from default branch
- PR creation with automated commits
- Repository synchronization
- Code change handling
BODY:Part 6 of #${MEGA_PR_NUMBER} - GitHub Integration
**What's included:**
- GitHubIntegrationService with repo/branch/PR management
- Authentication and access validation
- Automated commit creation from code changes
- Repository state synchronization
**Dependencies:** Requires PR `#1` (database schema) for github_repositories table
**Review time:** ~20-25 minutes
"
# Feature 7: MCP Integration
FEATURES[mcp]="
BRANCH:feature/platform-ext-mcp
TITLE:feat(mcp): add MCP integration service
FILES:packages/platform-extensions/src/mcp/
COMMIT:feat(mcp): add MCP integration service
Part of PR #${MEGA_PR_NUMBER}. Implements:
- MCP server configuration management
- Tool discovery and metadata generation
- Tool execution with validation
- Server lifecycle (setup/restart)
Includes unit tests for service methods.
BODY:Part 7 of #${MEGA_PR_NUMBER} - MCP Integration
**What's included:**
- MCPIntegrationService with setup/execute/restart
- In-memory MCP runtime simulation
- Tool metadata generation
- Auto-approve configuration
- Unit tests
**Dependencies:** Requires PR `#1` (database schema) for mcp_configs table
**Review time:** ~15-20 minutes
"
# Feature 8: Supporting Changes
FEATURES[support]="
BRANCH:feature/platform-ext-support
TITLE:feat(api): wire platform extension routers
FILES:apps/web/client/src/server/api/root.ts
packages/platform-extensions/src/index.ts
packages/platform-extensions/src/types/
packages/platform-extensions/package.json
packages/platform-extensions/tsconfig.json
packages/platform-extensions/eslint.config.js
.gitignore
packages/ai/src/tokens/index.ts
COMMIT:feat(api): wire platform extension routers and barrel exports
Part of PR #${MEGA_PR_NUMBER}. Adds:
- Router integration in root.ts
- Barrel exports for platform-extensions package
- Package configuration and tooling
- Common types and interfaces
BODY:Part 8 of #${MEGA_PR_NUMBER} - Supporting Infrastructure
**What's included:**
- New routers wired into apps/web/client/src/server/api/root.ts
- Platform extensions barrel exports
- Package manifest and build config
- Shared types and interfaces
- Minor token counting adjustment
**Dependencies:** Should merge after feature PRs (or alongside)
**Review time:** ~10-15 minutes
"
# Display feature menu
echo
echo -e "${CYAN}════════════════════════════════════════════════════════════${NC}"
echo -e "${CYAN}Features available to split:${NC}"
echo -e "${CYAN}════════════════════════════════════════════════════════════${NC}"
echo " 1. Database Schema (foundation) - MERGE FIRST"
echo " 2. Asset Management"
echo " 3. Comments System"
echo " 4. Components Panel"
echo " 5. Figma Integration"
echo " 6. GitHub Integration"
echo " 7. MCP Integration"
echo " 8. Supporting Changes (routers, exports)"
echo " 9. ALL (create all branches and PRs)"
echo " 0. Exit"
echo -e "${CYAN}════════════════════════════════════════════════════════════${NC}"
echo
read -p "Select features to create (e.g., 1,2,3 or 9 for all): " SELECTION
# Parse selection
SELECTED_FEATURES=()
if [[ "$SELECTION" == "9" ]]; then
SELECTED_FEATURES=("db-schema" "assets" "comments" "components" "figma" "github" "mcp" "support")
elif [[ "$SELECTION" == "0" ]]; then
echo -e "${YELLOW}Exiting...${NC}"
exit 0
else
IFS=',' read -ra NUMS <<< "$SELECTION"
for num in "${NUMS[@]}"; do
case "$num" in
1) SELECTED_FEATURES+=("db-schema") ;;
2) SELECTED_FEATURES+=("assets") ;;
3) SELECTED_FEATURES+=("comments") ;;
4) SELECTED_FEATURES+=("components") ;;
5) SELECTED_FEATURES+=("figma") ;;
6) SELECTED_FEATURES+=("github") ;;
7) SELECTED_FEATURES+=("mcp") ;;
8) SELECTED_FEATURES+=("support") ;;
esac
done
fi
if [ ${#SELECTED_FEATURES[@]} -eq 0 ]; then
echo -e "${RED}No features selected!${NC}"
exit 1
fi
echo
echo -e "${GREEN}Selected ${`#SELECTED_FEATURES`[@]} feature(s) to split${NC}"
echo
# Confirm
read -p "Continue and create branches/PRs? (y/n): " CONFIRM
if [[ ! $CONFIRM =~ ^[Yy]$ ]]; then
echo -e "${YELLOW}Aborted by user.${NC}"
exit 0
fi
# Fetch latest
echo
echo -e "${BLUE}Fetching latest changes...${NC}"
git fetch origin
# Update base branch
echo -e "${BLUE}Updating ${BASE_BRANCH}...${NC}"
git checkout "$BASE_BRANCH"
git pull origin "$BASE_BRANCH"
# Track created PRs
declare -a CREATED_PRS
# Process each selected feature
for FEATURE_KEY in "${SELECTED_FEATURES[@]}"; do
echo
echo -e "${MAGENTA}════════════════════════════════════════════════════════════${NC}"
echo -e "${MAGENTA}Processing: ${FEATURE_KEY}${NC}"
echo -e "${MAGENTA}════════════════════════════════════════════════════════════${NC}"
# Parse feature data
FEATURE_DATA="${FEATURES[$FEATURE_KEY]}"
BRANCH=$(echo "$FEATURE_DATA" | grep "^BRANCH:" | cut -d: -f2-)
TITLE=$(echo "$FEATURE_DATA" | grep "^TITLE:" | cut -d: -f2-)
# Keep the text after each marker ('1d' would drop the commit subject / first file)
COMMIT_MSG=$(echo "$FEATURE_DATA" | awk '/^COMMIT:/,/^BODY:/' | sed '$d' | sed '1s/^COMMIT://')
PR_BODY=$(echo "$FEATURE_DATA" | sed -n '/^BODY:/,$p' | sed '1s/^BODY://')
# Extract file list
FILES=$(echo "$FEATURE_DATA" | awk '/^FILES:/,/^COMMIT:/' | sed '$d' | sed '1s/^FILES://' | tr '\n' ' ')
echo -e "${BLUE}Branch: ${BRANCH}${NC}"
echo -e "${BLUE}Title: ${TITLE}${NC}"
echo
# Create branch
if git show-ref --verify --quiet "refs/heads/$BRANCH"; then
echo -e "${YELLOW}Branch ${BRANCH} exists. Deleting...${NC}"
git branch -D "$BRANCH"
fi
git checkout -b "$BRANCH"
echo -e "${GREEN}✓ Created branch${NC}"
# Copy files
echo -e "${BLUE}Copying files...${NC}"
for file in $FILES; do
if git cat-file -e "origin/${MEGA_PR_BRANCH}:${file}" 2>/dev/null; then
git checkout "origin/${MEGA_PR_BRANCH}" -- "$file" 2>/dev/null || true
echo -e "${GREEN} ✓ ${file}${NC}"
else
echo -e "${YELLOW} ⚠ Not found: ${file}${NC}"
fi
done
# Commit
git add .
if git diff --cached --quiet; then
echo -e "${YELLOW}⚠ No changes to commit for ${FEATURE_KEY}${NC}"
git checkout "$BASE_BRANCH"
continue
fi
git commit -m "$COMMIT_MSG"
echo -e "${GREEN}✓ Committed changes${NC}"
# Push
git push -u origin "$BRANCH" 2>&1 | grep -v "^remote:"
echo -e "${GREEN}✓ Pushed branch${NC}"
# Create PR
if [ "$GH_AVAILABLE" = true ]; then
echo -e "${BLUE}Creating pull request...${NC}"
PR_URL=$(gh pr create \
--repo "$REPO" \
--base "$BASE_BRANCH" \
--head "$BRANCH" \
--title "$TITLE" \
--body "$PR_BODY" 2>&1 | tail -1)
if [[ "$PR_URL" == http* ]]; then
echo -e "${GREEN}✓ PR created: ${PR_URL}${NC}"
CREATED_PRS+=("$FEATURE_KEY: $PR_URL")
else
echo -e "${RED}✗ Failed to create PR${NC}"
CREATED_PRS+=("$FEATURE_KEY: FAILED - create manually at https://github.com/${REPO}/compare/${BRANCH}")
fi
else
URL="https://github.com/${REPO}/compare/${BRANCH}"
echo -e "${YELLOW}Create PR manually: ${URL}${NC}"
CREATED_PRS+=("$FEATURE_KEY: $URL")
fi
# Return to base branch
git checkout "$BASE_BRANCH"
done
# Final summary
echo
echo -e "${MAGENTA}╔════════════════════════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║ ✓ ALL COMPLETE ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════════════════════════╝${NC}"
echo
echo -e "${GREEN}Created ${`#CREATED_PRS`[@]} PR(s):${NC}"
for pr in "${CREATED_PRS[@]}"; do
echo -e "${CYAN} • $pr${NC}"
done
echo
echo -e "${YELLOW}Next steps:${NC}"
echo "1. Review each PR and tag `@coderabbitai` with '@coderabbitai review'"
echo "2. Post a comment on PR #${MEGA_PR_NUMBER} with links to all new PRs"
echo "3. Recommend merge order: DB schema → Assets/Comments/Components → Figma/GitHub/MCP → Support"
echo "4. Close or convert PR #${MEGA_PR_NUMBER} to draft once all split PRs are created"
echo
echo -e "${BLUE}Suggested comment for PR #${MEGA_PR_NUMBER}:${NC}"
echo "---"
echo "@drfarrell I've split PR #${MEGA_PR_NUMBER} into focused PRs as suggested:"
echo
for pr in "${CREATED_PRS[@]}"; do
echo "- $pr"
done
echo
echo "Each PR is independently reviewable in 15-30 minutes."
echo "Recommended merge order: DB schema first, then others as you prefer."
echo "---"
echo
echo -e "${GREEN}Script complete! 🎉${NC}"To use:
curl -o split-all-prs.sh 'https://raw.githubusercontent.com/...' # I'll provide a gist
# Or just copy-paste the script above
chmod +x split-all-prs.sh
./split-all-prs.sh
What this script does: ✅ Creates 8 separate branches and PRs
Features included:
Estimated time: 5-10 minutes total for all PRs! 🚀 Want me to host this on a gist for easy downloading? |
|
@coderabbitai sure |
|
Tip For best results, initiate chat on the files or code changes.
Step 1 — Save the script locally:
curl -o split-all-prs.sh 'https://raw.githubusercontent.com/...' # (after you create the gist)
# OR just copy the script block below into a file named split-all-prs.sh
chmod +x split-all-prs.sh
Step 2 — Create your own gist:
Then you can download it anytime with: curl -o split-all-prs.sh https://gist.githubusercontent.com/<your-username>/<gist-id>/raw/split-all-prs.sh
chmod +x split-all-prs.sh
./split-all-prs.sh
Step 3 — Run it:
./split-all-prs.sh
# Enter your mega PR branch name when prompted
# Select 9 for ALL features
# Script handles the rest!
The full script is in my message just above — it's the large code block starting with #!/bin/bash.
Quick tip: Make sure you have the GitHub CLI installed and logged in (gh auth login) before running it.
|
@coderabbitai give me a list of all issues because i am sending a message through discord and before that i want to fix everything, so trigger another review and manually find problems and check if the features show in the ui
Top priorities (before you post in Discord)
Functional/behavioral notes
UI visibility: will the features “show in the UI”?
Quick fix checklist (actionable)
If you want, I can generate a tiny MCP router scaffold for you and a minimal Next.js client page snippet to exercise each endpoint. Triggering a fresh review now; I’ll follow up with any additional findings.
✅ Actions performed
Full review triggered. |
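For reference, a minimal sketch of what that MCP router scaffold could look like. It assumes the `MCPIntegrationService` exported from `@onlook/platform-extensions` and hypothetical `setup`/`execute`/`restart` signatures; the real service in this PR may differ.
```typescript
import { z } from 'zod';
import { createTRPCRouter, protectedProcedure } from '~/server/api/trpc';
import { MCPIntegrationService } from '@onlook/platform-extensions';

// Single shared instance so in-memory server state survives across calls.
const mcpService = new MCPIntegrationService();

export const mcpRouter = createTRPCRouter({
    // Register an MCP server config for a project (argument shape is assumed).
    setupServer: protectedProcedure
        .input(z.object({
            projectId: z.string().uuid(),
            name: z.string().min(1),
            command: z.string().min(1),
        }))
        .mutation(async ({ input }) =>
            mcpService.setup(input.projectId, { name: input.name, command: input.command })),

    // Execute a discovered tool by name with JSON arguments.
    executeTool: protectedProcedure
        .input(z.object({
            projectId: z.string().uuid(),
            tool: z.string().min(1),
            args: z.record(z.string(), z.unknown()),
        }))
        .mutation(async ({ input }) =>
            mcpService.execute(input.projectId, input.tool, input.args)),

    // Restart a configured server after its config changes.
    restartServer: protectedProcedure
        .input(z.object({ projectId: z.string().uuid(), name: z.string().min(1) }))
        .mutation(async ({ input }) => mcpService.restart(input.projectId, input.name)),
});
```
Wiring a router like this into root.ts, alongside the other new routers, is what would make the MCP feature reachable from the client.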
|
@coderabbitai i am gonna redo this whole pr. give me a prompt for an ai agent to integrate it into the ui safely, because i dont feel like coding this all from scratch
|
@coderabbitai review |
✅ Actions performed
Review triggered.
|
|
Tip For best results, initiate chat on the files or code changes.
This prompt gives any capable AI agent (Cursor Agent, Claude, Copilot Workspace, etc.) all the context it needs to wire up the UI safely without touching your backend code. Tip: Before running the agent, make sure:
ʕ •ᴥ•ʔ Good luck! Once the UI surfaces are added, those features will actually be visible and testable — which will make a much stronger case for the maintainers too. |
|
no, like code everything from scratch, like a new version based on the newest changes
|
@coderabbitai review |
✅ Actions performed
Review triggered.
|
There was a problem hiding this comment.
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (25)
packages/ai/src/tokens/index.ts-6-6 (1)
6-6: ⚠️ Potential issue | 🟠 Major
Breaks existing token-counting tests.
Changing `perMessageExtra` from `4` to `3` will fail all assertions in `packages/ai/test/tokens.test.ts`, which hardcode `4` (e.g., `encode(text).length + 4 + 2`, `encode(t1).length + encode(t2).length + 4 * 2 + 2`). Either revert this constant or update the tests in the same PR to use `3`. Also, this change appears unrelated to the Platform Extensions scope of this PR — consider dropping it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/ai/src/tokens/index.ts` at line 6, The perMessageExtra constant change from 4 to 3 in packages/ai/src/tokens/index.ts is breaking existing tests; either revert perMessageExtra back to 4 or update the tests that hardcode the +4 logic (see encode usages in packages/ai/test/tokens.test.ts) to expect +3; locate the symbol perMessageExtra in the tokens module and the assertions using encode(...) + 4 (and multi-message formulas like encode(t1).length + encode(t2).length + 4 * 2 + 2) and make the constant and all test expectations consistent (prefer reverting the change if it’s unrelated to the PR scope).packages/platform-extensions/src/figma/token-storage.ts-86-88 (1)
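As a hedged illustration of why the hardcoded `+ 4` expectations break, here is the shape the counting logic and tests share; `encode` and `countTokens` are stand-ins, not the package's actual API.
```typescript
// Stand-in: `encode` represents whatever tokenizer the package uses.
type Encoder = (text: string) => number[];

const perMessageExtra = 3; // was 4; tests that hardcode 4 now disagree
const perRequestExtra = 2;

// Mirrors the formula the tests assert: sum of encoded lengths,
// plus a fixed overhead per message, plus a fixed overhead per request.
function countTokens(encode: Encoder, messages: string[]): number {
    const content = messages.reduce((sum, m) => sum + encode(m).length, 0);
    return content + perMessageExtra * messages.length + perRequestExtra;
}

// A test asserting `encode(text).length + 4 + 2` fails once the constant is 3;
// referencing the exported constant instead keeps the expectations in sync.
```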
86-88: ⚠️ Potential issue | 🟠 Major
In-memory static Map makes token storage unsuitable beyond a single process.
`memoryStorage` is a module-level `Map` — tokens are lost on server restart and not shared across instances/replicas in a typical Next.js/serverless deployment. Users will effectively be logged out of Figma on every cold start, and different lambda invocations won't see the same tokens. Given `DatabaseTokenStorage` is scaffolded but unimplemented, this is the de facto production path. Recommend either gating the Figma flow behind a feature flag until `DatabaseTokenStorage` lands, or wiring to an existing persistent store (e.g., the `figma_integrations`/user-auth table implied by the new migrations) before release.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/token-storage.ts` around lines 86 - 88, The static in-memory Map (memoryStorage) in token-storage.ts is unsuitable for production because it loses tokens across restarts/replicas; replace it by implementing and wiring the scaffolded DatabaseTokenStorage (or gate the Figma flow behind a feature flag) so tokens persist in the existing persistent store (e.g., the figma_integrations/user-auth table referenced by migrations). Concretely: remove/stop using memoryStorage in TokenStorage usage, implement DatabaseTokenStorage methods to read/write tokens to the DB schema used by your migrations, and update the code that constructs/chooses the storage to instantiate DatabaseTokenStorage (or add a runtime feature flag check to disable Figma flows until DB-backed storage is available).apps/web/client/src/server/api/routers/comments.ts-6-76 (1)
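A minimal sketch of what a persistent implementation could look like, assuming a hypothetical `figma_tokens` table keyed by user and the project's existing Drizzle client; the import paths and column names are illustrative, not the schema in this PR.
```typescript
import { eq } from 'drizzle-orm';
// Assumed imports: the shared Drizzle client and a hypothetical tokens table.
import { db } from '@onlook/db/client';
import { figmaTokens } from '@onlook/db/schema';

export interface FigmaTokenRecord {
    accessToken: string;
    refreshToken: string;
    expiresAt: Date;
}

export class DatabaseTokenStorage {
    async get(userId: string): Promise<FigmaTokenRecord | null> {
        const rows = await db
            .select()
            .from(figmaTokens)
            .where(eq(figmaTokens.userId, userId))
            .limit(1);
        const row = rows[0];
        return row
            ? { accessToken: row.accessToken, refreshToken: row.refreshToken, expiresAt: row.expiresAt }
            : null;
    }

    async set(userId: string, token: FigmaTokenRecord): Promise<void> {
        // Upsert so re-authentication overwrites the previous token.
        await db
            .insert(figmaTokens)
            .values({ userId, ...token })
            .onConflictDoUpdate({ target: figmaTokens.userId, set: { ...token } });
    }

    async clear(userId: string): Promise<void> {
        await db.delete(figmaTokens).where(eq(figmaTokens.userId, userId));
    }
}
```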
6-76: ⚠️ Potential issue | 🟠 Major
Remove client-supplied authors and fix service instantiation to use shared state.
Three procedures accept user-controlled `userId` inputs (`createComment`, `addReply`, `mentionUser`), allowing any authenticated user to impersonate others. Additionally, each procedure instantiates a fresh `CommentService()` with empty instance Maps, causing mutations to be invisible to subsequent queries. Replace `input.userId` with `ctx.user.id` (available from `protectedProcedure` context) and either inject a singleton service or use persistent storage.
Suggested direction
createComment: protectedProcedure .input(z.object({ projectId: z.string(), elementId: z.string(), content: z.string(), position: z.object({ x: z.number(), y: z.number(), elementSelector: z.string(), pageUrl: z.string(), }), - userId: z.string(), })) - .mutation(async ({ input }) => { + .mutation(async ({ input, ctx }) => { const service = new CommentService(); return service.createComment( input.projectId, input.elementId, input.content, input.position, - input.userId + ctx.user.id ); }), addReply: protectedProcedure .input(z.object({ commentId: z.string(), content: z.string(), - userId: z.string(), })) - .mutation(async ({ input }) => { + .mutation(async ({ input, ctx }) => { const service = new CommentService(); - return service.addReply(input.commentId, input.content, input.userId); + return service.addReply(input.commentId, input.content, ctx.user.id); }), mentionUser: protectedProcedure .input(z.object({ commentId: z.string(), - userId: z.string(), })) - .mutation(async ({ input }) => { + .mutation(async ({ input, ctx }) => { const service = new CommentService(); - return service.mentionUser(input.commentId, input.userId); + return service.mentionUser(input.commentId, ctx.user.id); }),For the service instantiation issue, consider dependency injection or a singleton pattern rather than creating a new instance per call.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/client/src/server/api/routers/comments.ts` around lines 6 - 76, Replace client-supplied user IDs and the per-call CommentService instantiation: remove userId from the input schemas for createComment, addReply and mentionUser and use the authenticated user from the procedure context (ctx.user.id) when calling CommentService methods (e.g., createComment(..., ctx.user.id), addReply(..., ctx.user.id), mentionUser(..., ctx.user.id)); also stop creating new CommentService() inside each resolver — instead inject or import a single shared CommentService instance (or use your DI container) so the Maps/state inside CommentService are persistent across calls and mutations are visible to subsequent queries (target symbols: createComment, addReply, mentionUser, CommentService, getCommentThread/getCommentsForElement/getProjectComments where the shared service should be reused).packages/platform-extensions/package.json-13-35 (1)
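One possible shape for the shared-state half of this, sketched with a module-level singleton and the session user; the input schema names are illustrative rather than the router's exact fields.
```typescript
import { z } from 'zod';
import { createTRPCRouter, protectedProcedure } from '~/server/api/trpc';
import { CommentService } from '@onlook/platform-extensions';

// One instance per server process so every procedure sees the same in-memory Maps.
const commentService = new CommentService();

export const commentsRouter = createTRPCRouter({
    createComment: protectedProcedure
        .input(z.object({
            projectId: z.string().uuid(),
            elementId: z.string().min(1),
            content: z.string().min(1),
            position: z.object({
                x: z.number(),
                y: z.number(),
                elementSelector: z.string(),
                pageUrl: z.string().url(),
            }),
        }))
        .mutation(async ({ input, ctx }) =>
            // Author comes from the authenticated session, never from the client payload.
            commentService.createComment(
                input.projectId,
                input.elementId,
                input.content,
                input.position,
                ctx.user.id,
            ),
        ),
});
```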
13-35: ⚠️ Potential issue | 🟠 Major
Align package exports with the build output.
Line 13 builds only `dist/index.js`, but the `exports` field (lines 32-35) points to `src/*.ts` source files. In Node.js resolution, `exports` takes precedence over `main`, so package consumers will import uncompiled TypeScript instead of the built JavaScript artifact.
📦 Suggested fix
- "exports": { - ".": "./src/index.ts", - "./*": "./src/*/index.ts" - }, + "exports": { + ".": "./dist/index.js" + },If subpath exports are required, update the build to emit matching
`dist/*/index.js` files and export those instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/package.json` around lines 13 - 35, The package.json exports currently point at source TypeScript ("exports" entries "." and "./*": "./src/index.ts" and "./src/*/index.ts") while the build script ("build": "bun build src/index.ts --outfile=dist/index.js") emits only compiled JS in dist; update the exports to reference the compiled outputs (e.g., "." -> "./dist/index.js" and "./*" -> "./dist/*/index.js") or alternatively change the build script to output matching dist/*/index.js files for subpath exports so runtime resolution uses built JS instead of .ts sources.packages/platform-extensions/src/figma/api-client.ts-38-45 (1)
38-45: ⚠️ Potential issue | 🟠 Major
Bound the 429 retry loop and improve Retry-After parsing.
Line 44 recursively retries without a maximum attempt count, risking indefinite request hangs. Additionally, `parseInt` on an unexpected `Retry-After` value can produce `NaN` or parse incorrectly. Add a retry cap and safe delay parsing.
🛡️ Proposed bounded retry shape
- private async makeRequest(endpoint: string, options: RequestInit = {}): Promise<any> { + private async makeRequest<T>( + endpoint: string, + options: RequestInit = {}, + attempt = 0 + ): Promise<T> { @@ if (response.status === 429) { + if (attempt >= 3) { + throw new Error(`Figma API rate limit exceeded after ${attempt + 1} attempts`); + } // Rate limited, wait and retry const retryAfter = response.headers.get('Retry-After'); - const delay = retryAfter ? parseInt(retryAfter) * 1000 : 5000; + const retryAfterSeconds = retryAfter ? Number.parseInt(retryAfter, 10) : NaN; + const delay = Number.isFinite(retryAfterSeconds) ? retryAfterSeconds * 1000 : 5000; await new Promise(resolve => setTimeout(resolve, delay)); - return this.makeRequest(endpoint, options); + return this.makeRequest<T>(endpoint, options, attempt + 1); } @@ - return response.json(); + return response.json() as Promise<T>; }Figma's rate-limit guidance recommends respecting the exact
`Retry-After` header value to allow the bucket to leak sufficiently. The 3-attempt cap provides a defensive limit; adjust if your use case requires more resilience.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/api-client.ts` around lines 38 - 45, The 429 handling in makeRequest currently recurses without limit and uses parseInt unsafely; change makeRequest to accept an optional attempt counter (e.g., attempt = 0) and when response.status === 429 increment attempt and if attempt >= 3 stop retrying and throw or return a controlled error; parse the Retry-After header defensively by converting to Number, checking isFinite and >0, and falling back to a safe default (e.g., 5000 ms) before awaiting the delay; replace the recursive return this.makeRequest(...) with a call that passes the incremented attempt (this.makeRequest(endpoint, options, attempt + 1)) to enforce the cap.packages/platform-extensions/src/comments/service.ts-235-243 (1)
235-243:⚠️ Potential issue | 🟠 Major
`processMentions` passes usernames into `mentionUser`, which expects a `userId`.
`extractMentions` captures `@(\w+)` — i.e., display names/handles — and those strings are passed straight through to `mentionUser(commentId, userId)` and stored in `CommentNotification.userId`. Unless usernames are guaranteed to equal user IDs, mention notifications will target bogus IDs. Worth resolving handles to user IDs before dispatch (or renaming the field).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/comments/service.ts` around lines 235 - 243, processMentions is passing display handles from extractMentions into mentionUser which expects a userId, so resolve each username to its canonical userId (e.g., via a user lookup like this.userService.findByUsername / getUserByHandle) before calling mentionUser(commentId, userId); if the lookup fails, skip and log a warning. Update the call sites in processMentions and ensure CommentNotification.userId stores the resolved id (or explicitly rename the field to userHandle if you choose to keep handles), but prefer resolving handles to ids and only call mentionUser with the real userId.packages/platform-extensions/src/comments/service.ts-4-4 (1)
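A small sketch of resolving handles before dispatch; `findUserIdByUsername` is a hypothetical lookup passed in by the caller, not something this PR provides.
```typescript
// Resolve each @handle to a real user id, skipping handles that don't match a user.
async function notifyMentions(
    commentId: string,
    handles: string[],
    findUserIdByUsername: (handle: string) => Promise<string | null>,
    mentionUser: (commentId: string, userId: string) => Promise<void>,
): Promise<void> {
    for (const handle of handles) {
        const userId = await findUserIdByUsername(handle);
        if (!userId) {
            console.warn(`Skipping mention of unknown user @${handle}`);
            continue;
        }
        await mentionUser(commentId, userId);
    }
}
```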
4-4: 🛠️ Refactor suggestion | 🟠 Major
Replace `any` with the `CommentData` type.
commentsmap,getCommentsForElement/getProjectCommentsreturn types, and thesaveComment/persistCommentUpdateparameters are all typed asany, so the service loses the safety thatCommentDataalready provides. As per coding guidelines, avoidanyunless absolutely necessary. Switching toMap<string, CommentData>andPromise<CommentData[]>would also surface the missingparentIdshape used by the delete path.Also applies to: 149-149, 160-160, 259-259, 270-270
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/comments/service.ts` at line 4, The comments map and related APIs are using any and should be strictly typed to CommentData: change the private field comments to Map<string, CommentData>, update getCommentsForElement and getProjectComments return types to Promise<CommentData[]> (or CommentData[] if synchronous), and change the parameter types of saveComment and persistCommentUpdate to accept CommentData (or Partial<CommentData> as appropriate). Also ensure the delete path and any logic expecting a parentId uses the parentId shape from CommentData (e.g., string | null) so type-checking surfaces missing fields; update all usages in the class (comments, getCommentsForElement, getProjectComments, saveComment, persistCommentUpdate) accordingly.apps/web/client/src/server/api/routers/figma.ts-35-49 (1)
35-49: 🛠️ Refactor suggestion | 🟠 Major
Avoid
`z.any()` in input schemas and consider authenticating `convertComponents`.
properties,styles, andchildrenare typed asz.any(), which defeats input validation and producesany[]on the server. As per coding guidelines, avoidanyunless absolutely necessary — define proper shapes (even permissive ones likez.record(z.string(), z.unknown())/z.unknown()) so the router actually validates structure.- Unlike the other procedures,
convertComponentsdoesn't accept/apply a token. Ifservice.convertComponentsever touches the Figma API (or any authenticated path), it will silently use an unauthenticated client. If it's intentionally pure/local, a brief comment would help future maintainers.Proposed fix
- components: z.array(z.object({ - id: z.string(), - name: z.string(), - type: z.string(), - properties: z.array(z.any()), - styles: z.array(z.any()), - children: z.array(z.any()), - })) + components: z.array(z.object({ + id: z.string(), + name: z.string(), + type: z.string(), + properties: z.array(z.unknown()), + styles: z.array(z.unknown()), + children: z.array(z.unknown()), + }))🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/client/src/server/api/routers/figma.ts` around lines 35 - 49, The convertComponents mutation uses z.any() for properties/styles/children and lacks authentication; change the input schema in convertComponents to use concrete, validated shapes (e.g., properties: z.array(z.record(z.string(), z.unknown())), styles: z.array(z.record(z.string(), z.unknown())), children: z.array(z.unknown()) or a defined component shape) so the router performs real validation, and either require/apply the same auth/token parameter as other procedures (accept a token in the input or switch to protectedProcedure that injects ctx.session/token) before calling FigmaIntegrationService.convertComponents, or add a clear comment in the router and FigmaIntegrationService.convertComponents that the operation is purely local and does not call the Figma API.apps/web/client/src/server/api/routers/assets.ts-7-18 (1)
7-18: ⚠️ Potential issue | 🟠 Major
Tighten Zod validation before decoding or mutating assets.
IDs should be UUIDs, upload payloads need size/base64 bounds, and image options should be positive bounded integers.
`Buffer.from(input.fileData, 'base64')` currently allocates from an unbounded string before any size check, so a protected endpoint can still be used for memory pressure.
🛡️ Proposed validation hardening
import { z } from 'zod'; +import { TRPCError } from '@trpc/server'; import { createTRPCRouter, protectedProcedure } from '~/server/api/trpc'; import { AssetManagementService } from '@onlook/platform-extensions'; +const MAX_ASSET_BYTES = 10 * 1024 * 1024; +const uuidSchema = z.string().uuid(); +const base64Schema = z + .string() + .max(Math.ceil((MAX_ASSET_BYTES * 4) / 3) + 4) + .regex(/^[A-Za-z0-9+/]*={0,2}$/); + export const assetsRouter = createTRPCRouter({ uploadAsset: protectedProcedure .input(z.object({ - projectId: z.string(), - fileName: z.string(), - fileType: z.string(), - fileSize: z.number(), - fileData: z.string(), // base64 encoded + projectId: uuidSchema, + fileName: z.string().min(1).max(255), + fileType: z.string().min(1).max(100), + fileSize: z.number().int().positive().max(MAX_ASSET_BYTES), + fileData: base64Schema, })) .mutation(async ({ input }) => { const service = new AssetManagementService(); // Convert base64 payload to a File-like object for the service const buffer = Buffer.from(input.fileData, 'base64'); + if (buffer.length !== input.fileSize) { + throw new TRPCError({ + code: 'BAD_REQUEST', + message: 'Decoded file size does not match fileSize', + }); + } const file = { name: input.fileName, type: input.fileType, @@ optimizeAsset: protectedProcedure .input(z.object({ - assetId: z.string(), + assetId: uuidSchema, options: z.object({ - format: z.string(), - quality: z.number().optional(), - width: z.number().optional(), - height: z.number().optional(), + format: z.enum(['png', 'jpg', 'jpeg', 'webp', 'svg']), + quality: z.number().int().min(1).max(100).optional(), + width: z.number().int().positive().max(10000).optional(), + height: z.number().int().positive().max(10000).optional(), }), })) @@ updateAssetReferences: protectedProcedure - .input(z.object({ assetId: z.string() })) + .input(z.object({ assetId: uuidSchema })) @@ getProjectAssets: protectedProcedure - .input(z.object({ projectId: z.string() })) + .input(z.object({ projectId: uuidSchema })) @@ deleteAsset: protectedProcedure - .input(z.object({ assetId: z.string() })) + .input(z.object({ assetId: uuidSchema })) @@ organizeAssets: protectedProcedure .input(z.object({ - projectId: z.string(), - assetIds: z.array(z.string()), - folderName: z.string(), + projectId: uuidSchema, + assetIds: z.array(uuidSchema).min(1), + folderName: z.string().min(1).max(255), }))As per coding guidelines, Use publicProcedure/protectedProcedure from apps/web/client/src/server/api/trpc.ts and validate inputs with Zod.
Also applies to: 31-40, 56-82
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/client/src/server/api/routers/assets.ts` around lines 7 - 18, Tighten the Zod schema used in the mutation before any decoding: change the .input(z.object({...})) to require projectId as z.string().uuid(), fileName as z.string().min(1), fileType as z.string().min(1), fileSize as z.number().int().positive().max(<reasonable_max_bytes>), and fileData as z.string().min(1).refine(s => /^[A-Za-z0-9+/]+={0,2}$/.test(s) && s.length <= <max_base64_length>, { message: "invalid or too large base64" }); also validate any image option fields as z.number().int().positive().max(...). Ensure you run these validations before calling Buffer.from(input.fileData, 'base64') and switch the procedure wrapper to the appropriate publicProcedure/protectedProcedure from trpc.ts; keep the AssetManagementService usage but only create/convert the buffer after input passes Zod checks.packages/platform-extensions/src/figma/asset-processor.ts-84-91 (1)
84-91: ⚠️ Potential issue | 🟠 Major
Bound external downloads before buffering.
`downloadAsset` has no timeout and buffers the whole response, so a slow or oversized asset response can tie up workers or exhaust memory.
🛡️ Proposed bounded download pattern
+ private static readonly MAX_ASSET_BYTES = 25 * 1024 * 1024; + private static readonly DOWNLOAD_TIMEOUT_MS = 15_000; + private async downloadAsset(url: string): Promise<Buffer> { - const response = await fetch(url); + const controller = new AbortController(); + const timeout = setTimeout(() => controller.abort(), FigmaAssetProcessor.DOWNLOAD_TIMEOUT_MS); + + const response = await fetch(url, { signal: controller.signal }); if (!response.ok) { + clearTimeout(timeout); throw new Error(`Failed to download asset: ${response.statusText}`); } + + const contentLength = Number(response.headers.get('content-length') ?? 0); + if (contentLength > FigmaAssetProcessor.MAX_ASSET_BYTES) { + clearTimeout(timeout); + throw new Error(`Asset exceeds maximum size: ${contentLength} bytes`); + } - const arrayBuffer = await response.arrayBuffer(); - return Buffer.from(arrayBuffer); + try { + const arrayBuffer = await response.arrayBuffer(); + if (arrayBuffer.byteLength > FigmaAssetProcessor.MAX_ASSET_BYTES) { + throw new Error(`Asset exceeds maximum size: ${arrayBuffer.byteLength} bytes`); + } + return Buffer.from(arrayBuffer); + } finally { + clearTimeout(timeout); + } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/asset-processor.ts` around lines 84 - 91, downloadAsset currently fetches and buffers the entire response without a timeout or size limit, risking hung workers or OOM; update downloadAsset to use an AbortController with a configurable timeout and stream response.body instead of arrayBuffer(), reading chunks into a growing buffer while enforcing a maximum byte limit and aborting/throwing if the timeout triggers or the total exceeds the limit, and ensure any resources are cleaned up on abort; refer to the downloadAsset function and its local variables (response, arrayBuffer) when making these changes.packages/platform-extensions/src/figma/asset-processor.ts-242-282 (1)
242-282: ⚠️ Potential issue | 🟠 Major
Do not publish “optimized” variants until bytes actually change.
`createOptimizedVersion` and `createThumbnail` return the original buffer, so callers get duplicate files labeled as optimized/thumbnail. Omit those variants or throw until real transforms are wired in.
🔧 Proposed safe placeholder behavior
// For images, create optimized versions if (format === 'png' || format === 'jpg') { - // In production, use image optimization libraries like sharp - // For now, we'll just create placeholder optimized versions - - if (fileInfo.size > 100 * 1024) { // > 100KB - result.optimized = await this.createOptimizedVersion(buffer, format); - } - - if (fileInfo.dimensions && (fileInfo.dimensions.width > 200 || fileInfo.dimensions.height > 200)) { - result.thumbnail = await this.createThumbnail(buffer, format); - } + // TODO: Wire real optimization/thumbnail generation before exposing variant URLs. + // Until then, only return the original asset to avoid misleading downstream consumers. }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/asset-processor.ts` around lines 242 - 282, The current createOptimizedVersion and createThumbnail return the original buffer causing duplicate "optimized"/"thumbnail" assets; update behavior so we don't publish variants unless bytes actually change: either have createOptimizedVersion/createThumbnail throw a NotImplementedError until real transforms are wired, or (preferred) keep them returning a Buffer but change optimizeAsset to compare the returned Buffer to the original (using Buffer.compare or equivalent) and only set result.optimized or result.thumbnail when the returned buffer differs from the original; reference optimizeAsset, createOptimizedVersion, createThumbnail and OptimizedAsset when making the change.apps/web/client/src/server/api/routers/assets.ts-14-85 (1)
14-85: ⚠️ Potential issue | 🟠 Major
Pass caller identity via context to enforce per-user asset authorization.
The router passes caller-supplied `projectId` and `assetId` to `AssetManagementService` without verifying the caller owns those resources. The service does not receive user context and does not enforce project membership or asset ownership — only `assetId` is used to look up assets in the in-memory map, allowing any authenticated user to access, modify, or delete another user's assets.
- Passing
ctx(which contains user identity) to service methods:return service.uploadAsset(file, input.projectId, ctx.userId);- Service verifies caller is a member of the project before allowing operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/web/client/src/server/api/routers/assets.ts` around lines 14 - 85, The router is forwarding caller-supplied projectId/assetId into AssetManagementService methods without the caller identity; update every call site (uploadAsset, optimizeAsset, generateImportStatement, updateAssetReferences, getProjectAssets, deleteAsset, organizeAssets) to pass the request context user id (e.g., ctx.userId) as an additional argument, and update the AssetManagementService method signatures to accept this caller id and enforce per-user authorization (verify project membership/asset ownership) before performing lookups or mutations.packages/platform-extensions/src/figma/auth.ts-20-29 (1)
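A hedged sketch of such a guard, assuming a hypothetical `isProjectMember` lookup against the memberships table; only the TRPCError usage is a known API here.
```typescript
import { TRPCError } from '@trpc/server';

type MembershipCheck = (userId: string, projectId: string) => Promise<boolean>;

// Throws FORBIDDEN unless the caller belongs to the project; call this at the top
// of every asset mutation/query before touching AssetManagementService.
export async function assertProjectMember(
    isProjectMember: MembershipCheck,
    userId: string,
    projectId: string,
): Promise<void> {
    const allowed = await isProjectMember(userId, projectId);
    if (!allowed) {
        throw new TRPCError({ code: 'FORBIDDEN', message: 'Not a member of this project' });
    }
}

// Usage inside a protectedProcedure handler (illustrative):
// await assertProjectMember(isProjectMember, ctx.user.id, input.projectId);
// return service.uploadAsset(file, input.projectId, ctx.user.id);
```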
20-29: ⚠️ Potential issue | 🟠 Major
OAuth `state` parameter should be required, not optional.
stateparameter is essential for CSRF protection in OAuth flows. Making it optional allows callers to initiate authentication without binding the request to a session, weakening security. Either require thestateparameter explicitly or implement server-side state generation and validation before returning the authorization URL.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/auth.ts` around lines 20 - 29, The getAuthorizationUrl function currently accepts an optional state which weakens CSRF protection; change its signature to require state (getAuthorizationUrl(state: string): string) and remove the optional handling so the URL always includes the provided state, ensuring callers must supply a session-bound CSRF token; if you prefer server-side generation instead, implement state generation/validation inside getAuthorizationUrl and still ensure a non-empty state is included in the returned URL (refer to getAuthorizationUrl and the URLSearchParams construction).packages/db/src/schema/figma/figma-file.ts-16-16 (1)
16-16: ⚠️ Potential issue | 🟠 Major

Scope Figma file uniqueness to the project.

`figmaFileId.unique()` makes a Figma file importable by only one project globally. Since this table is project-scoped, use a composite unique constraint on `(projectId, figmaFileId)` instead.

🔧 Proposed schema change
-import { pgTable, text, timestamp, uuid, varchar, jsonb } from 'drizzle-orm/pg-core'; +import { pgTable, text, timestamp, uuid, varchar, jsonb, uniqueIndex } from 'drizzle-orm/pg-core'; @@ - figmaFileId: varchar('figma_file_id').notNull().unique(), + figmaFileId: varchar('figma_file_id').notNull(), @@ -}).enableRLS(); +}, (table) => [ + uniqueIndex('figma_files_project_figma_file_id_idx').on(table.projectId, table.figmaFileId), +]).enableRLS();🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/schema/figma/figma-file.ts` at line 16, The figma_file table currently sets figmaFileId as globally unique via figmaFileId.unique(), which prevents the same Figma file from being imported into multiple projects; change this to a composite unique constraint on (projectId, figmaFileId) instead. Locate the table definition that declares figmaFileId and projectId in the file where figmaFileId: varchar('figma_file_id').notNull().unique() is defined and remove the single-column unique() call, then add a composite unique constraint (or index) that covers both projectId and figmaFileId so uniqueness is enforced per project rather than globally. Ensure the schema migration or table builder API you use (the module that defines uniques/indexes for this table) is used to create the combined constraint and update any related model/index references accordingly.packages/db/src/schema/comment/comment.ts-49-53 (1)
49-53: ⚠️ Potential issue | 🟠 Major

Add `relationName` to disambiguate the self-relation for the comment thread hierarchy.

The `parent` and `replies` relations both point `comments` back at itself without explicit relation names. Drizzle requires `relationName` for self-relations to correctly resolve the inverse relation. Without it, queries accessing `comments.parent` or `comments.replies` will fail or be ambiguous. See the `conversation.ts` schema for the correct pattern with `messages`.

🔧 Proposed relation fix
export const COMMENT_PROJECT_RELATION_NAME = 'commentProject'; +export const COMMENT_PARENT_RELATION_NAME = 'commentParent'; export const commentRelations = relations(comments, ({ one, many }) => ({ project: one(projects, { fields: [comments.projectId], references: [projects.id], relationName: COMMENT_PROJECT_RELATION_NAME, }), parent: one(comments, { fields: [comments.parentId], references: [comments.id], + relationName: COMMENT_PARENT_RELATION_NAME, }), - replies: many(comments), + replies: many(comments, { + relationName: COMMENT_PARENT_RELATION_NAME, + }), mentions: many(commentMentions), }));🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/db/src/schema/comment/comment.ts` around lines 49 - 53, The self-relations for the Comment schema (parent and replies on comments using comments.parentId -> comments.id) need an explicit relationName to disambiguate the inverse; update the one(comments, { fields: [comments.parentId], references: [comments.id] }) and many(comments) declarations to include a matching relationName (same string on both sides, e.g. "commentThread") following the pattern used by messages in conversation.ts so that comments.parent and comments.replies resolve correctly.packages/platform-extensions/src/github/service.ts-101-143 (1)
101-143: ⚠️ Potential issue | 🟠 Major

Hardcoded base branch 'main' will break PRs for repos using other defaults.

The PR creation uses `base: 'main'` with a "// or master" comment, but you've already fetched `repoInfo.default_branch` in `createBranch`. Use the actual default branch (fetch it via `getRepository` here too, or thread it through from `createBranch`); otherwise PR creation fails for repos whose default isn't `main`.

🔧 Proposed fix
const [owner, repoName] = repo.split('/'); - + const repoInfo = await this.getRepository(owner, repoName); + // First commit the changes const commit = await this.commitChanges(repo, branch, changes); // Create pull request const title = `Onlook Platform Extensions Update`; const body = this.generatePRDescription(changes); const response = await this.makeRequest(`/repos/${owner}/${repoName}/pulls`, { method: 'POST', body: JSON.stringify({ title, body, head: branch, - base: 'main', // or master + base: repoInfo.default_branch, }), });🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/github/service.ts` around lines 101 - 143, The createPullRequest function currently hardcodes base: 'main'; instead fetch the repository's actual default branch (e.g., call getRepository(owner, repoName) to read repoInfo.default_branch or accept/pass the default from createBranch) and set base to that value when building the POST body for makeRequest; update the payload in createPullRequest to use repoInfo.default_branch (or a passed-in baseBranch) so PRs target the repo's real default branch.packages/platform-extensions/src/figma/client.ts-80-86 (1)
80-86: ⚠️ Potential issue | 🟠 Major

Unbounded retry on HTTP 429 can loop indefinitely and honors an untrusted header without caps.
Two risks:
- If the Figma API keeps returning 429 (persistent throttling, bad token, abuse detection),
makeRequestrecurses forever — there is no retry budget or overall deadline.Retry-Aftercomes from the response and is used directly as a sleep duration; a malicious/misbehaving proxy could supply a very large value and hang the caller.Add a retry cap and a max-wait ceiling, e.g.:
- private async makeRequest<T>(endpoint: string, options: RequestInit = {}): Promise<T> { + private async makeRequest<T>(endpoint: string, options: RequestInit = {}, retries = 0): Promise<T> { const url = endpoint.startsWith('http') ? endpoint : `${this.baseUrl}${endpoint}`; ... if (response.status === 429) { + if (retries >= 3) { + throw new FigmaApiError('Rate limit exceeded after retries', 429, await response.text()); + } const retryAfter = response.headers.get('Retry-After'); - const waitTime = retryAfter ? parseInt(retryAfter) * 1000 : 60000; + const parsed = retryAfter ? parseInt(retryAfter, 10) : NaN; + const waitTime = Math.min(Number.isFinite(parsed) ? parsed * 1000 : 60000, 120_000); await this.sleep(waitTime); - return this.makeRequest<T>(endpoint, options); + return this.makeRequest<T>(endpoint, options, retries + 1); }Also adds an explicit radix to
parseInt.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/client.ts` around lines 80 - 86, The 429 handling in makeRequest currently recurses without limits and trusts the Retry-After header; add a retry budget and a ceiling on wait time to prevent infinite loops and maliciously large waits. Modify makeRequest to accept/track a retryCount (or use a private MAX_RETRIES constant) and when response.status === 429 increment and bail with an error after MAX_RETRIES; parse the Retry-After with parseInt(retryAfter, 10) and cap the computed wait to a MAX_WAIT_MS constant before calling sleep; ensure the retry path returns an awaited call to makeRequest with the updated retryCount so recursion terminates when budget is exhausted.packages/platform-extensions/src/assets/service.ts-205-210 (1)
205-210: ⚠️ Potential issue | 🟠 Major

Do not publicly serve active content without sanitization.
`image/svg+xml` and `text/javascript` are accepted and then exposed via a public URL. Unsanitized SVG/JS uploads can become script-execution or phishing vectors when opened or embedded.

Safer default
const allowedTypes = [ - 'image/jpeg', 'image/png', 'image/gif', 'image/svg+xml', 'image/webp', + 'image/jpeg', 'image/png', 'image/gif', 'image/webp', 'video/mp4', 'video/webm', 'application/pdf', - 'text/css', 'text/javascript', + 'text/css', ];Also applies to: 280-285
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/assets/service.ts` around lines 205 - 210, The allowedTypes array currently permits unsafe types ('image/svg+xml' and 'text/javascript') that are later exposed via public URLs; remove these MIME types from allowedTypes and instead handle SVG/JS uploads through a safe sanitization pipeline before public exposure: for SVGs run an SVG sanitizer (or rasterize to PNG) and for JS either reject or store as non-executable (e.g., serve with Content-Type=text/plain and Content-Disposition: attachment) in the upload/serve flow (refer to the allowedTypes constant and the asset upload/serve handlers that read it). Ensure validation rejects unsanitized unsafe types by default and only serves sanitized outputs publicly.packages/platform-extensions/src/assets/service.ts-64-72 (1)
64-72: ⚠️ Potential issue | 🟠 Major

Persist assets outside the service instance.

The service records uploaded assets only in an instance-local `Map`, while the "database" helpers only log. Follow-up calls such as `getProjectAssets()`, `optimizeAsset()`, or `deleteAsset()` will not see previously uploaded assets once a new service instance handles the request.

Also applies to: 151-153, 356-366
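As a rough illustration of the repository-backed shape this could take (the `AssetRecord` fields and `AssetRepository` interface are assumptions, not the existing schema):

```ts
// Sketch only: AssetRecord and AssetRepository are assumed shapes; wire them to the
// real drizzle tables (or another shared store) when implementing persistence.
interface AssetRecord {
    id: string;
    projectId: string;
    fileName: string;
    url: string;
}

interface AssetRepository {
    save(asset: AssetRecord): Promise<void>;
    listByProject(projectId: string): Promise<AssetRecord[]>;
    remove(assetId: string): Promise<void>;
}

class AssetManagementService {
    constructor(private readonly repo: AssetRepository) {}

    async saveAsset(asset: AssetRecord): Promise<void> {
        // Persist first; any in-memory map is only a read-through cache.
        await this.repo.save(asset);
    }

    async getProjectAssets(projectId: string): Promise<AssetRecord[]> {
        // Read from the shared store so a fresh instance sees earlier uploads.
        return this.repo.listByProject(projectId);
    }

    async deleteAsset(assetId: string): Promise<void> {
        await this.repo.remove(assetId);
    }
}
```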
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/assets/service.ts` around lines 64 - 72, The service currently keeps uploaded assets only in the instance-local Map projectAssets and saveAssetToDatabase just logs, so new instances won’t see prior uploads; change saveAssetToDatabase to persist assets to a real/shared store (e.g., a simple persistent in-memory DB module, file, or real DB) and make the service load assets from that store on initialization (or have getProjectAssets/optimizeAsset/deleteAsset read/update the shared store instead of relying solely on projectAssets). Specifically: implement real persistence in saveAssetToDatabase and a corresponding loadAssetsForProject helper, update the upload path that uses projectAssets.push(asset) to await saveAssetToDatabase(asset) and refresh projectAssets from the persistent store, and modify getProjectAssets, optimizeAsset, and deleteAsset to read/update the persistent store (and then update the local projectAssets cache) so assets survive across service instances.packages/platform-extensions/src/assets/service.ts-12-19 (1)
12-19: ⚠️ Potential issue | 🟠 Major

Avoid duplicating extensions in storage paths.

`sanitizeFileName(file.name)` keeps the extension, then Line 17 appends `extension` again, producing paths like `asset-...-logo.png.png`. `getFileExtension()` also returns the full filename when there is no dot.

Proposed fix
- const sanitizedName = this.sanitizeFileName(file.name); - const extension = this.getFileExtension(file.name); + const extension = this.getFileExtension(file.name).toLowerCase(); + const baseName = extension ? file.name.slice(0, -extension.length) : file.name; + const sanitizedName = this.sanitizeFileName(baseName);private getFileExtension(name: string): string { - return name.substring(name.lastIndexOf('.')); + const lastDot = name.lastIndexOf('.'); + return lastDot > 0 ? name.slice(lastDot) : ''; }Also applies to: 316-322
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/assets/service.ts` around lines 12 - 19, sanitizeFileName(file.name) currently preserves the extension so when you later append extension (from getFileExtension) the paths can end up with duplicate extensions; update the path construction in the block that creates originalPath/optimizedPath/thumbnailPath to first compute extension = this.getFileExtension(file.name) (empty string if none) and a baseName that is the sanitized filename with that extension removed (e.g. baseName = this.sanitizeFileName(file.name).replace(new RegExp(`${extension}$`), '') ), then build paths using baseName + extension so names won’t repeat; ensure getFileExtension returns '' when there is no dot or guard against that so no leading "undefined" or duplicate extension appears.packages/platform-extensions/src/components/service.ts-137-151 (1)
137-151: ⚠️ Potential issue | 🟠 Major

Do not report custom components as saved until they are persisted.

`addCustomComponent()` returns a created component, but `saveCustomComponent()` only logs. If the service is re-created or `initialize()` runs again, the custom component is lost.

Minimal safe fallback until persistence is wired
private async saveCustomComponent(projectId: string, component: ComponentDefinition): Promise<void> { - // This would save to database - console.log(`Saving custom component ${component.name} for project ${projectId}`); + throw new Error('Custom component persistence is not implemented'); }Also applies to: 309-312
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/components/service.ts` around lines 137 - 151, addCustomComponent currently mutates in-memory state before the custom component is actually persisted, risking loss on re-create; change it to call saveCustomComponent first and only push to this.components and recalc this.categories after saveCustomComponent successfully completes (and ensure saveCustomComponent throws on failure or returns a persisted representation), and apply the same ordering/fix to the other occurrence around lines 309-312; also ensure initialize() loads persisted custom components via the same persistence API (e.g., a loadCustomComponents method) so in-memory state is reconstructed from the persisted store.apps/backend/supabase/migrations/0020_volatile_peter_parker.sql-65-78 (1)
65-78: ⚠️ Potential issue | 🟠 Major

Scope Figma file uniqueness to the project.

`figma_file_id` is globally unique, so two projects cannot import the same Figma file. Use a composite uniqueness constraint on `(project_id, figma_file_id)` instead.

Proposed fix
- CONSTRAINT "figma_files_figma_file_id_unique" UNIQUE("figma_file_id") + CONSTRAINT "figma_files_project_id_figma_file_id_unique" UNIQUE("project_id", "figma_file_id")🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/supabase/migrations/0020_volatile_peter_parker.sql` around lines 65 - 78, The UNIQUE constraint on figma_files currently enforces uniqueness only on figma_file_id (CONSTRAINT "figma_files_figma_file_id_unique"), which prevents the same Figma file from being imported into multiple projects; change this to a composite unique constraint on (project_id, figma_file_id) by removing or altering the existing constraint and adding a new UNIQUE("project_id","figma_file_id") constraint on the figma_files table (update the CREATE TABLE statement or add a migration that drops CONSTRAINT "figma_files_figma_file_id_unique" and creates the composite constraint).apps/backend/supabase/migrations/0020_volatile_peter_parker.sql-185-373 (1)
185-373: ⚠️ Potential issue | 🟠 Major

Tighten RLS role and identity checks.

These policies exclude project members from reading/using collaborative data, even though `user_has_project_access()` defaults to `owner/admin/member` in `apps/backend/supabase/migrations/0006_rls.sql`. Inserts also allow callers to spoof audit columns such as `uploaded_by`, `user_id`, `imported_by`, `connected_by`, `created_by`, and `setup_by`.

Example policy hardening
CREATE POLICY "assets_select_policy" ON assets FOR SELECT TO authenticated -USING (user_has_project_access(project_id, ARRAY['owner', 'admin'])); +USING (user_has_project_access(project_id, ARRAY['owner', 'admin', 'member'])); CREATE POLICY "assets_insert_policy" ON assets FOR INSERT TO authenticated -WITH CHECK (user_has_project_access(project_id, ARRAY['owner', 'admin'])); +WITH CHECK ( + user_has_project_access(project_id, ARRAY['owner', 'admin', 'member']) + AND uploaded_by = auth.uid() +);CREATE POLICY "comments_insert_policy" ON comments FOR INSERT TO authenticated -WITH CHECK (user_has_project_access(project_id, ARRAY['owner', 'admin'])); +WITH CHECK ( + user_has_project_access(project_id, ARRAY['owner', 'admin', 'member']) + AND user_id = auth.uid() +);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/backend/supabase/migrations/0020_volatile_peter_parker.sql` around lines 185 - 373, The RLS policies currently rely on user_has_project_access (which defaults to include 'member') and allow insertion of audit columns that callers can spoof; update each policy (e.g., assets_select_policy, assets_insert_policy, assets_update_policy, assets_delete_policy, asset_references_select_policy, asset_references_mutation_policy, comments_* policies, comment_mentions_policy, figma_files_policy, figma_assets_policy, figma_components_policy, github_repositories_policy, github_integrations_policy, mcp_configs_policy, mcp_servers_policy) to explicitly require only roles ['owner','admin'] in USING/WITH CHECK expressions (instead of relying on the function default) and for INSERT/UPDATE WITH CHECK clauses add checks that audit columns (uploaded_by, user_id, imported_by, connected_by, created_by, setup_by, etc.) equal auth.uid() (or the appropriate session principal) so callers cannot spoof those fields; ensure EXISTS-based policies propagate the tightened role check when joining related tables.packages/platform-extensions/src/assets/service.ts-118-135 (1)
118-135: ⚠️ Potential issue | 🟠 Major

Use the correct asset path property for generating bundler-compatible imports.
generateImportStatement()currently passesasset.url(a full HTTPS URL from cloud storage) tocalculateRelativePath(), which returns it unchanged. This generates invalid imports likeimport logo from 'https://assets.onlook.com/assets/...'that bundlers cannot resolve.Use
asset.originalPathinstead—it contains the local project path—and improvecalculateRelativePath()to calculate proper relative paths from the target file to the asset location.Proposed direction
- const relativePath = this.calculateRelativePath(filePath, asset.url); + const relativePath = this.calculateRelativePath(filePath, asset.originalPath);private calculateRelativePath(fromPath: string, toPath: string): string { - // Simple relative path calculation - return toPath; + // TODO: resolve against the project root/source tree and return a bundler-safe relative import. + return toPath.startsWith('.') ? toPath : `.${toPath}`; }Also applies to: 344-347
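If it helps, here is one way a real relative-path computation could look with `node:path`; the assumption that both arguments are project-root-relative paths is mine, not the service's documented contract:

```ts
import path from 'node:path';

// Sketch: assumes `fromFile` and `assetPath` are both relative to the project root,
// e.g. 'src/components/Hero.tsx' and 'public/assets/logo.png'.
function calculateRelativePath(fromFile: string, assetPath: string): string {
    const fromDir = path.posix.dirname(fromFile.split(path.sep).join('/'));
    const normalizedAsset = assetPath.split(path.sep).join('/');
    let relative = path.posix.relative(fromDir, normalizedAsset);
    // Bundlers expect an explicit './' prefix for non-parent relative imports.
    if (!relative.startsWith('.')) {
        relative = `./${relative}`;
    }
    return relative;
}

// Example: emits "import logo from '../../public/assets/logo.png';"
const importPath = calculateRelativePath('src/components/Hero.tsx', 'public/assets/logo.png');
console.log(`import logo from '${importPath}';`);
```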
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/assets/service.ts` around lines 118 - 135, generateImportStatement currently feeds asset.url (a remote HTTPS URL) into calculateRelativePath which returns it unchanged and produces bundler-unresolvable imports; change generateImportStatement to use asset.originalPath (the local project path) when computing relativePath, and update calculateRelativePath to compute a proper relative path from the target filePath to the asset.originalPath (handling absolute vs relative inputs and normalizing path separators), so imports become bundler-compatible (e.g., importName from './assets/...'). Ensure references to generateImportStatement, calculateRelativePath, asset.url and asset.originalPath are updated and any callers expecting URLs continue to work by preserving asset.url elsewhere.packages/platform-extensions/src/figma/service.ts-171-182 (1)
171-182: ⚠️ Potential issue | 🟠 Major

Generate valid React style objects.

Line 174 generates CSS declarations inside a JavaScript object literal, producing invalid syntax. Figma properties like `background-color` and values like `#fff` will result in malformed code that fails to execute. React inline styles require camelCase properties (`backgroundColor`) and proper object formatting.
JSON.stringify()for safety, and fix the object syntax:Proposed fix
+ private toReactStyleProperty(property: string): string { + return property.replace(/-([a-z])/g, (_, letter: string) => letter.toUpperCase()); + } + private generateComponentStyles(component: FigmaComponent): string { const className = this.sanitizeComponentName(component.name).toLowerCase(); const styles = component.styles - .map(style => ` ${style.property}: ${style.value};`) + .map(style => ` ${JSON.stringify(this.toReactStyleProperty(style.property))}: ${JSON.stringify(style.value)},`) .join('\n'); return ` -const styles = { +const styles: Record<string, React.CSSProperties> = { ${className}: { ${styles} - } + }, };`; }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/platform-extensions/src/figma/service.ts` around lines 171 - 182, The generated code in generateComponentStyles produces invalid JS by emitting CSS-style hyphenated properties and raw values (e.g., "background-color: `#fff`;") into an object literal; update generateComponentStyles to convert each style.property to camelCase (e.g., backgroundColor) and emit each entry as a proper JS key/value pair with the value wrapped via JSON.stringify(style.value) and a trailing comma, so component.styles mapping becomes something like `${camelProp}: ${JSON.stringify(style.value)},` when building the ${className} object; use the existing sanitizeComponentName(component.name) for className and perform the hyphen-to-camel conversion for each style before joining to produce a valid JavaScript object literal returned by the function.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 27c43e7d-18eb-4a99-96c6-31987fe44993
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (66)
.gitignore, PR_SUMMARY.md, apps/backend/supabase/migrations/0020_volatile_peter_parker.sql, apps/backend/supabase/migrations/0021_optimal_pyro.sql, apps/backend/supabase/migrations/meta/0020_snapshot.json, apps/backend/supabase/migrations/meta/0021_snapshot.json, apps/backend/supabase/migrations/meta/_journal.json, apps/web/client/src/env.ts, apps/web/client/src/server/api/root.ts, apps/web/client/src/server/api/routers/assets.ts, apps/web/client/src/server/api/routers/comments.ts, apps/web/client/src/server/api/routers/components-panel.ts, apps/web/client/src/server/api/routers/figma.ts, packages/ai/src/tokens/index.ts, packages/db/src/schema/asset/asset-reference.ts, packages/db/src/schema/asset/asset.ts, packages/db/src/schema/asset/index.ts, packages/db/src/schema/comment/comment-mention.ts, packages/db/src/schema/comment/comment.ts, packages/db/src/schema/comment/index.ts, packages/db/src/schema/figma/figma-asset.ts, packages/db/src/schema/figma/figma-component.ts, packages/db/src/schema/figma/figma-file.ts, packages/db/src/schema/figma/index.ts, packages/db/src/schema/github/github-integration.ts, packages/db/src/schema/github/github-repository.ts, packages/db/src/schema/github/index.ts, packages/db/src/schema/index.ts, packages/db/src/schema/mcp/index.ts, packages/db/src/schema/mcp/mcp-config.ts, packages/db/src/schema/mcp/mcp-server.ts, packages/db/src/schema/project/project.ts, packages/platform-extensions/eslint.config.js, packages/platform-extensions/package.json, packages/platform-extensions/src/assets/index.ts, packages/platform-extensions/src/assets/service.ts, packages/platform-extensions/src/assets/types.ts, packages/platform-extensions/src/comments/index.ts, packages/platform-extensions/src/comments/service.ts, packages/platform-extensions/src/comments/types.ts, packages/platform-extensions/src/components/index.ts, packages/platform-extensions/src/components/service.ts, packages/platform-extensions/src/components/types.ts, packages/platform-extensions/src/figma/api-client.ts, packages/platform-extensions/src/figma/asset-processor.ts, packages/platform-extensions/src/figma/auth.ts, packages/platform-extensions/src/figma/client.ts, packages/platform-extensions/src/figma/index.ts, packages/platform-extensions/src/figma/parser.ts, packages/platform-extensions/src/figma/service.ts, packages/platform-extensions/src/figma/token-storage.ts, packages/platform-extensions/src/figma/types.ts, packages/platform-extensions/src/github/index.ts, packages/platform-extensions/src/github/service.ts, packages/platform-extensions/src/github/types.ts, packages/platform-extensions/src/index.ts, packages/platform-extensions/src/mcp/__tests__/mcp.service.test.ts, packages/platform-extensions/src/mcp/index.ts, packages/platform-extensions/src/mcp/service.ts, packages/platform-extensions/src/mcp/types.ts, packages/platform-extensions/src/types/index.ts, packages/platform-extensions/src/universal/index.ts, packages/platform-extensions/src/universal/project-service.ts, packages/platform-extensions/src/universal/style-service.ts, packages/platform-extensions/src/universal/types.ts, packages/platform-extensions/tsconfig.json
export const componentsPanelRouter = createTRPCRouter({
    initialize: protectedProcedure
        .input(z.object({ projectId: z.string() }))
        .mutation(async ({ input }) => {
            const service = new ComponentsPanelService();
            await service.initialize(input.projectId);
            return { success: true };
        }),

    getCategories: protectedProcedure
        .input(z.object({ projectId: z.string() }))
        .query(async ({ input }) => {
            const service = new ComponentsPanelService();
            await service.initialize(input.projectId);
            return service.getCategories();
        }),

    searchComponents: protectedProcedure
        .input(z.object({
            projectId: z.string(),
            query: z.string()
        }))
        .query(async ({ input }) => {
            const service = new ComponentsPanelService();
            await service.initialize(input.projectId);
            return service.searchComponents(input.query);
        }),

    filterByCategory: protectedProcedure
        .input(z.object({
            projectId: z.string(),
            categoryId: z.string()
        }))
        .query(async ({ input }) => {
            const service = new ComponentsPanelService();
            await service.initialize(input.projectId);
            return service.filterByCategory(input.categoryId);
        }),

    insertComponent: protectedProcedure
        .input(z.object({
            componentId: z.string(),
            targetElementId: z.string(),
            position: z.enum(['before', 'after', 'inside', 'replace']),
        }))
        .mutation(async ({ input }) => {
            const service = new ComponentsPanelService();
            return service.insertComponent(
                input.componentId,
                input.targetElementId,
                input.position
            );
        }),

    addCustomComponent: protectedProcedure
        .input(z.object({
            projectId: z.string(),
            component: z.object({
                name: z.string(),
                category: z.string(),
                description: z.string().optional(),
                props: z.array(z.object({
                    name: z.string(),
                    type: z.string(),
                    required: z.boolean(),
                    defaultValue: z.any().optional(),
                    description: z.string().optional(),
                    options: z.array(z.string()).optional(),
                })),
                code: z.string(),
                framework: z.string(),
                tags: z.array(z.string()),
            }),
        }))
        .mutation(async ({ input }) => {
            const service = new ComponentsPanelService();
            return service.addCustomComponent(input.projectId, input.component);
        }),
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Inspect ComponentsPanelService to confirm whether initialize() is required and whether it maintains state.
fd -t f 'service.ts' packages/platform-extensions/src/components | head -20
rg -nP -C3 '\bclass\s+ComponentsPanelService\b'
rg -nP -C3 '\binitialize\s*\(' packages/platform-extensions/src/components 2>/dev/null

Repository: onlook-dev/onlook
Length of output: 1424
🏁 Script executed:
# Get more details about the initialize method and how state is used
wc -l packages/platform-extensions/src/components/service.ts

Repository: onlook-dev/onlook
Length of output: 119
🏁 Script executed:
# View the complete initialize method and related methods
sed -n '1,100p' packages/platform-extensions/src/components/service.ts

Repository: onlook-dev/onlook
Length of output: 3041
🏁 Script executed:
# Check how getCategories, searchComponents, and other methods use the initialized state
rg -n 'getCategories|searchComponents|filterByCategory' packages/platform-extensions/src/components/service.ts -A 5

Repository: onlook-dev/onlook
Length of output: 854
🏁 Script executed:
# Check the insertComponent method specifically to confirm if it initializes
rg -n 'insertComponent' packages/platform-extensions/src/components/service.ts -A 10

Repository: onlook-dev/onlook
Length of output: 527
insertComponent mutation will fail—service not initialized before searching components.
The ComponentsPanelService maintains state in private fields (components, categories) populated by initialize(). The insertComponent procedure (line 50) creates a new service instance but never calls initialize(), so service.insertComponent() searches an empty this.components array and always returns "Component not found".
Additionally, even query procedures that do call initialize() create a fresh service per request—the separate initialize mutation (line 6) initializes a discarded instance with no persistent effect.
Fix: Either call service.initialize(projectId) in the insertComponent procedure before calling service.insertComponent(), or redesign the service as stateless so state doesn't need to survive between tRPC calls.
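For reference, a minimal sketch of the first option; the added `projectId` input field is an assumption about how the schema would be extended, and the surrounding router boilerplate is omitted:

```ts
// Sketch only: `protectedProcedure`, `z`, and ComponentsPanelService come from the
// existing codebase; the new projectId field plus the initialize() call are the point.
insertComponent: protectedProcedure
    .input(z.object({
        projectId: z.string(),
        componentId: z.string(),
        targetElementId: z.string(),
        position: z.enum(['before', 'after', 'inside', 'replace']),
    }))
    .mutation(async ({ input }) => {
        const service = new ComponentsPanelService();
        // Populate the service's components/categories state before searching it.
        await service.initialize(input.projectId);
        return service.insertComponent(
            input.componentId,
            input.targetElementId,
            input.position,
        );
    }),
```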
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@apps/web/client/src/server/api/routers/components-panel.ts` around lines 5 -
82, The insertComponent mutation creates a ComponentsPanelService but never
calls its initialize(projectId), so insertComponent operates on empty state;
update the insertComponent protectedProcedure to accept projectId in its input
schema and call await service.initialize(input.projectId) before calling
service.insertComponent(input.componentId, input.targetElementId,
input.position); this ensures ComponentsPanelService (private fields like
components/categories populated by initialize) has the needed state when
insertComponent runs.
// Recursively remove child replies
const childComments = this.getChildComments(commentId);
for (const child of childComments) {
    await this.deleteComment(child.id);
}
Critical: this.getChildComments is not defined — deleteComment will throw at runtime.
getChildComments(commentId) is called here but no such method exists on CommentService, so any call to deleteComment will fail with TypeError: this.getChildComments is not a function. Either implement it (e.g., filter this.comments.values() by parentId === commentId) or inline the lookup.
Proposed fix
- // Recursively remove child replies
- const childComments = this.getChildComments(commentId);
- for (const child of childComments) {
- await this.deleteComment(child.id);
- }
+ // Recursively remove child replies
+ const childComments = Array.from(this.comments.values()).filter(
+ c => c.parentId === commentId,
+ );
+ for (const child of childComments) {
+ await this.deleteComment(child.id);
+ }📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
// Recursively remove child replies
const childComments = this.getChildComments(commentId);
for (const child of childComments) {
    await this.deleteComment(child.id);
}

// Recursively remove child replies
const childComments = Array.from(this.comments.values()).filter(
    c => c.parentId === commentId,
);
for (const child of childComments) {
    await this.deleteComment(child.id);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/platform-extensions/src/comments/service.ts` around lines 204 - 208,
deleteComment currently calls this.getChildComments(commentId) but no such
method exists, causing a runtime TypeError; fix by either adding a
getChildComments(commentId) method on CommentService that returns
[...this.comments.values()].filter(c => c.parentId === commentId) or replace the
call in deleteComment with an inline lookup that iterates over
this.comments.values() and selects children by parentId === commentId, then
proceed to await this.deleteComment(child.id) as before; ensure the new helper
or inline code uses the same CommentService storage (this.comments) and
preserves existing delete recursion behavior.
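A sketch of the helper-method variant of that fix, assuming CommentService keeps its comments in a `Map` keyed by id with a `parentId` field on each record:

```ts
// Sketch only: the Comment shape and Map storage mirror what the review describes,
// not a verified excerpt of CommentService.
interface Comment {
    id: string;
    parentId?: string;
}

class CommentService {
    private comments = new Map<string, Comment>();

    private getChildComments(commentId: string): Comment[] {
        return [...this.comments.values()].filter(c => c.parentId === commentId);
    }

    async deleteComment(commentId: string): Promise<void> {
        // Remove replies depth-first, then the comment itself.
        for (const child of this.getChildComments(commentId)) {
            await this.deleteComment(child.id);
        }
        this.comments.delete(commentId);
    }
}
```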
private async storeAsset(buffer: Buffer, path: string): Promise<void> {
    // Placeholder for storage implementation
    // In production, this would:
    // - Upload to cloud storage (S3, GCS, etc.)
    // - Store in local file system
    // - Update database records
    console.log(`Storing asset to ${path}, size: ${buffer.length} bytes`);
}

/**
 * Generate public URL for asset
 */
private generatePublicUrl(path: string): string {
    // In production, this would generate proper URLs based on storage system
    return `https://assets.onlook.com${path}`;
Fail closed until asset storage is implemented.
storeAsset only logs, but processAsset returns public URLs as if uploads succeeded. This will produce broken imported assets and persist misleading paths/URLs downstream.
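If wiring real storage is preferred over failing closed, a minimal local-filesystem variant could look like the sketch below; the base directory and public base URL are placeholder assumptions, not project configuration:

```ts
import { mkdir, writeFile } from 'node:fs/promises';
import path from 'node:path';

// Sketch only: ASSET_ROOT and PUBLIC_BASE_URL stand in for whatever storage backend
// and CDN the project actually configures.
const ASSET_ROOT = '/var/onlook/assets';
const PUBLIC_BASE_URL = 'https://assets.example.com';

async function storeAsset(buffer: Buffer, assetPath: string): Promise<void> {
    const target = path.join(ASSET_ROOT, assetPath);
    await mkdir(path.dirname(target), { recursive: true });
    await writeFile(target, buffer);
}

function generatePublicUrl(assetPath: string): string {
    // Only call this after storeAsset has resolved, so URLs are never handed out
    // for bytes that were never written.
    return `${PUBLIC_BASE_URL}${assetPath.startsWith('/') ? assetPath : `/${assetPath}`}`;
}
```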
🛑 Minimal fail-closed change
private async storeAsset(buffer: Buffer, path: string): Promise<void> {
- // Placeholder for storage implementation
- // In production, this would:
- // - Upload to cloud storage (S3, GCS, etc.)
- // - Store in local file system
- // - Update database records
- console.log(`Storing asset to ${path}, size: ${buffer.length} bytes`);
+ throw new Error(`Asset storage is not configured for ${path} (${buffer.length} bytes)`);
}
@@
private generatePublicUrl(path: string): string {
- // In production, this would generate proper URLs based on storage system
- return `https://assets.onlook.com${path}`;
+ throw new Error(`Asset public URL generation is not configured for ${path}`);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
private async storeAsset(buffer: Buffer, path: string): Promise<void> {
    // Placeholder for storage implementation
    // In production, this would:
    // - Upload to cloud storage (S3, GCS, etc.)
    // - Store in local file system
    // - Update database records
    console.log(`Storing asset to ${path}, size: ${buffer.length} bytes`);
}

/**
 * Generate public URL for asset
 */
private generatePublicUrl(path: string): string {
    // In production, this would generate proper URLs based on storage system
    return `https://assets.onlook.com${path}`;

private async storeAsset(buffer: Buffer, path: string): Promise<void> {
    throw new Error(`Asset storage is not configured for ${path} (${buffer.length} bytes)`);
}

/**
 * Generate public URL for asset
 */
private generatePublicUrl(path: string): string {
    throw new Error(`Asset public URL generation is not configured for ${path}`);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/platform-extensions/src/figma/asset-processor.ts` around lines 313 -
327, storeAsset currently only logs and generatePublicUrl always returns a URL
so processAsset will claim success for assets that were never stored; change
storeAsset to fail-closed (throw an Error or return a rejected Promise) when
actual storage is not implemented, and update processAsset to only call
generatePublicUrl and return a public URL after storeAsset resolves
successfully; reference the private methods storeAsset, processAsset, and
generatePublicUrl—if you prefer a softer approach, have storeAsset return a
boolean and have processAsset throw or propagate an error when storeAsset
indicates failure so imports cannot produce misleading URLs.
Note: The agent should treat your existing platform-extensions code as reference only and re-implement cleanly where needed. No stubs; UI wired; safe error handling; strict types. If you want, I can also generate per-PR commit message templates or a small shell helper to scaffold the branches and PR bodies.
No, like the current state of the repo, without this PR @coderabbitai
Copy-paste into your AI coding agent (Cursor, Windsurf, Copilot Workspace, Claude, etc.): If you want, I can also generate the router scaffolds and minimal page.tsx templates for each feature so your agent has concrete starting files.
What I verified from your script output
Copy-paste agent brief (main-only, no PR code assumed)

Goal
Guardrails
PR plan (in this order)

PR 1 — Foundation utilities (no feature logic)
PR 2 — MCP router + minimal UI
PR 3 — Assets router + UI
PR 4 — Comments router + UI
PR 5 — Components panel router + UI
PR 6 — Figma router + import modal
Cross-cutting in each PR
If you want scaffolds next
Tell me which feature to scaffold first (MCP, Assets, Comments, Components, or Figma), and I'll drop in the exact files.
Description
I added GitHub and Figma integration, among other features.
Related Issues
N/A
Type of Change
Testing
I tested the app and it looked good
Screenshots (if applicable)
Additional Notes
Summary by CodeRabbit
New Features
Chores
Important
This pull request adds Figma and GitHub integrations, MCP server simulation, and universal project services, enhancing platform capabilities with new classes and interfaces for comprehensive API interactions and project management.
- `FigmaApiClient`, `FigmaAuthService`, `FigmaFileParser`, `FigmaIntegrationService`, and `FigmaTokenStorage` for handling Figma API interactions, authentication, file parsing, and token storage.
- `GitHubIntegrationService` for managing GitHub authentication, repository access, branch creation, and pull request handling.
- `MCPIntegrationService` to simulate MCP server lifecycle, tool execution, and server restarts.
- `UniversalProjectService` and `UniversalStyleService` for framework detection, code generation, and style conversion (not yet implemented).
- `types`, `figma/types`, `github/types`, and `mcp/types` for structured data handling.

This description was created by
for 565d62d. You can customize this summary. It will automatically update as commits are pushed.