27 changes: 27 additions & 0 deletions .github/workflows/n8n-ci.yml
@@ -0,0 +1,27 @@
name: n8n Node CI

on:
pull_request:
push:
branches: [main]

jobs:
build:
runs-on: ubuntu-latest
defaults:
run:
working-directory: n8n
steps:
- uses: actions/checkout@v4

- uses: actions/setup-node@v4
with:
node-version: '22'
cache: 'npm'
cache-dependency-path: n8n/package-lock.json

- run: npm ci

- run: npm run lint

- run: npm run build
33 changes: 33 additions & 0 deletions .github/workflows/n8n-publish.yml
@@ -0,0 +1,33 @@
name: Publish n8n Node to npm

on:
release:
types: [published]
workflow_dispatch:

⚠️ Potential issue | 🟡 Minor

workflow_dispatch from a branch is silently skipped by the tag guard.

When manually dispatched from a branch (e.g., main), github.ref resolves to refs/heads/main, causing the startsWith(github.ref, 'refs/tags/n8n-') guard to evaluate to false. The job is silently skipped — no error, no publish. Manual dispatch only works when the workflow is triggered from the tag ref directly (via the GitHub UI "Run workflow" drop-down with the tag selected).

This is subtle enough to confuse operators expecting a manual publish to "just work" from the default branch. Consider either documenting this constraint or restructuring the guard:

🔧 Option: make workflow_dispatch always publish (bypass guard)
-    if: startsWith(github.ref, 'refs/tags/n8n-')
+    if: github.event_name == 'workflow_dispatch' || startsWith(github.ref, 'refs/tags/n8n-')

Also applies to: 11-11
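As a sketch of the restructured guard in context (assuming the `if:` condition sits at the job level of this workflow; the job name and `n8n-` tag prefix are taken from the finding, not verified against the file):

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    # Manual dispatches run from a branch ref (refs/heads/...), so let them
    # through explicitly; release-triggered runs still require an n8n-* tag.
    if: github.event_name == 'workflow_dispatch' || startsWith(github.ref, 'refs/tags/n8n-')
```

The trade-off is that any manual dispatch now publishes whatever ref it was run from, so branch protection on who can dispatch the workflow becomes the effective guard.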


jobs:
publish:
runs-on: ubuntu-latest
permissions:
contents: read
id-token: write
defaults:
run:
working-directory: n8n
steps:
- uses: actions/checkout@v4

- uses: actions/setup-node@v4
with:
node-version: '22'
registry-url: 'https://registry.npmjs.org'

- run: npm ci

- run: npm run lint

- run: npm run build

- run: npm publish --provenance --access public
env:
NODE_AUTH_TOKEN: ${{ secrets.NPM_TOKEN }}
12 changes: 12 additions & 0 deletions n8n/.prettierrc.js
@@ -0,0 +1,12 @@
module.exports = {
semi: true,
trailingComma: 'all',
bracketSpacing: true,
useTabs: true,
tabWidth: 2,
arrowParens: 'always',
singleQuote: true,
quoteProps: 'as-needed',
endOfLine: 'lf',
printWidth: 100,
};
3 changes: 3 additions & 0 deletions n8n/eslint.config.mjs
@@ -0,0 +1,3 @@
import { config } from '@n8n/node-cli/eslint';

export default config;
149 changes: 72 additions & 77 deletions n8n/nodes/Tinyfish/GenericFunctions.ts
@@ -52,7 +52,6 @@ function getActionableMessage(error: unknown): string | undefined {

 /**
  * Make an authenticated request to the TinyFish API.
- * Retries on 429/5xx with exponential backoff (max 3 retries).
  */
export async function tinyfishApiRequest(
this: IExecuteFunctions,
@@ -134,93 +133,89 @@ export async function consumeSseStream(

let lastProgress = '';

-	try {
-		const response = await fetch(`${API_BASE_URL}/v1/automation/run-sse`, {
-			method: 'POST',
-			headers: {
-				'X-API-Key': apiKey,
-				'Content-Type': 'application/json',
-			},
-			body: JSON.stringify(payload),
-		});
-
-		if (!response.ok) {
-			const errorText = await response.text();
-			throw new NodeOperationError(this.getNode(), `API request failed with status ${response.status}: ${errorText}`);
-		}
+	const response = await fetch(`${API_BASE_URL}/v1/automation/run-sse`, {
+		method: 'POST',
+		headers: {
+			'X-API-Key': apiKey,
+			'Content-Type': 'application/json',
+		},
+		body: JSON.stringify(payload),
+	});
Comment on lines +137 to +144
⚠️ Potential issue | 🔴 Critical



Add timeout protection to the SSE fetch call and streaming reader.

At Line 137, the fetch request lacks a timeout (signal), and the subsequent streaming reader loop starting at Line 162 has no protection against stalled streams. This can cause workflow execution to hang indefinitely if the SSE stream stalls or the server becomes unresponsive.

💡 Proposed fix
+	const controller = new AbortController();
+	const timeoutMs = 60_000;
+	const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
+
-	const response = await fetch(`${API_BASE_URL}/v1/automation/run-sse`, {
-		method: 'POST',
-		headers: {
-			'X-API-Key': apiKey,
-			'Content-Type': 'application/json',
-		},
-		body: JSON.stringify(payload),
-	});
+	let response: Response;
+	try {
+		response = await fetch(`${API_BASE_URL}/v1/automation/run-sse`, {
+			method: 'POST',
+			headers: {
+				'X-API-Key': apiKey,
+				'Content-Type': 'application/json',
+			},
+			body: JSON.stringify(payload),
+			signal: controller.signal,
+		});
+	} catch (error) {
+		if ((error as Error).name === 'AbortError') {
+			throw new NodeOperationError(this.getNode(), `SSE request timed out after ${timeoutMs}ms`);
+		}
+		throw error;
+	} finally {
+		clearTimeout(timeoutId);
+	}
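Beyond the initial request timeout shown in the fix, the per-read stall guard the finding asks for can be factored into a small helper. This is a sketch only: the helper name and the 60 s default are illustrative and not part of `GenericFunctions.ts`:

```typescript
/**
 * Race a single read against a stall timeout.
 * Wrapping each reader.read() call in this helper means the timer is
 * re-armed on every chunk, so only a stream that stops producing data
 * for the full window is aborted.
 * Sketch only — not part of the node's actual code.
 */
async function readWithStallTimeout<T>(
	read: () => Promise<T>,
	stallMs = 60_000,
): Promise<T> {
	let timer: ReturnType<typeof setTimeout> | undefined;
	// A promise that only ever rejects, after stallMs of inactivity.
	const stalled = new Promise<never>((_, reject) => {
		timer = setTimeout(
			() => reject(new Error(`SSE stream stalled: no data for ${stallMs}ms`)),
			stallMs,
		);
	});
	try {
		return await Promise.race([read(), stalled]);
	} finally {
		clearTimeout(timer);
	}
}
```

In the streaming loop of `consumeSseStream`, the bare `await reader.read()` would become `await readWithStallTimeout(() => reader.read())`, with the thrown error translated into a `NodeOperationError` as elsewhere in the file.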


+	if (!response.ok) {
+		const errorText = await response.text();
+		throw new NodeOperationError(this.getNode(), `API request failed with status ${response.status}: ${errorText}`);
+	}

-		if (!response.body) {
-			throw new NodeOperationError(this.getNode(), 'Response body is empty');
-		}
+	if (!response.body) {
+		throw new NodeOperationError(this.getNode(), 'Response body is empty');
+	}

-		const reader = response.body.getReader();
-		const decoder = new TextDecoder();
-		let buffer = '';
-		let finalResult: IDataObject | null = null;
-		let runId = '';
-		let streamingUrl = '';
+	const reader = response.body.getReader();
+	const decoder = new TextDecoder();
+	let buffer = '';
+	let finalResult: IDataObject | null = null;
+	let runId = '';
+	let streamingUrl = '';

-		while (true) {
-			const { done, value } = await reader.read();
+	while (true) {
+		const { done, value } = await reader.read();

-			buffer += decoder.decode(value, { stream: true });
-			if (done) {
-				buffer += decoder.decode();
-			}
-			const lines = buffer.split('\n');
-			buffer = lines.pop() ?? '';
+		buffer += decoder.decode(value, { stream: true });
+		if (done) {
+			buffer += decoder.decode();
+		}
+		const lines = buffer.split('\n');
+		buffer = lines.pop() ?? '';

-			for (const line of lines) {
-				if (!line.startsWith('data: ')) continue;
+		for (const line of lines) {
+			if (!line.startsWith('data: ')) continue;

-				let eventData: IDataObject;
-				try {
-					eventData = JSON.parse(line.slice(6)) as IDataObject;
-				} catch {
-					continue;
-				}
+			let eventData: IDataObject;
+			try {
+				eventData = JSON.parse(line.slice(6)) as IDataObject;
+			} catch {
+				continue;
+			}

-				const eventType = eventData.type as string;
-
-				if (eventType === 'STARTED') {
-					runId = (eventData.runId as string) || '';
-				} else if (eventType === 'STREAMING_URL') {
-					streamingUrl = (eventData.streamingUrl as string) || '';
-				} else if (eventType === 'PROGRESS') {
-					lastProgress = (eventData.purpose as string) || '';
-				} else if (eventType === 'COMPLETE') {
-					const status = eventData.status as string;
-					if (status === 'COMPLETED') {
-						finalResult = {
-							status: 'COMPLETED',
-							runId,
-							streamingUrl,
-							lastProgress,
-							resultJson: eventData.resultJson || {},
-						};
-					} else {
-						finalResult = {
-							status: status || 'FAILED',
-							runId,
-							lastProgress,
-							error: eventData.error || 'Unknown error',
-						};
-					}
+			const eventType = eventData.type as string;
+
+			if (eventType === 'STARTED') {
+				runId = (eventData.runId as string) || '';
+			} else if (eventType === 'STREAMING_URL') {
+				streamingUrl = (eventData.streamingUrl as string) || '';
+			} else if (eventType === 'PROGRESS') {
+				lastProgress = (eventData.purpose as string) || '';
+			} else if (eventType === 'COMPLETE') {
+				const status = eventData.status as string;
+				if (status === 'COMPLETED') {
+					finalResult = {
+						status: 'COMPLETED',
+						runId,
+						streamingUrl,
+						lastProgress,
+						resultJson: eventData.resultJson || {},
+					};
+				} else {
+					finalResult = {
+						status: status || 'FAILED',
+						runId,
+						lastProgress,
+						error: eventData.error || 'Unknown error',
+					};
+				}
-				}
-			}
-
-			if (done) break;
-		}
-
-		if (!finalResult) {
-			throw new NodeOperationError(
-				this.getNode(),
-				'SSE stream ended without a COMPLETE event',
-			);
-		}
-
-		return finalResult;
-	} catch (error) {
-		throw error;
-	}
+			}
+		}
+
+		if (done) break;
+	}
+
+	if (!finalResult) {
+		throw new NodeOperationError(
+			this.getNode(),
+			'SSE stream ended without a COMPLETE event',
+		);
+	}
+
+	return finalResult;
 }
6 changes: 3 additions & 3 deletions n8n/nodes/Tinyfish/Tinyfish.node.ts
@@ -5,7 +5,7 @@ import type {
INodeType,
INodeTypeDescription,
} from 'n8n-workflow';
-import { NodeOperationError } from 'n8n-workflow';
+import { NodeConnectionTypes, NodeOperationError } from 'n8n-workflow';

import {
operationField,
@@ -34,8 +34,8 @@ export class Tinyfish implements INodeType {
defaults: {
name: 'TinyFish Web Agent',
},
-		inputs: ['main'],
-		outputs: ['main'],
+		inputs: [NodeConnectionTypes.Main],
+		outputs: [NodeConnectionTypes.Main],
usableAsTool: true,
credentials: [
{
37 changes: 19 additions & 18 deletions n8n/nodes/Tinyfish/TinyfishDescription.ts
@@ -4,20 +4,20 @@ export const operationField: INodeProperties = {
displayName: 'Operation',
name: 'operation',
type: 'options',
noDataExpression: true,
default: 'runSse',
options: [
{
-			name: 'Run (SSE Streaming)',
-			value: 'runSse',
-			action: 'Run automation with SSE streaming',
-			description:
-				'Recommended for most tasks. Streams real-time progress events and returns the final result. Best for tasks that may take 30+ seconds.',
+			name: 'Get Run',
+			value: 'getRun',
+			action: 'Get run details',
+			description: 'Retrieve the status and result of a previously started async run by its ID',
},
{
-			name: 'Run (Sync)',
-			value: 'runSync',
-			action: 'Run automation synchronously',
-			description: 'Execute and wait for the complete result in a single response. Use for quick extractions under 60 seconds.',
+			name: 'List Runs',
+			value: 'listRuns',
+			action: 'List automation runs',
+			description: 'List past automation runs with optional status filter. Useful for monitoring or retrieving results.',
},
{
name: 'Run (Async)',
@@ -26,16 +26,17 @@
description: 'Returns a run ID immediately without waiting. Use with Get Run to poll for results. Best for batch processing multiple URLs in parallel.',
},
{
-			name: 'Get Run',
-			value: 'getRun',
-			action: 'Get run details',
-			description: 'Retrieve the status and result of a previously started async run by its ID',
+			name: 'Run (SSE Streaming)',
+			value: 'runSse',
+			action: 'Run automation with SSE streaming',
+			description:
+				'Recommended for most tasks. Streams real-time progress events and returns the final result. Best for tasks that may take 30+ seconds.',
},
{
-			name: 'List Runs',
-			value: 'listRuns',
-			action: 'List automation runs',
-			description: 'List past automation runs with optional status filter. Useful for monitoring or retrieving results.',
+			name: 'Run (Sync)',
+			value: 'runSync',
+			action: 'Run automation synchronously',
+			description: 'Execute and wait for the complete result in a single response. Use for quick extractions under 60 seconds.',
},
],
};
@@ -171,7 +172,7 @@ export const listRunsFields: INodeProperties[] = [
displayName: 'Limit',
name: 'limit',
type: 'number',
-		default: 20,
+		default: 50,
description: 'Max number of results to return',
typeOptions: {
minValue: 1,