[{"content":"The Problem If you use multiple GitHub accounts—for example, a work account and a personal account—you may encounter a frustrating issue when working from the command line:\nYou try to push code to your personal repository, but GitHub rejects it with a permission denied error because it is authenticating as your work account.\nThis happens even though you are the owner of the personal repository.\nWhy This Happens GitHub allows multiple accounts, but Git does not automatically switch identities per repository The CLI (SSH or HTTPS) reuses a cached or default identity As a result, all pushes to github.com may be attempted using the same account The Solution: Separate SSH Keys per GitHub Account The most reliable solution is to use separate SSH keys for each GitHub account and configure Git to select the correct one per repository.\nThis approach is:\nSecure One-time setup Scales across many repositories Widely used in professional environments Step-by-Step Setup 1. Create SSH Keys ssh-keygen -t ed25519 -C \u0026#34;work-email@company.com\u0026#34; -f ~/.ssh/id_ed25519_work ssh-keygen -t ed25519 -C \u0026#34;personal-email@gmail.com\u0026#34; -f ~/.ssh/id_ed25519_personal 2. Add Keys to the SSH Agent eval \u0026#34;$(ssh-agent -s)\u0026#34; ssh-add ~/.ssh/id_ed25519_work ssh-add ~/.ssh/id_ed25519_personal 3. Configure SSH Host Aliases Edit ~/.ssh/config:\n# Work GitHub Host github-work HostName github.com User git IdentityFile ~/.ssh/id_ed25519_work IdentitiesOnly yes # Personal GitHub Host github-personal HostName github.com User git IdentityFile ~/.ssh/id_ed25519_personal IdentitiesOnly yes 4. Add Public Keys to GitHub Add each public key (*.pub) to the corresponding GitHub account:\nGitHub → Settings → SSH and GPG Keys 5. 
Update Repository Remotes This step ensures Git uses the correct account.\n# Personal repository git remote set-url origin git@github-personal:your-username/repo-name.git # Work repository git remote set-url origin git@github-work:org-name/repo-name.git 6. Verify Authentication ssh -T git@github-personal ssh -T git@github-work Each command should authenticate as the correct GitHub user.\nOptional: Set Commit Author Per Repository git config user.name \u0026#34;Your Name\u0026#34; git config user.email \u0026#34;personal-email@gmail.com\u0026#34; Repeat with work credentials in work repositories.\nConclusion If you regularly work with multiple GitHub accounts, separating SSH keys per account is the cleanest and safest approach. Once configured, Git automatically selects the correct identity, eliminating permission errors and manual switching.\nOne setup. Zero friction.\n","permalink":"https://vishnuprasad.blog/posts/github-multiple-account-management-cli/","summary":"The Problem If you use multiple GitHub accounts—for example, a work account and a personal account—you may encounter a frustrating issue when working from the command line:\nYou try to push code to your personal repository, but GitHub rejects it with a permission denied error because it is authenticating as your work account.\nThis happens even though you are the owner of the personal repository.\nWhy This Happens GitHub allows multiple accounts, but Git does not automatically switch identities per repository The CLI (SSH or HTTPS) reuses a cached or default identity As a result, all pushes to github.","title":"Managing Multiple GitHub Accounts from the CLI"},{"content":"Introduction If you\u0026rsquo;ve built REST APIs with AWS API Gateway, you know how nice it is to return structured error responses with proper HTTP status codes, error types, and detailed context. 
Then you switch to GraphQL with AppSync, and suddenly your beautiful error handling becomes\u0026hellip; generic.\n{ \u0026#34;errors\u0026#34;: [{ \u0026#34;message\u0026#34;: \u0026#34;Error\u0026#34; }] } That\u0026rsquo;s it. No error types. No structured context. Just a string.\nThis doesn\u0026rsquo;t have to be the case.\nIn this post, I\u0026rsquo;ll show you how to bring API Gateway-style error handling to your GraphQL APIs using AppSync JavaScript resolvers and custom exception classes. We\u0026rsquo;ll transform generic GraphQL errors into structured, type-safe responses that your clients can actually use.\nBy the end, you\u0026rsquo;ll have:\nCustom exception classes with type-safe context Middleware that automatically transforms errors AppSync JS resolvers that propagate structured errors Client-friendly error responses that rival REST APIs All with real, production-ready code you can implement today.\nThe Problem: GraphQL\u0026rsquo;s Generic Error Responses If you\u0026rsquo;ve worked with REST APIs and AWS API Gateway, you\u0026rsquo;re probably familiar with structured error responses:\n// REST API error response (what we want) { \u0026#34;statusCode\u0026#34;: 400, \u0026#34;errorType\u0026#34;: \u0026#34;ValidationException\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Invalid email format\u0026#34;, \u0026#34;errorInfo\u0026#34;: { \u0026#34;field\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;constraint\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;receivedValue\u0026#34;: \u0026#34;not-an-email\u0026#34; } } This is great! 
The client knows exactly what went wrong, which field failed validation, and why.\nNow let\u0026rsquo;s look at what you typically get with GraphQL/AppSync when using generic error handling:\n// Generic GraphQL error (what we\u0026#39;re replacing) { \u0026#34;data\u0026#34;: null, \u0026#34;errors\u0026#34;: [ { \u0026#34;message\u0026#34;: \u0026#34;Error\u0026#34;, \u0026#34;path\u0026#34;: [\u0026#34;getUser\u0026#34;], \u0026#34;locations\u0026#34;: [{ \u0026#34;line\u0026#34;: 2, \u0026#34;column\u0026#34;: 3 }] } ] } See the problem? You lose all that rich context. No error types, no additional information, just a vague message.\nWhy This Happens When you throw errors in Lambda and don\u0026rsquo;t handle them properly:\n// The problematic approach export const handler = async (event) =\u0026gt; { try { const user = await database.getUser(id); } catch (error) { // This becomes a generic GraphQL error throw new Error(\u0026#39;Something went wrong\u0026#39;); } }; AppSync receives the raw error and converts it to a generic GraphQL error. You lose:\nError types: Can\u0026rsquo;t differentiate between validation, auth, or server errors Structured context: No field names, constraint information, or additional data Client actionability: Frontend can\u0026rsquo;t handle different error scenarios appropriately Type safety: No TypeScript interfaces, just string messages Debugging context: Production errors are nearly impossible to diagnose What We\u0026rsquo;re Building Our goal is to bring API Gateway-style error handling to GraphQL using:\nCustom exception classes (replacing generic Error objects) Middleware-based transformation (converting exceptions to structured responses) AppSync JS resolvers (propagating structured errors through GraphQL) Type-safe error schemas (making errors as reliable as successful responses) The result? 
GraphQL errors that look like this:\n{ \u0026#34;data\u0026#34;: null, \u0026#34;errors\u0026#34;: [ { \u0026#34;message\u0026#34;: \u0026#34;Invalid email format\u0026#34;, \u0026#34;errorType\u0026#34;: \u0026#34;VALIDATION_EXCEPTION\u0026#34;, \u0026#34;errorInfo\u0026#34;: { \u0026#34;field\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;constraint\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;receivedValue\u0026#34;: \u0026#34;not-an-email\u0026#34; }, \u0026#34;path\u0026#34;: [\u0026#34;getUser\u0026#34;], \u0026#34;locations\u0026#34;: [{ \u0026#34;line\u0026#34;: 2, \u0026#34;column\u0026#34;: 3 }] } ] } Now we have the best of both worlds: GraphQL\u0026rsquo;s query flexibility with REST API\u0026rsquo;s structured error responses.\n📌 TL;DR: We\u0026rsquo;re replacing generic GraphQL errors with API Gateway-style structured responses by:\nUsing custom exception classes instead of generic Error objects Transforming exceptions to response objects via middleware Using AppSync JS resolvers to propagate the structure through GraphQL\u0026rsquo;s util.error() Result: Clients get the same rich error context they\u0026rsquo;d expect from a REST API.\nOur Solution: A Three-Layer Approach Our error handling system operates across three distinct layers:\nLayer 1: Lambda (Where Errors Originate) This is where your business logic lives and where most errors occur. 
We use:\nCustom exception classes for type-safe error creation Middy middleware to intercept and transform errors Structured error responses that flow to AppSync Layer 2: AppSync (Error Transformation) AppSync sits between your Lambda functions and clients, transforming errors into GraphQL-compliant responses:\nJavaScript resolvers detect error responses Built-in utilities create proper GraphQL errors Automatic enrichment adds path and location information Layer 3: Client (Error Consumption) Your frontend receives structured errors that are easy to handle:\nType-safe error interfaces in TypeScript Centralized error handling logic User-friendly error messages The Complete Flow Here\u0026rsquo;s how an error travels through the system:\n1️⃣ Lambda Handler throw new ValidationException(\u0026#39;Invalid email\u0026#39;, { field: \u0026#39;email\u0026#39; }) ↓ 2️⃣ Middy Middleware Catches exception → Returns structured response { errorMessage: \u0026#39;...\u0026#39;, errorType: \u0026#39;...\u0026#39;, errorInfo: {...} } ↓ 3️⃣ AppSync JS Resolver Detects error response → Calls util.error() Transforms to GraphQL error format ↓ 4️⃣ GraphQL Response { \u0026#34;errors\u0026#34;: [{ \u0026#34;message\u0026#34;: \u0026#34;Invalid email\u0026#34;, \u0026#34;errorType\u0026#34;: \u0026#34;VALIDATION_EXCEPTION\u0026#34;, \u0026#34;errorInfo\u0026#34;: { \u0026#34;field\u0026#34;: \u0026#34;email\u0026#34; } }] } ↓ 5️⃣ Client Receives structured error → Shows appropriate UI Key insight: We never let AppSync see a thrown error. 
We always return structured response objects that the JS resolver transforms into GraphQL errors with full context.\nLet\u0026rsquo;s build each layer.\nBuilding Custom Exception Classes First, we create a base exception class that all our custom errors will extend:\nexport class CustomException\u0026lt;TInfo extends Record\u0026lt;string, unknown\u0026gt; | null = null\u0026gt; extends Error { readonly info: TInfo | null; constructor(message: string, info: TInfo | null = null) { super(message); this.info = info; Error.captureStackTrace(this, this.constructor); } } The magic here is the generic TInfo parameter. This lets us attach type-safe context to any error:\n// Define what information this error carries interface UserValidationError { field: string; constraint: string; receivedValue: unknown; allowedValues?: string[]; } // Throw with type-safe context throw new ValidationException\u0026lt;UserValidationError\u0026gt;(\u0026#39;Invalid user role\u0026#39;, { field: \u0026#39;role\u0026#39;, constraint: \u0026#39;enum\u0026#39;, receivedValue: \u0026#39;super_admin\u0026#39;, allowedValues: [\u0026#39;user\u0026#39;, \u0026#39;admin\u0026#39;, \u0026#39;moderator\u0026#39;], }); TypeScript now knows exactly what\u0026rsquo;s in error.info—no guessing, no casting.\nThe Four Exception Types We define four core exception types that cover most scenarios:\n// 1. ValidationException - Bad input from client export class ValidationException\u0026lt;TInfo extends Record\u0026lt;string, unknown\u0026gt; | null = null\u0026gt; extends CustomException\u0026lt;TInfo\u0026gt; {} // 2. UnauthorizedException - Missing or invalid auth export class UnauthorizedException\u0026lt;TInfo extends Record\u0026lt;string, unknown\u0026gt; | null = null\u0026gt; extends CustomException\u0026lt;TInfo\u0026gt; {} // 3. 
ForbiddenException - Valid auth but insufficient permissions export class ForbiddenException\u0026lt;TInfo extends Record\u0026lt;string, unknown\u0026gt; | null = null\u0026gt; extends CustomException\u0026lt;TInfo\u0026gt; {} // 4. InternalServiceError - Server-side failures export class InternalServiceError\u0026lt;TInfo extends Record\u0026lt;string, unknown\u0026gt; | null = null\u0026gt; extends CustomException\u0026lt;TInfo\u0026gt; {} These map nicely to HTTP status codes (400, 401, 403, 500) while remaining GraphQL-friendly.\nMiddleware: The Error Transformer Now comes the key piece: middleware that intercepts exceptions and transforms them into structured responses.\nWe use Middy, a popular middleware framework for Lambda:\nimport { MiddlewareObj } from \u0026#39;@middy/core\u0026#39;; import { ValidationException, UnauthorizedException, ForbiddenException } from \u0026#39;../exceptions/main\u0026#39;; export const exceptionHandlerMiddleware = (): MiddlewareObj =\u0026gt; { return { onError: (request) =\u0026gt; { const { error } = request; // Handle ValidationException if (error instanceof ValidationException) { return (request.response = { errorMessage: error.message, errorType: \u0026#39;VALIDATION_EXCEPTION\u0026#39;, errorInfo: error.info || null, }); } // Handle UnauthorizedException if (error instanceof UnauthorizedException) { return (request.response = { errorMessage: error.message, errorType: \u0026#39;UNAUTHORIZED_EXCEPTION\u0026#39;, errorInfo: error.info || null, }); } // Handle ForbiddenException if (error instanceof ForbiddenException) { return (request.response = { errorMessage: error.message, errorType: \u0026#39;FORBIDDEN_EXCEPTION\u0026#39;, errorInfo: error.info || null, }); } // Handle unknown errors return (request.response = { errorMessage: error?.message || \u0026#39;INTERNAL_SERVER_ERROR\u0026#39;, }); }, }; }; The beauty of this approach: your business logic just throws exceptions naturally, and the middleware handles the transformation. 
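Stripped of the Middy plumbing, the transformation step reduces to a pure mapping from exception to response object, which makes it easy to unit-test. A minimal sketch in plain TypeScript, where the simplified class shapes are stand-ins for the article's exception classes rather than the real implementation:

```typescript
// Simplified stand-ins for the exception classes described above (illustrative shapes).
class CustomException<TInfo extends Record<string, unknown> | null = null> extends Error {
  constructor(message: string, readonly info: TInfo | null = null) {
    super(message);
  }
}
class ValidationException<
  TInfo extends Record<string, unknown> | null = null
> extends CustomException<TInfo> {}
class UnauthorizedException<
  TInfo extends Record<string, unknown> | null = null
> extends CustomException<TInfo> {}

interface ErrorResponse {
  errorMessage: string;
  errorType?: string;
  errorInfo?: Record<string, unknown> | null;
}

// Mirrors the middleware's onError branches: map each exception type
// to the structured response shape the resolver expects.
function toErrorResponse(error: unknown): ErrorResponse {
  if (error instanceof ValidationException) {
    return {
      errorMessage: error.message,
      errorType: 'VALIDATION_EXCEPTION',
      errorInfo: error.info ?? null,
    };
  }
  if (error instanceof UnauthorizedException) {
    return {
      errorMessage: error.message,
      errorType: 'UNAUTHORIZED_EXCEPTION',
      errorInfo: error.info ?? null,
    };
  }
  // Unknown errors fall back to a generic message with no errorType.
  return {
    errorMessage: error instanceof Error ? error.message : 'INTERNAL_SERVER_ERROR',
  };
}

const resp = toErrorResponse(
  new ValidationException('Invalid email format', { field: 'email', constraint: 'email' })
);
console.log(resp.errorType); // VALIDATION_EXCEPTION
```

Because the function is pure, each branch can be covered by a plain unit test without spinning up Middy or Lambda.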
No try/catch blocks everywhere, no manual response formatting.\nPutting It Together: Lambda Handler Here\u0026rsquo;s what a complete Lambda handler looks like:\nimport middy from \u0026#39;@middy/core\u0026#39;; import { AppSyncResolverEvent } from \u0026#39;aws-lambda\u0026#39;; import { exceptionHandlerMiddleware } from \u0026#39;../utils/middlewares/exceptions\u0026#39;; import { ValidationException, UnauthorizedException } from \u0026#39;../utils/exceptions/main\u0026#39;; export const base = async (event: AppSyncResolverEvent\u0026lt;{ id: string }\u0026gt;) =\u0026gt; { // Simple validation - just throw! if (!event.arguments?.id) { throw new ValidationException(\u0026#39;ID is required\u0026#39;, { field: \u0026#39;id\u0026#39;, constraint: \u0026#39;required\u0026#39;, }); } // Auth check if (!event.identity?.sub) { throw new UnauthorizedException(\u0026#39;Authentication required\u0026#39;); } // Business logic const item = await fetchItemFromDatabase(event.arguments.id); if (!item) { throw new ValidationException(\u0026#39;Item not found\u0026#39;, { field: \u0026#39;id\u0026#39;, value: event.arguments.id, }); } return item; }; // Wrap with middleware export const handler = middy(base).use(exceptionHandlerMiddleware()); Notice how clean the business logic is? No error handling boilerplate—just throw and let the middleware handle it.\nAppSync JS Resolvers: The Critical Bridge Here\u0026rsquo;s where the magic happens. 
This is what transforms our structured Lambda responses into proper GraphQL errors—without this piece, we\u0026rsquo;d still have generic errors.\nThe key insight: Instead of letting AppSync convert Lambda errors into generic GraphQL errors, we:\nReturn structured error responses from Lambda (not thrown errors) Use AppSync JS resolvers to detect these error responses Transform them into GraphQL errors with util.error(), preserving all our structure This is how we replicate API Gateway\u0026rsquo;s error handling in GraphQL:\nimport { util } from \u0026#39;@aws-appsync/utils\u0026#39;; export function request(ctx) { return { operation: \u0026#39;Invoke\u0026#39;, invocationType: \u0026#39;RequestResponse\u0026#39;, payload: { arguments: ctx.arguments, identity: ctx.identity, info: ctx.info, source: ctx.source, stash: ctx.stash, request: ctx.request, }, }; } export function response(ctx) { const { result } = ctx; // Handle Lambda execution errors (timeout, crash, etc.) if (ctx.error) { util.error(\u0026#39;Internal Error\u0026#39;, \u0026#39;InternalError\u0026#39;); } // HERE\u0026#39;S THE KEY: Detect our structured error responses // Lambda returned: { errorMessage, errorType, errorInfo } // We transform it to a GraphQL error while preserving structure if (result \u0026amp;\u0026amp; result.errorMessage) { util.error( result.errorMessage, // Human-readable message result.errorType, // Error type (VALIDATION_EXCEPTION, etc.) result.data || null, // Partial response can be provided in case of errors result.errorInfo // Our custom context object! 
); } // Success case - return the result return result; } Why This Works Without JS Resolvers (Generic GraphQL):\nLambda throws Error → AppSync catches → Generic GraphQL error Result: { \u0026#34;errors\u0026#34;: [{ \u0026#34;message\u0026#34;: \u0026#34;Error\u0026#34; }] } With JS Resolvers (Structured Errors):\nLambda returns { errorMessage, errorType, errorInfo } → JS Resolver detects error response → util.error() creates GraphQL error with full context Result: { \u0026#34;errors\u0026#34;: [{ \u0026#34;message\u0026#34;: \u0026#34;...\u0026#34;, \u0026#34;errorType\u0026#34;: \u0026#34;...\u0026#34;, \u0026#34;errorInfo\u0026#34;: {...} }] } The util.error() Function This is AppSync\u0026rsquo;s built-in function for creating GraphQL errors. It accepts four parameters:\nutil.error( message, // string: Human-readable error message errorType, // string: Error type identifier (like HTTP status) data, // any: Optional data to return (rarely used) errorInfo // object: Custom structured context (our secret sauce!) ); When called, it:\nImmediately stops resolver execution Sets data: null in the GraphQL response Creates an error object in the errors array Preserves our errorType and errorInfo fields This is how we get API Gateway-style structured errors in GraphQL.\nThe Final Result: API Gateway-Style GraphQL Errors After all this processing, here\u0026rsquo;s what your client receives. 
Notice how it combines GraphQL\u0026rsquo;s structure with REST API\u0026rsquo;s rich error context:\n{ \u0026#34;data\u0026#34;: null, \u0026#34;errors\u0026#34;: [ { \u0026#34;message\u0026#34;: \u0026#34;Invalid email format\u0026#34;, \u0026#34;errorType\u0026#34;: \u0026#34;VALIDATION_EXCEPTION\u0026#34;, \u0026#34;errorInfo\u0026#34;: { \u0026#34;field\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;constraint\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;receivedValue\u0026#34;: \u0026#34;not-an-email\u0026#34; }, \u0026#34;path\u0026#34;: [\u0026#34;getUser\u0026#34;], \u0026#34;locations\u0026#34;: [{ \u0026#34;line\u0026#34;: 2, \u0026#34;column\u0026#34;: 3 }] } ] } Compare this to a REST API Gateway response:\n{ \u0026#34;statusCode\u0026#34;: 400, \u0026#34;errorType\u0026#34;: \u0026#34;ValidationException\u0026#34;, \u0026#34;message\u0026#34;: \u0026#34;Invalid email format\u0026#34;, \u0026#34;errorInfo\u0026#34;: { \u0026#34;field\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;constraint\u0026#34;: \u0026#34;email\u0026#34;, \u0026#34;receivedValue\u0026#34;: \u0026#34;not-an-email\u0026#34; } } We\u0026rsquo;ve replicated the structure! The client gets:\n✅ Error type (errorType) - Just like REST status codes, identify error categories ✅ Human-readable message (message) - Clear explanation of what went wrong ✅ Structured context (errorInfo) - Field-level details, constraints, and debugging data ✅ GraphQL metadata (path, locations) - Bonus: know exactly where in the query it failed This is no longer a generic { \u0026quot;message\u0026quot;: \u0026quot;Error\u0026quot; }. 
This is a production-grade error response.\nClient-Side Error Handling With structured errors, client-side handling becomes straightforward:\n// Define error types type ErrorType = \u0026#39;VALIDATION_EXCEPTION\u0026#39; | \u0026#39;UNAUTHORIZED_EXCEPTION\u0026#39; | \u0026#39;FORBIDDEN_EXCEPTION\u0026#39;; interface GraphQLError { message: string; errorType?: ErrorType; errorInfo?: Record\u0026lt;string, unknown\u0026gt;; path: string[]; locations: Array\u0026lt;{ line: number; column: number }\u0026gt;; } // Centralized error handler function handleGraphQLError(error: GraphQLError) { const { errorType, message, errorInfo } = error; switch (errorType) { case \u0026#39;VALIDATION_EXCEPTION\u0026#39;: // Show specific field errors showToast({ type: \u0026#39;error\u0026#39;, title: \u0026#39;Validation Error\u0026#39;, message: message, details: errorInfo, }); break; case \u0026#39;UNAUTHORIZED_EXCEPTION\u0026#39;: // Redirect to login redirectToLogin(); break; case \u0026#39;FORBIDDEN_EXCEPTION\u0026#39;: // Show access denied message showToast({ type: \u0026#39;error\u0026#39;, title: \u0026#39;Access Denied\u0026#39;, message: message, }); break; default: // Generic error handling showToast({ type: \u0026#39;error\u0026#39;, title: \u0026#39;Error\u0026#39;, message: message || \u0026#39;An unexpected error occurred\u0026#39;, }); } } Production Considerations Security: Don\u0026rsquo;t Leak Sensitive Information Never expose sensitive data in error messages:\n// BAD - Exposes internal details throw new ValidationException( `Database query failed: SELECT * FROM users WHERE api_key=\u0026#39;sk_live_abc123\u0026#39;` ); // GOOD - Generic message with safe context throw new InternalServiceError(\u0026#39;Database operation failed\u0026#39;, { operation: \u0026#39;query\u0026#39;, table: \u0026#39;users\u0026#39; }); Consider sanitizing error info in production:\nexport const exceptionHandlerMiddleware = (): MiddlewareObj =\u0026gt; { return { onError: (request) 
=\u0026gt; { const { error } = request; const isProd = process.env.NODE_ENV === \u0026#39;production\u0026#39;; if (error instanceof ValidationException) { return (request.response = { errorMessage: error.message, errorType: \u0026#39;VALIDATION_EXCEPTION\u0026#39;, errorInfo: isProd ? sanitizeErrorInfo(error.info) : error.info, }); } // ... other handlers }, }; }; function sanitizeErrorInfo(info: any) { const sanitized = { ...info }; delete sanitized.internalId; delete sanitized.stackTrace; delete sanitized.apiKeys; return sanitized; } Benefits of This Approach After implementing this system, you\u0026rsquo;ll notice several improvements:\nConsistency: Every error follows the same structure Type Safety: TypeScript catches error handling mistakes at compile time Debuggability: Rich context makes production debugging easier Developer Experience: Clear error messages and types make the API a joy to use Separation of Concerns: Business logic stays clean, error handling is centralized Client-Friendly: Structured errors are easy to handle in frontend code Common Patterns and Examples Validation Errors // Required field if (!input.email) { throw new ValidationException(\u0026#39;Email is required\u0026#39;, { field: \u0026#39;email\u0026#39;, constraint: \u0026#39;required\u0026#39;, }); } // Format validation if (!isValidEmail(input.email)) { throw new ValidationException(\u0026#39;Invalid email format\u0026#39;, { field: \u0026#39;email\u0026#39;, constraint: \u0026#39;email\u0026#39;, receivedValue: input.email, }); } // Range validation if (input.age \u0026lt; 18 || input.age \u0026gt; 120) { throw new ValidationException(\u0026#39;Age must be between 18 and 120\u0026#39;, { field: \u0026#39;age\u0026#39;, constraint: \u0026#39;range\u0026#39;, receivedValue: input.age, min: 18, max: 120, }); } Authorization Errors // Missing authentication if (!ctx.identity?.sub) { throw new UnauthorizedException(\u0026#39;Authentication required\u0026#39;, { reason: 
\u0026#39;missing_token\u0026#39;, }); } // Expired token if (isTokenExpired(token)) { throw new UnauthorizedException(\u0026#39;Token has expired\u0026#39;, { reason: \u0026#39;token_expired\u0026#39;, expiredAt: token.expiresAt, }); } // Insufficient permissions const hasPermission = await checkPermission(userId, \u0026#39;delete:items\u0026#39;); if (!hasPermission) { throw new ForbiddenException(\u0026#39;Insufficient permissions\u0026#39;, { requiredPermission: \u0026#39;delete:items\u0026#39;, userRole: userRole, }); } Service Errors // Database errors try { await database.query(sql); } catch (error) { throw new InternalServiceError(\u0026#39;Database operation failed\u0026#39;, { operation: \u0026#39;query\u0026#39;, table: \u0026#39;users\u0026#39;, retryable: true, }); } // External API failures try { const response = await externalAPI.fetch(url); } catch (error) { throw new InternalServiceError(\u0026#39;External service unavailable\u0026#39;, { service: \u0026#39;PaymentProvider\u0026#39;, statusCode: error.statusCode, retryable: error.statusCode \u0026gt;= 500, }); } Why This Matters Building a robust error handling system takes effort upfront, but pays dividends as your application grows. You\u0026rsquo;ll spend less time debugging vague errors and more time building features. More importantly, your clients get the same quality of error handling they\u0026rsquo;d expect from a REST API, but with GraphQL\u0026rsquo;s query flexibility.\nFurther Reading AWS AppSync Documentation Middy Middleware Framework GraphQL Error Handling Best Practices AWS Lambda Error Handling Have questions or suggestions? Found this helpful? Let me know in the comments below!\n","permalink":"https://vishnuprasad.blog/posts/appsync-error-handling/","summary":"Introduction If you\u0026rsquo;ve built REST APIs with AWS API Gateway, you know how nice it is to return structured error responses with proper HTTP status codes, error types, and detailed context. 
Then you switch to GraphQL with AppSync, and suddenly your beautiful error handling becomes\u0026hellip; generic.\n{ \u0026#34;errors\u0026#34;: [{ \u0026#34;message\u0026#34;: \u0026#34;Error\u0026#34; }] } That\u0026rsquo;s it. No error types. No structured context. Just a string.\nThis doesn\u0026rsquo;t have to be the case.","title":"Building a Robust Error Handling System for AWS AppSync APIs with AppSync JS Resolvers"},{"content":"Introduction to Amazon CloudWatch Logs Insights Amazon CloudWatch Logs Insights is a powerful tool designed to help developers, DevOps engineers, and cloud administrators extract actionable intelligence from their log data. Whether you’re troubleshooting application errors, monitoring system health, or auditing security events, CloudWatch Logs Insights enables you to query logs in real time using a purpose-built query language.\nQuery Methods: CloudWatch Logs Insights uses a SQL-like syntax with support for commands like:\nfields to select specific log fields. filter to apply conditions (e.g., filter @message like \u0026#34;ERROR\u0026#34;). stats to aggregate data (e.g., stats count(*) by @logStream). sort and limit to refine results. Key Use Cases:\nTroubleshooting: Quickly identify errors, exceptions, or latency spikes. Operational Monitoring: Track metrics like request rates, response times, or resource utilization. Security Analysis: Detect suspicious patterns, unauthorized access, or compliance violations. Cost Optimization: Pinpoint underutilized resources or inefficient processes. Field-Level Indexing: A Game-Changer for Query Performance AWS recently launched field-level indexing for CloudWatch Logs, a feature that dramatically improves query speed and cost efficiency. 
By indexing specific log fields, you enable CloudWatch to skip full log scans and retrieve data directly from optimized indexes.\nHow Indexing Works Traditionally, querying logs required scanning every log entry in the specified time range—a process that could be slow and expensive for large datasets. With field-level indexing, CloudWatch creates structured metadata for selected fields (e.g., requestId, statusCode, userId), allowing the engine to:\nLocate Data Faster: Directly access indexed fields without scanning irrelevant logs. Reduce Scanned Data Volume: Minimize the amount of data processed per query. Benefits of Field-Level Indexing Improved Query Performance: Indexed queries execute up to 10x faster by bypassing full log scans. For example, filtering logs by a high-cardinality field like requestId becomes instantaneous. Lower Costs: CloudWatch charges based on the amount of data scanned. Indexing reduces scanned data, directly lowering costs—especially for frequent or complex queries. Lower Latency for Critical Workloads: Real-time applications (e.g., fraud detection, live monitoring) benefit from near-instantaneous results. Implementing Field-Level Indexing 1. Choosing Fields to Index Not all fields need indexing. Prioritize:\nHigh-Cardinality Fields (e.g., unique IDs, transaction IDs). Frequently Queried Fields (e.g., statusCode, userId). 2. Creating Indexes You create field indexes by creating field index policies. You can create account-level index policies that apply to your whole account, and you can also create policies that apply to only a single log group. For account-wide index policies, you can have one that applies to all log groups in the account. You can also create account-level index policies that apply to a subset of log groups in the account, selected by the prefixes of their log group names. 
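As a rough sketch of how a policy for a handful of high-value fields could be assembled programmatically, here is a small TypeScript helper. This is written under stated assumptions: the helper name is hypothetical, and the { "Fields": [...] } policy-document shape should be verified against the CloudWatch Logs field-index documentation before use:

```typescript
// Hypothetical helper: assemble a field index policy document from field paths.
// NOTE: the { Fields: [...] } document shape is an assumption; verify it
// against the CloudWatch Logs documentation.
function buildIndexPolicyDocument(fieldPaths: string[]): string {
  if (fieldPaths.length === 0) {
    throw new Error('At least one field path is required');
  }
  // De-duplicate paths so the policy stays minimal.
  return JSON.stringify({ Fields: Array.from(new Set(fieldPaths)) });
}

// High-cardinality, frequently queried fields from an order-placed event.
const policy = buildIndexPolicyDocument([
  'eventName',
  'requestParameters.orderId',
  'requestParameters.userId',
]);
console.log(policy);
```

The same document could then be passed to whichever API or console workflow creates the account-level or log-group-level policy.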
If you have multiple account-level policies in the same account, the log group name prefixes for these policies can\u0026rsquo;t overlap.\nBy default, CloudWatch creates indexes for the system-generated log fields listed below:\n@logStream @ingestionTime @requestId @type @initDuration @duration @billedDuration @memorySize @maxMemoryUsed @xrayTraceId @xraySegmentId 3. How to Query the Indexed Fields { \u0026#34;eventName\u0026#34;: \u0026#34;OrderPlaced\u0026#34;, \u0026#34;sourceIPAddress\u0026#34;: \u0026#34;203.0.113.42\u0026#34;, \u0026#34;userAgent\u0026#34;: \u0026#34;Mozilla/5.0 (Windows NT 10.0; Win64; x64)\u0026#34;, \u0026#34;requestParameters\u0026#34;: { \u0026#34;orderId\u0026#34;: \u0026#34;ORD123456789\u0026#34;, \u0026#34;userId\u0026#34;: \u0026#34;USR987654321\u0026#34;, \u0026#34;orderStatus\u0026#34;: \u0026#34;Processing\u0026#34;, \u0026#34;paymentDetails\u0026#34;: { \u0026#34;paymentMethod\u0026#34;: \u0026#34;Credit Card\u0026#34;, \u0026#34;transactionId\u0026#34;: \u0026#34;TXN654321ABC\u0026#34;, \u0026#34;amount\u0026#34;: 129.99, \u0026#34;currency\u0026#34;: \u0026#34;USD\u0026#34; }, \u0026#34;items\u0026#34;: [ { \u0026#34;itemId\u0026#34;: \u0026#34;ITM12345\u0026#34;, \u0026#34;productName\u0026#34;: \u0026#34;Wireless Headphones\u0026#34;, \u0026#34;quantity\u0026#34;: 1, \u0026#34;price\u0026#34;: 99.99 }, { \u0026#34;itemId\u0026#34;: \u0026#34;ITM67890\u0026#34;, \u0026#34;productName\u0026#34;: \u0026#34;USB-C Charging Cable\u0026#34;, \u0026#34;quantity\u0026#34;: 2, \u0026#34;price\u0026#34;: 15.00 } ], \u0026#34;shippingAddress\u0026#34;: { \u0026#34;name\u0026#34;: \u0026#34;Alice Johnson\u0026#34;, \u0026#34;street\u0026#34;: \u0026#34;123 Main St\u0026#34;, \u0026#34;city\u0026#34;: \u0026#34;New York\u0026#34;, \u0026#34;state\u0026#34;: \u0026#34;NY\u0026#34;, \u0026#34;zipCode\u0026#34;: \u0026#34;10001\u0026#34;, \u0026#34;country\u0026#34;: \u0026#34;USA\u0026#34; } } } Take the example above.\nHere we can create indexes on root 
level fields, nested JSON fields, and array fields.\nRoot Field Create an index with field path eventName\nfields @timestamp, @message | filterIndex eventName = \u0026#39;OrderPlaced\u0026#39; Nested JSON field Create an index with field path requestParameters.orderStatus\nfields @timestamp, @message | filterIndex requestParameters.orderStatus = \u0026#39;Processing\u0026#39; Array fields Create an index with field path requestParameters.items.0.itemId\nfields @timestamp, @message | filterIndex requestParameters.items.0.itemId = \u0026#39;ITM12345\u0026#39; Conclusion Field-level indexing transforms CloudWatch Logs Insights into a faster, more cost-effective tool for log analysis. By strategically indexing high-value fields, teams can accelerate troubleshooting, optimize costs, and scale their monitoring workflows.\nReady to Get Started?\nExplore the CloudWatch Logs Insights Query Syntax. Dive into the Field Indexing Documentation to configure your first index. Unlock the full potential of your logs today—AWS has just made it faster and cheaper than ever! 🚀\n","permalink":"https://vishnuprasad.blog/posts/cloud-watch-logs-field-level-indexes/","summary":"Introduction to Amazon CloudWatch Logs Insights Amazon CloudWatch Logs Insights is a powerful tool designed to help developers, DevOps engineers, and cloud administrators extract actionable intelligence from their log data. 
Whether you’re troubleshooting application errors, monitoring system health, or auditing security events, CloudWatch Logs Insights enables you to query logs in real time using a purpose-built query language.\nQuery Methods: CloudWatch Logs Insights uses a SQL-like syntax with support for commands like:","title":"Optimizing Amazon CloudWatch Insights Queries With Field Level Indexes for Efficient Log Analytics"},{"content":"This article is the continuation of one (link) of the previous articles, where I explained how to handle partial batch failures in SQS when using it with AWS Lambda. At the time of writing that article, there was no native way of handling this. Two feasible methods were either using a batch size of one or deleting each successful message after processing.\nQuick Recap: What Happens If One Of The Messages In The Batch Fails To Process?\nIf you don\u0026rsquo;t throw an error on your function and one of the messages doesn\u0026rsquo;t process correctly, the message will be lost If you catch and throw an error, the whole batch will be sent back to the queue, including the ones which were processed successfully. This batch will be retried multiple times based on the maxReceiveCount configuration if the error is not resolved. This will lead to the reprocessing of successful messages multiple times If you have configured a Dead Letter Queue with your SQS Queue, the failed batch will end up there once the ReceiveCount for a message exceeds the maxReceiveCount. The successfully processed messages will also end up in the DLQ. If the consumer of this DLQ can differentiate between failed and successful messages in the batch, we are good to go. Now, let\u0026rsquo;s explore how we can handle partial batches in a more effective and refined manner.\nAWS Lambda now supports partial batch response for SQS as an event source.
What does this mean?\nNow when you configure SQS as an event source with AWS Lambda, you can add a config called functionResponseType = ReportBatchItemFailures.\nEnabling this configuration tells Lambda to delete the successfully processed messages of a batch from the queue instead of retrying them, and to retry only the messages that failed.\nHowever, there\u0026rsquo;s a caveat: simply enabling this configuration doesn\u0026rsquo;t achieve our desired outcome. We also need to make changes at the code level within the Lambda function.\nHere’s how we do it.\nimport type { Context, SQSBatchResponse, SQSHandler, SQSRecord } from \u0026#39;aws-lambda\u0026#39;; const main: SQSHandler = async (event, context): Promise\u0026lt;SQSBatchResponse\u0026gt; =\u0026gt; { const failedMessageIds: string[] = []; for (const record of event.Records) { try { await processMessageAsync(record, context); } catch (error) { failedMessageIds.push(record.messageId); } } return { batchItemFailures: failedMessageIds.map(id =\u0026gt; { return { itemIdentifier: id } }) } }; async function processMessageAsync(record: SQSRecord, context: Context): Promise\u0026lt;void\u0026gt; { if (!record.body) { throw new Error(\u0026#39;No Body in SQS Message.\u0026#39;); } console.log(`Processed message ${record.body}`); } In the provided code snippet, we iterate through the batch of messages received from Lambda, catch all exceptions using a try-catch block, collect the IDs of the failed messages, and return them at the end of the invocation in the following manner.\n{ \u0026#34;batchItemFailures\u0026#34;: [ { \u0026#34;itemIdentifier\u0026#34;: \u0026#34;messageId4\u0026#34; }, { \u0026#34;itemIdentifier\u0026#34;: \u0026#34;messageId8\u0026#34; } ] } In conclusion, while discussing the integration of SQS with AWS Lambda, we\u0026rsquo;ve explored the challenge of handling partial batch failures.
In previous articles, we outlined two feasible methods: adjusting the batch size to one or individually deleting successfully processed messages. Despite the absence of a native solution at the time, these approaches provided effective workarounds. By addressing this issue, we aim to streamline and enhance the reliability of message-processing workflows within AWS Lambda and SQS.\n","permalink":"https://vishnuprasad.blog/posts/aws-sqs-lambda-partial-batch-failure-improved-way/","summary":"This article is the continuation of one (link) of the previous articles, where I explained how to handle partial batch failures in SQS when using it with AWS Lambda. At the time of writing that article, there was no native way of handling this. Two feasible methods were either using a batch size of one or deleting each successful message after processing.\nQuick Recap: What Happens If One Of The Messages In The Batch Fails To Process?","title":"AWS SQS With Lambda, Partial Batch Failure Handling: Improved Way"},{"content":"Recently I had a requirement at work to run a cron job every 10 sec or 30 sec to poll some third-party API to pull some data. There will be more than 40 of these crons running in parallel to fetch different sets of data from different APIs. The first option that comes to a serverless-first mindset like mine is to run these on Lambda functions.\nThe only native way in AWS to run a Lambda function on a schedule is an EventBridge trigger with cron expressions. The problem with this is that EventBridge only supports schedules as low as 1 minute. So for my use case, I cannot use EventBridge.\nSo how can we solve this in a proper serverless way?\nAfter googling around, the solutions people tried to achieve this with were step functions, SQS, etc. I decided to build something with AWS Step Functions. The main reason to choose this is that it can be a low-code solution, actually 0 code.
The only thing we would need is the IaC code to build the step function.\nThe Solution How it works The above state machine will be triggered by EventBridge every minute and will be passed an input {counter: 0}\n1. The first state in the above state machine (Pass state) transforms the input by adding 1 to it, using the Step Functions intrinsic function MathAdd in this case.\n2. The next state is a choice state, where we check the value of the counter. If the value of the counter is less than 6, the choice state goes to the next state and triggers a lambda function (a function that needs to run every 10 sec)\n3. Once the lambda function is triggered, it enters a wait state, where it waits for 10 sec (the interval at which we want to trigger the lambda).\n4. After the wait state it goes back to the start state and continues processing. On the first Pass state, it keeps incrementing the counter.\n5. The choice state will check the counter every time, and if the value of the counter is equal to 6, which is 10 * 6 (1 minute), it goes to the last pass state and ends the step-function process.\nSince I have 40+ lambdas to run every 10 sec, I could have added those 40 in a parallel step in the step function. But I decided to have a single function in the state machine, which is responsible for triggering all the other lambdas asynchronously using the AWS SDK. The main reason for choosing this approach is the ability to disable or enable a single cron, by keeping a list of functions in persistent storage/env vars, in case of downtime in the third-party APIs which the functions rely on to get data.\nSince the state machine is triggered using an EventBridge rule, we can enable or disable all the crons in case of an emergency (as a kill switch).\nThe source code for the above solution can be found here.\nConclusion\nThe above solution is built for the very specific use case I had.
So it may or may not work for all.\n","permalink":"https://vishnuprasad.blog/posts/how-to-run-aws-lambda-every-10-sec/","summary":"Recently I had a requirement at work to run a cron job every 10 sec or 30 sec to poll some third-party API to pull some data. There will be more than 40 of these crons running in parallel to fetch different sets of data from different APIs. The first option that comes to a serverless-first mindset like mine is to run these on Lambda functions.\nThe only native way in AWS to run a Lambda function on a schedule is an EventBridge trigger with cron expressions.","title":"How to Run AWS Lambda every 10 sec"},{"content":"The OSI Model defines a networking framework to implement protocols in seven layers. OSI stands for open system interconnection. It was introduced in 1984. Designed to be an abstract model and teaching tool, the OSI Model remains a useful tool for learning about today\u0026rsquo;s network technologies such as Ethernet and protocols like IP.\nThis model is divided into 7 Layers.\nThe data communication on the OSI model starts at the Application layer on the sender side and goes down to the physical layer. From there the data is sent to the physical layer on the receiver side and goes up to the application layer.\nThese 7 layers are grouped into two groups.\nThe first four layers (Application, Presentation, Session, Transport) are grouped together and called the Host layers. They carry application-level data and are responsible for accurate data delivery between devices. The host layers interoperate end to end.\nThe remaining three layers (Network, Data Link, Physical) are grouped together and called the Media Layers. These layers make sure the data is transmitted correctly to the destination.
These layers communicate peer to peer.\nLet\u0026rsquo;s get to know these layers in detail.\nLayer 7: Application The Application layer helps applications talk to the network services, or in other words, it acts as an interface between applications and the network.\nApplications like browsers, Email Clients, and Mobile apps use this layer to initiate a network connection.\nSome of the Application layer protocols are HTTP, FTP, SMTP, NFS, MQTT, RPC, RTMP, etc\nLayer 6: Presentation This layer operates as a data translator. It is responsible for making the data readable/presentable to and from the application layer.\nThe presentation layer has the following functionalities,\nTranslation\nCharacter code translation, converting ASCII to and from other formats, etc\nEncryption\nEncryption at the sender side and Decryption at the Receiver side\nCompression\nCompress the data received from the application layer before sending it to improve the speed of data transfer\nSome of the Formats and Encodings managed by the presentation layer are ASCII, JPEG, MPEG, MIDI, TLS, SSL, etc\nLayer 5: Session Layer The session layer is responsible for managing Authentication, Authorization, and Session restoration. The time between when the communication is opened and closed is known as the session. The session layer provides the mechanism for opening, closing, and managing a session between end-user application processes. It also ensures that the session stays open long enough to transfer all the data being exchanged, and then promptly closes the session in order to avoid wasting resources.\nSome of the popular session layer protocols are RPC, RTCP, SCP, L2TP, SMPP, etc\nLayer 4: Transport Layer Layer 4 is responsible for end-to-end communication between the two devices.
This includes taking data from the session layer and breaking it up into chunks called segments before sending it to layer 3. The transport layer on the receiving device is responsible for reassembling the segments into data the session layer can consume.\nThe transport layer is also responsible for flow control and error control. Flow control determines an optimal speed of transmission to ensure that a sender with a fast connection doesn’t overwhelm a receiver with a slow connection. The transport layer performs error control on the receiving end by ensuring that the data received is complete, and requesting retransmission if it isn’t.\nSome of the popular Transport layer protocols are: TCP, UDP, RDP\nLayer 3: Network layer The network layer is responsible for packet forwarding, including routing through intermediate routers, since it knows the addresses of neighboring network nodes; it also manages the quality of service (QoS) and recognizes and forwards local host domain messages to the Transport layer (layer 4).\nThe network layer breaks up segments from the transport layer into smaller units, called packets, on the sender’s device, and reassembles these packets on the receiving device. The network layer also finds the best physical path for the data to reach its destination; this is known as routing.\nThis is where IP source and destination addressing is defined and routing protocols are used to carry packets from source to destination across intermediate routers.\nSome of the popular Network layer protocols are: IPv4/IPv6, ICMP, IPsec
The data link layer takes packets from the network layer and breaks them into smaller pieces called frames. Like the network layer, the data link layer is also responsible for flow control and error control in intra-network communication (The transport layer only does flow control and error control for inter-network communications).\nThe data link layer is composed of two sublayers. The first sublayer is the media access control (MAC) layer. It\u0026rsquo;s used to acquire source and destination addresses (like the MAC address of the destination machine), which are inserted into the frame. The second sublayer is the logical link control (LLC) layer, which identifies network-layer protocols and handles frame synchronization and error checking.\nSome of the popular DataLink layer protocols are: ARP, Ethernet, PPP\nLayer 1: Physical Layer The physical layer translates logical communication requests from the data link layer into hardware-specific operations to effect transmission or reception of electronic signals. The Physical layer includes the physical equipment involved in the data transfer, such as the cables and switches. This is also the layer where the data gets converted into a bit stream, which is a string of 1s and 0s. The physical layer of both devices must also agree on a signal convention so that the 1s can be distinguished from the 0s on both devices.\nSome of the popular Physical layer technologies are: USB, Network modems, Ethernet Physical Layer, GSM, Bluetooth Physical Layer\n","permalink":"https://vishnuprasad.blog/posts/understanding-osi-and-tcp-ip-networking-model/","summary":"The OSI Model defines a networking framework to implement protocols in seven layers. OSI stands for open system interconnection. It was introduced in 1984.
Designed to be an abstract model and teaching tool, the OSI Model remains a useful tool for learning about today\u0026rsquo;s network technologies such as Ethernet and protocols like IP.\nThis model is divided into 7 Layers.\nThe data communication on the OSI model starts at the Application layer on the sender side and goes down to the physical layer.","title":"Understanding The OSI Networking Model"},{"content":"What is RDS Proxy Many applications, including those built on modern serverless architectures, can have many open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability. With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).\nIn this article, we will see how we can set up an RDS Proxy with IAM authentication enabled and connect to an Aurora Serverless V2 Cluster.\nAll the IaC for this tutorial is written in Terraform. You can use your own choice of IaC.\nNote: RDS Proxy cannot be used with an Aurora Serverless V1 Cluster. Use the RDS Data API instead.\n1. Set up the RDS Aurora V2 Cluster RDS Cluster Security Group\nMake sure the RDS Cluster Security group will accept the traffic from the RDS Proxy Security group.\nYou can also use the same security group for the RDS cluster and RDS Proxy. In this case, the security group should accept traffic from itself. With Terraform, we should add self = true in the ingress rule.
In this article, we are using different security groups for the Cluster and the Proxy.\nresource \u0026quot;aws_security_group\u0026quot; \u0026quot;rds_cluster\u0026quot; { name = \u0026quot;rds-postgres-cluster\u0026quot; description = \u0026quot;Allow Postgres traffic to rds\u0026quot; vpc_id = data.aws_ssm_parameter.vpc_id.value ingress { description = \u0026quot;TLS from VPC\u0026quot; from_port = 5432 to_port = 5432 protocol = \u0026quot;tcp\u0026quot; cidr_blocks = [data.aws_ssm_parameter.vpc_cidr.value] security_groups = [aws_security_group.rds_proxy.id] // Cluster should accept traffic from RDS proxy Security group } egress { from_port = 0 to_port = 0 protocol = \u0026quot;-1\u0026quot; cidr_blocks = [\u0026quot;0.0.0.0/0\u0026quot;] ipv6_cidr_blocks = [\u0026quot;::/0\u0026quot;] } } RDS Cluster\nThis will create an RDS Aurora V2 PostgreSQL Cluster.\nMake sure IAM authentication is enabled.\nresource \u0026quot;aws_rds_cluster\u0026quot; \u0026quot;demo\u0026quot; { cluster_identifier = \u0026quot;${var.service}-${var.stage}-demo\u0026quot; database_name = \u0026quot;demo\u0026quot; master_username = \u0026quot;hyatt_main\u0026quot; master_password = random_password.rds_demo.result engine = \u0026quot;aurora-postgresql\u0026quot; engine_mode = \u0026quot;provisioned\u0026quot; engine_version = \u0026quot;13.6\u0026quot; storage_encrypted = true kms_key_id = module.main_kms.alias_target_key_arn vpc_security_group_ids = [aws_security_group.rds_cluster.id] db_subnet_group_name = aws_db_subnet_group.cortex_back.id availability_zones = [\u0026quot;us-east-2a\u0026quot;, \u0026quot;us-east-2b\u0026quot;, \u0026quot;us-east-2c\u0026quot;] iam_database_authentication_enabled = true serverlessv2_scaling_configuration { max_capacity = 2.0 min_capacity = 1.0 } } #RDS Cluster Instance resource \u0026quot;aws_rds_cluster_instance\u0026quot; \u0026quot;demo\u0026quot; { identifier = \u0026quot;${var.stage}-demo-1\u0026quot; cluster_identifier = aws_rds_cluster.demo.id
instance_class = \u0026quot;db.serverless\u0026quot; engine = aws_rds_cluster.demo.engine engine_version = aws_rds_cluster.demo.engine_version publicly_accessible = false } Create Secret\nStore the DB credentials and endpoint details on AWS Secret Manager. This secret will be accessed by the RDS proxy to connect to the RDS cluster\nresource \u0026quot;aws_secretsmanager_secret_version\u0026quot; \u0026quot;rds_credentials\u0026quot; { secret_id = aws_secretsmanager_secret.rds_credentials.id secret_string = \u0026lt;\u0026lt;EOF { \u0026quot;username\u0026quot;: \u0026quot;${aws_rds_cluster.demo.master_username}\u0026quot;, \u0026quot;password\u0026quot;: \u0026quot;${random_password.rds_demo.result}\u0026quot;, \u0026quot;engine\u0026quot;: \u0026quot;postgres\u0026quot;, \u0026quot;host\u0026quot;: \u0026quot;${aws_rds_cluster.demo.endpoint}\u0026quot;, \u0026quot;port\u0026quot;: ${aws_rds_cluster.demo.port}, \u0026quot;dbClusterIdentifier\u0026quot;: \u0026quot;${aws_rds_cluster.demo.cluster_identifier}\u0026quot; } EOF } 2. Setup RDS Proxy Create an IAM Role\nCreate an IAM role for the RDS proxy, with permissions to access the secret from the secret manager that we created in the previous step. 
So the proxy can get the details to connect to the cluster.\nresource \u0026quot;aws_iam_role\u0026quot; \u0026quot;rds_proxy_secrets_access\u0026quot; { name = \u0026quot;rds_proxy_secrets_access\u0026quot; assume_role_policy = jsonencode({ Version = \u0026quot;2012-10-17\u0026quot; Statement = [ { Action = \u0026quot;sts:AssumeRole\u0026quot; Effect = \u0026quot;Allow\u0026quot; Sid = \u0026quot;\u0026quot; Principal = { Service = \u0026quot;rds.amazonaws.com\u0026quot; } }, ] }) inline_policy { name = \u0026quot;my_inline_policy\u0026quot; policy = jsonencode({ Version = \u0026quot;2012-10-17\u0026quot; Statement = [ { Action = [\u0026quot;secretsmanager:GetSecretValue\u0026quot;] Effect = \u0026quot;Allow\u0026quot; Resource = \u0026quot;${aws_secretsmanager_secret.rds_credentials.arn}\u0026quot; }, { Action = [\u0026quot;kms:Decrypt\u0026quot;] Effect = \u0026quot;Allow\u0026quot; Resource = \u0026quot;*\u0026quot; Condition = { StringEquals = { \u0026quot;kms:ViaService\u0026quot; = \u0026quot;secretsmanager.us-east-2.amazonaws.com\u0026quot; } } }, ] }) } } Security Group for RDS Proxy\nThis security group will allow traffic from clients like AWS Lambda Functions, Containers, EC2, etc to the RDS proxy. 
The egress rule will also allow the RDS proxy to talk to the Secrets Manager.\n# RDS Proxy Security Group resource \u0026quot;aws_security_group\u0026quot; \u0026quot;rds_proxy\u0026quot; { name = \u0026quot;rds-postgres-proxy-sg\u0026quot; description = \u0026quot;Allow Postgres traffic to rds from proxy\u0026quot; vpc_id = data.aws_ssm_parameter.vpc_id.value ingress { description = \u0026quot;TLS from VPC\u0026quot; from_port = 5432 to_port = 5432 protocol = \u0026quot;tcp\u0026quot; cidr_blocks = [data.aws_ssm_parameter.vpc_cidr.value] } egress { from_port = 0 to_port = 0 protocol = \u0026quot;-1\u0026quot; cidr_blocks = [\u0026quot;0.0.0.0/0\u0026quot;] ipv6_cidr_blocks = [\u0026quot;::/0\u0026quot;] } } Create RDS Proxy\n# RDS Proxy resource \u0026quot;aws_db_proxy\u0026quot; \u0026quot;demo\u0026quot; { name = \u0026quot;demo\u0026quot; debug_logging = true engine_family = \u0026quot;POSTGRESQL\u0026quot; idle_client_timeout = 1800 require_tls = true role_arn = aws_iam_role.rds_proxy_secrets_access.arn vpc_security_group_ids = [aws_security_group.rds_proxy.id] vpc_subnet_ids = [data.aws_ssm_parameter.private_subneta.value, data.aws_ssm_parameter.private_subnetb.value, data.aws_ssm_parameter.private_subnetc.value] auth { auth_scheme = \u0026quot;SECRETS\u0026quot; description = \u0026quot;example\u0026quot; iam_auth = \u0026quot;REQUIRED\u0026quot; secret_arn = aws_secretsmanager_secret.rds_credentials.arn } tags = { Name = \u0026quot;Name\u0026quot; Key = \u0026quot;demo\u0026quot; } } # RDS Proxy Target Group resource \u0026quot;aws_db_proxy_default_target_group\u0026quot; \u0026quot;demo\u0026quot; { db_proxy_name = aws_db_proxy.demo.name connection_pool_config { connection_borrow_timeout = 120 max_connections_percent = 100 max_idle_connections_percent = 50 } } # RDS Proxy Target resource \u0026quot;aws_db_proxy_target\u0026quot; \u0026quot;demo\u0026quot; { db_instance_identifier = aws_rds_cluster_instance.demo.id db_proxy_name = aws_db_proxy.demo.name 
target_group_name = aws_db_proxy_default_target_group.demo.name } Now we have the RDS cluster and Proxy set up. The next step is to write a lambda function to connect to the cluster.\n3. Lambda Function This Lambda will use IAM authentication to connect to the RDS proxy, and the proxy will manage the connection to the cluster.\nThis function needs to be in the same VPC and subnets that the RDS proxy and Cluster are in.\nIt also should have a security group that can talk to the RDS proxy. You could also reuse the RDS proxy\u0026rsquo;s security group.\n// rds.ts import { Signer } from '@aws-sdk/rds-signer'; import { Client } from 'pg'; // Get an IAM token. This will be used as the password to connect to the RDS Proxy const signer = new Signer({ hostname: 'Your RDS Proxy Endpoint', port: 5432, username: 'RDS Cluster Username', region: 'us-east-2', }); const token = await signer.getAuthToken(); const client = new Client({ user: 'RDS Cluster Username', host: 'Your RDS Proxy Endpoint', database: 'DB name', password: token, // IAM token port: 5432, ssl: true, }); await client.connect(); await client.query('SELECT * FROM your_table_name'); Ensure that the Lambda execution role includes rds-db:connect permissions as follows.\n{ \u0026quot;Version\u0026quot;: \u0026quot;2012-10-17\u0026quot;, \u0026quot;Statement\u0026quot;: [ { \u0026quot;Effect\u0026quot;: \u0026quot;Allow\u0026quot;, \u0026quot;Action\u0026quot;: [ \u0026quot;rds-db:connect\u0026quot; ], \u0026quot;Resource\u0026quot;: [ \u0026quot;arn:aws:rds-db:region:awsaccountnumber:dbuser:{proxyIdentifier from your rds proxy arn}/*\u0026quot; ] } ] } Conclusion With the above steps, you should be able to set up an RDS Cluster, an RDS Proxy which can connect to the cluster, and then a Lambda function that can connect to the RDS Proxy via IAM authentication.
If you face any issues while connecting to the Proxy, follow the troubleshooting guide below.\nTroubleshooting Steps to double-check if you run into issues while connecting to the Proxy.\nSecurity Groups: Check that the RDS proxy security group can access the RDS Cluster. If you are using the same security group for both proxy and cluster, ensure the security group has the rule to access itself. The RDS proxy security group\u0026rsquo;s outbound rule should be able to call Secrets Manager IAM Role Make sure the IAM role used by the RDS Proxy has the correct permissions to call Secrets Manager to get the RDS cluster credentials Client Code Make sure to use the RDS Proxy endpoint as the hostname on both the RDS Signer and the DB Client. Enable SSL on the DB Client Debug Logs Enable debug logs on the Proxy. This will add debug logs to CloudWatch, which can give insights into what is causing the problem. Useful Resources https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.html https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/rds-proxy.troubleshooting.html ","permalink":"https://vishnuprasad.blog/posts/how-to-set-up-aws-rds-proxy-with-iam-authentication-enabled-to-aurora-serverless-v2-cluster/","summary":"What is RDS Proxy Many applications, including those built on modern serverless architectures, can have many open connections to the database server and may open and close database connections at a high rate, exhausting database memory and compute resources. Amazon RDS Proxy allows applications to pool and share connections established with the database, improving database efficiency and application scalability.
With RDS Proxy, failover times for Aurora and RDS databases are reduced by up to 66%, and database credentials, authentication, and access can be managed through integration with AWS Secrets Manager and AWS Identity and Access Management (IAM).","title":"How to set up AWS RDS Proxy with IAM Authentication enabled to Aurora Serverless V2 Cluster"},{"content":"Introduction Cloudwatch is an integral part of the AWS ecosystem. Every service in AWS reports to cloudwatch for the service logs, application logs, metrics, etc.\nIn this article let\u0026rsquo;s discuss the cloudwatch custom metrics in detail.\nMetrics help us find the performance of the AWS services and the applications we run using these services. They also allow us to visualize the data with graphs and dashboards and create alarms based on the data reported to the metrics. If you are new to cloudwatch and cloudwatch metrics you can learn the basic concepts here\nCustom Metrics By default, AWS provides free metrics for most of its services. Apart from its own service metrics, AWS allows us to publish custom metrics, which means we can send our application-specific metrics to cloudwatch metrics. For example, we can push metrics for the duration of third-party API calls, or the count of status codes returned by an API, etc. Then we can create alarms and dashboards based on those metrics.\nNow let\u0026rsquo;s see how we can create custom metrics and put data points to them. There are three ways of creating custom cloudwatch metrics from your application.\nAWS API\u0026rsquo;s/SDK for cloudwatch metric Metric Log Filters Cloudwatch Embedded Metric Format Let\u0026rsquo;s see how we can create Custom metrics with the above three methods.
For the demo purpose, let\u0026rsquo;s assume we have an AWS lambda function that calls a weather API, and we want to create metrics around the API call duration and the count of status codes returned by the API endpoint.\nAWS API\u0026rsquo;s/SDK\u0026rsquo;s This method uses the AWS cloudwatch metrics SDK\u0026rsquo;s putMetricData API to create the custom metrics. This method is pretty straightforward, but the problem with this method is that it will incur an additional API call and it can block other API calls in your application while putting metrics to cloudwatch. This could affect the latency of your application (for eg: REST APIs). Also, each putMetricData api call involves cost. AWS will charge $0.01 per 1000 requests.\nExample\n'use strict'; const axios = require('axios') const AWS = require('aws-sdk') const cloudwatch = new AWS.CloudWatch(); module.exports.handler = async (event) =\u0026gt; { try { const startTime = new Date() const response = await axios.get('https://www.metaweather.com/api/location/2487956/2021/8/8') const apiStatusCode = response.status const endTime = new Date() console.log(apiStatusCode) const apiCallDuration = endTime - startTime const statusMetricParams = { MetricData: [ { MetricName: 'status_code', Dimensions: [ { Name: 'status_code', Value: `http_${apiStatusCode}` }, ], Timestamp: new Date(), Unit: 'Count', Value: 1, } ], Namespace: 'MetricFromSDK_1' }; await cloudwatch.putMetricData(statusMetricParams).promise(); const durationMetricParams = { MetricData: [ { MetricName: 'api_call_duration', Dimensions: [ { Name: 'api_name', Value: `location_api` }, ], Timestamp: new Date(), Unit: 'Milliseconds', Value: apiCallDuration, } ], Namespace: 'MetricFromSDK_1' }; await cloudwatch.putMetricData(durationMetricParams).promise(); } catch (error) { console.error('failed',error) } }; Metric Log Filters Metric log filters can search and filter data points needed to create metrics from Cloudwatch log groups. 
CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. When you create a metric from a log filter, you can also choose to assign dimensions and a unit to the metric. If you specify a unit, be sure to specify the correct one when you create the filter. Changing the unit for the filter later will have no effect.\nWith this method, the metrics are generated asynchronously. You don\u0026rsquo;t need any additional API calls from the application to generate the metrics. You just need to log the metrics data in a JSON format in the application and create a metric filter for each metric on the applications cloudwatch log group which filters the metric data from the logs based on the filter expressions defined.\nThe only downside I see with this method is the creation of metric filters on log groups every time you need to create a new metric. You can create them manually or use any IaC tool to generate them on demand.\nExample\n'use strict'; const axios = require('axios') module.exports.handler = async (event) =\u0026gt; { try { const startTime = new Date() const response = await axios.get('https://www.metaweather.com/api/location/2487956/2021/8/8') const apiStatusCode = response.status const endTime = new Date() const apiCallDuration = endTime - startTime console.log({ metricName: 'api_call_duration', metricValue: apiCallDuration }) console.log({ metricName: 'status_code_count', metricValue: apiStatusCode}) console.log({[`http_${apiStatusCode}`]: 1}) } catch (error) { console.error(error) } }; Once the logs are pushed to cloudwatch logs, the next step is to create a metric filter on the log from which we want to filter the data points to generate the metric.\nFollow the below screenshots to see how to create a metric filter based on the logs that we generate from the code. 
Once the metric filter is created properly and the filter patterns match the logs, it will create a metric and start pushing data points to it on every new log.\nCloudwatch Embedded Metric Format The CloudWatch embedded metric format is a JSON specification used to instruct CloudWatch Logs to automatically extract metric values embedded in structured log events. You can use CloudWatch to graph and create alarms on the extracted metric values.\nThis is my personal favorite method. This is an asynchronous process, which means it does not make any API call to generate metrics, and no metric filters are needed. All you have to do is log your metrics to cloudwatch in a specific JSON format as documented here. AWS will automatically parse these logs from cloudwatch log groups and generate the metrics for you.\nThere are two ways to use this method,\nDirectly log the metrics in JSON format as documented here Using the embedded metric NPM module from AWS (Examples available at the module\u0026rsquo;s GitHub page here) Below is an example of the first method.\n'use strict'; const axios = require('axios') module.exports.handler = async (event) =\u0026gt; { try { const startTime = new Date() const response = await axios.get('https://www.metaweather.com/api/location/2487956/2021/8/') const apiStatusCode = response.status const endTime = new Date() console.log(apiStatusCode) const apiCallDuration = endTime - startTime // Create Metric For Status Code Count console.log( JSON.stringify({ message: '[Embedded Metric]', // Identifier for metric logs in CW logs status_code_count: 1, // Metric Name and value status_code: `http_${apiStatusCode}`, // Dimension name and value _aws: { Timestamp: Date.now(), CloudWatchMetrics: [ { Namespace: `demo_2`, Dimensions: [['status_code']], Metrics: [ { Name: 'status_code_count', Unit: 'Count', }, ], }, ], }, }) ) // Create Metric For API Call Duration console.log( JSON.stringify({ message: '[Embedded Metric]', // Identifier for metric logs in CW logs
api_call_duration: apiCallDuration, // Metric Name and value api_name: 'location_api', // Dimension name and value _aws: { Timestamp: Date.now(), CloudWatchMetrics: [ { Namespace: `demo_2`, Dimensions: [['api_name']], Metrics: [ { Name: 'api_call_duration', Unit: 'Milliseconds', }, ], }, ], }, }) ) } catch (error) { console.error(error) } }; Below are the screenshots of the custom metrics we created with the above methods,\nName Space\nDimension\nMetric\nConclusion We have discussed three methods above; the first is synchronous and the other two are asynchronous. I personally prefer the asynchronous method because the metric generation process will not block the other API calls in the application.\nCloudwatch custom metrics can be used in the following scenarios,\nThird-party integration metrics (API call duration, success or failure count of processes, etc.) Custom metrics around events/processes in the application ","permalink":"https://vishnuprasad.blog/posts/cloudwath-custom-metrics-and-how-to-create-them/","summary":"Introduction Cloudwatch is an integral part of the AWS ecosystem. Every service in AWS reports to cloudwatch for the service logs, application logs, metrics, etc.\nIn this article let\u0026rsquo;s discuss the cloudwatch custom metrics in detail.\nMetrics help us with finding the performance of the AWS services and the applications we run using these services. It also allows us to visualize the data with graphs and dashboards and create alarms based on the data reported to metrics.","title":"Cloudwatch Custom Metrics With CloudWatch Embedded Metric Format"},{"content":"Update Nov 23, 2021,\nMy wish I mentioned at the end of this blog has been granted by AWS 🥳. AWS Lambda now supports partial batch response for SQS as an event source. Find the announcement here. 
I have written a new article on how to do it here https://vishnuprasad.blog/posts/aws-sqs-lambda-partial-batch-failure-improved-way/\nAmazon Web Services released SQS triggers for Lambda functions in June 2018. You can use an AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue. Lambda polls the queue and invokes your Lambda function synchronously with an event that contains queue messages. Lambda reads messages in batches and invokes your function once for each batch. When your function successfully processes a batch, Lambda deletes its messages from the queue.\nHow Does Lambda Process The Messages? By default, Lambda invokes your function as soon as records are available in the SQS queue. Lambda will poll up to 10 (can be increased) messages from your queue at once and will send that batch to the function. This means each invocation of Lambda will receive up to 10 messages from the queue to process. Once all messages in the batch are processed, Lambda will delete the batch from the queue and will start processing the next batch.\nWhat Happens If One Of The Messages In The Batch Fails To Process? If you don\u0026rsquo;t throw an error in your function and one of the messages didn\u0026rsquo;t process correctly, the message will be lost. If you catch and throw an error, the whole batch will be sent back to the queue, including the ones which were processed successfully. This batch will be retried multiple times based on the maxReceiveCount configuration if the error is not resolved. This will lead to reprocessing of successful messages multiple times. If you have a Dead Letter Queue configured with your SQS queue, the failed batch will end up there once the ReceiveCount for a message exceeds the maxReceiveCount. The successfully processed messages will also end up in the DLQ. If the consumer of this DLQ has the ability to differentiate between failure and success messages in the batch, we are good to go.
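The redrive behaviour described above can be sketched as a small helper. The queue ARN and counts below are hypothetical; this only illustrates how maxReceiveCount decides when a message (successful or not) lands in the DLQ.

```javascript
// Sketch of SQS redrive semantics: once a message's ApproximateReceiveCount
// exceeds maxReceiveCount, SQS moves it to the dead-letter queue. Because a
// failed batch is returned whole, successfully processed messages accumulate
// receive counts too and can end up in the DLQ alongside the failed ones.
const shouldMoveToDlq = (approximateReceiveCount, maxReceiveCount) =>
  approximateReceiveCount > maxReceiveCount;

// Example redrive policy as it would be set on the source queue
// (the ARN is a placeholder):
const redrivePolicy = {
  deadLetterTargetArn: 'arn:aws:sqs:us-east-1:123456789012:demo-dlq',
  maxReceiveCount: 3,
};

console.log(shouldMoveToDlq(4, redrivePolicy.maxReceiveCount)); // true
console.log(shouldMoveToDlq(2, redrivePolicy.maxReceiveCount)); // false
```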
How to Handle The Partial Failure? Use a batchSize of 1\nThis is useful in low-traffic scenarios. Only one message will be sent to Lambda on each invocation. But this limits the throughput of how quickly you are able to process messages.\nDelete successfully processed messages\nThis is the most effective method to handle this situation. Process the batch messages inside a try-catch block, store the receiptHandle for each successfully processed message in an array, and when you catch an error, call the sqs.deleteMessage API to delete those messages before throwing the error.\n'use strict'; const AWS = require('aws-sdk') const sqs = new AWS.SQS(); module.exports.handler = async event =\u0026gt; { const sqsSuccessMessages = []; try { const records = event.Records ? event.Records : [event]; for (const record of records) { await processMessageFunction(record) // Store successfully processed records sqsSuccessMessages.push(record); } } catch (e) { if (sqsSuccessMessages.length \u0026gt; 0) { await deleteSuccessMessages(sqsSuccessMessages); } throw new Error(e); } }; // Delete success messages from the queue in case of any failure while processing the batch // When there is no failure, Lambda will delete the whole batch once processed const deleteSuccessMessages = async messages =\u0026gt; { for (const msg of messages) { await sqs .deleteMessage({ QueueUrl: getQueueUrl({ sqs, eventSourceARN: msg.eventSourceARN }), ReceiptHandle: msg.receiptHandle }) .promise(); } }; const getQueueUrl = ({ eventSourceARN, sqs }) =\u0026gt; { const [, , , , accountId, queueName] = eventSourceARN.split(':'); return `${sqs.endpoint.href}${accountId}/${queueName}`; }; Conclusion\nAs of now, there is no support for handling partial batch failures in Lambda with SQS. It is totally up to you to decide if you want to handle it or not, depending on your application\u0026rsquo;s needs.\nRecently AWS added support for custom checkpointing for DynamoDB Streams and Kinesis.
This means customers now can automatically checkpoint records that have been successfully processed using a new parameter, FunctionResponseType. When customers set this parameter to ReportBatchItemFailures, if a batch fails to process, only records after the last successful message are retried. This reduces duplicate processing and gives customers more options for failure handling.\nThis gives hope that AWS may add something similar for SQS as well🙏\n","permalink":"https://vishnuprasad.blog/posts/aws-sqs-partial-failure-handling/","summary":"Update Nov 23, 2021,\nMy wish I mentioned at the end of this blog has been granted by AWS 🥳. AWS Lambda now supports partial batch response for SQS as an event source. Find the announcement here. I have written a new article on how to do it here https://vishnuprasad.blog/posts/aws-sqs-lambda-partial-batch-failure-improved-way/\nAmazon Web Services released SQS triggers for Lambda functions in June 2018. You can use an AWS Lambda function to process messages in an Amazon Simple Queue Service (Amazon SQS) queue.","title":"AWS SQS With Lambda, Partial Batch Failure Handling"},{"content":"The easiest and most common way of adding application configurations (e.g. feature toggle flags, secrets, fallback URLs, etc.) to your serverless applications is by setting them as Lambda environment variables. These variables are set on the Lambda functions from a configuration file in your code (e.g. serverless.yml) or read from Secrets Manager, Parameter Store, etc., and exported during the deployment on your CI/CD pipeline.\nThe problem with this approach: suppose you have a serverless application that has multiple Lambda functions under it. These Lambda functions are set to do individual tasks. For example, lambda1 is set to call a third-party payment service, and it reads the API URLs and keys from Lambda environment variables.
Now, changing the API URL or key for this service would result in a significantly longer and more involved build, test, and deployment process. Each and every time you make a change in the configuration, you have to repeat the build, test, deploy process.\nIn this article, we will discuss how to decouple your application configuration from your application code and how to deploy the changes to the service without redeploying the application code base every time there is a change in the configurations, using AWS AppConfig.\nWhat is AppConfig? AWS AppConfig can be used to create, manage, and quickly deploy application configurations. AppConfig supports controlled deployments to applications of any size and includes built-in validation checks and monitoring. You can use AppConfig with applications hosted on EC2 instances, AWS Lambda, containers, mobile applications, or IoT devices.\nAWS AppConfig helps simplify the following tasks:\nConfigure\nSource your configurations from Amazon Simple Storage Service (Amazon S3), AWS AppConfig hosted configurations, Parameter Store, Systems Manager Document Store. Use AWS CodePipeline integration to source your configurations from Bitbucket Pipelines, GitHub, and AWS CodeCommit.\nValidate\nWhile deploying application configurations, a simple typo could cause an unexpected outage. Prevent errors in production systems using AWS AppConfig validators. AWS AppConfig validators provide a syntactic check using a JSON schema or a semantic check using an AWS Lambda function to ensure that your configurations deploy as intended. Configuration deployments only proceed when the configuration data is valid.\nDeploy and monitor\nDefine deployment criteria and rate controls to determine how your targets receive the new configuration. Use AWS AppConfig deployment strategies to set deployment velocity, deployment time, and bake time.
Monitor each deployment to proactively catch any errors using AWS AppConfig integration with Amazon CloudWatch Events. If AWS AppConfig encounters an error, the system rolls back the deployment to minimize the impact on your application users.\nAWS AppConfig can help you in the following use cases:\nApplication tuning – Introduce changes carefully to your application that can be tested with production traffic. Feature toggle – Turn on new features that require a timely deployment, such as a product launch or announcement. Allow list – Allow premium subscribers to access paid content. Operational issues – Reduce stress on your application when a dependency or other external factor impacts the system. DEMO\nThe generic way of using AppConfig with your Lambda function is to use the AppConfig SDK in the code to fetch the configuration. The problem with this approach is that each and every Lambda execution will call the AppConfig APIs, which will incur additional costs, and it might also hit the AppConfig service limits when the traffic is high.\nTo avoid calling the AppConfig API on each request, Amazon has come up with a solution. They have created a Lambda Extension for AppConfig.\nWhen the AWS AppConfig extension starts, two main components are created:\nThe first, the proxy, exposes a localhost HTTP endpoint that can be called from your Lambda code to retrieve a piece of configuration data. The proxy does not call AWS AppConfig directly. Instead, it uses an extension-managed cache that contains the freshest configuration data available. Because the data is already available locally, the HTTP endpoint can be called on every invocation of your function (or even multiple times if you have long-running functions that you want to update mid-flight).\nThe second component, the retriever, works in the background to keep your configuration data fresh. It checks for potential updates even before your code asks for it.
It tracks which configurations your function needs, whether the data is potentially stale, and makes appropriate calls to AWS AppConfig to retrieve fresher data, if available. It ensures the right metadata is passed to avoid any unnecessary data delivery and supports various types of rollout strategies.\nThe determination of “how fresh is fresh” can be configured using Lambda environment variables. These configs rarely change.\nAWS_APPCONFIG_EXTENSION_POLL_INTERVAL_SECONDS, which defaults to 45 seconds, specifies the frequency with which the extension checks for new configuration data. AWS_APPCONFIG_EXTENSION_POLL_TIMEOUT_MILLIS, which defaults to 3000 milliseconds, specifies the maximum time the extension waits for a piece of configuration data before giving up and trying again during the next poll interval. AWS_APPCONFIG_EXTENSION_HTTP_PORT, which defaults to 2772, specifies the port that the proxy’s HTTP endpoint uses. Now, let\u0026rsquo;s create a simple REST API for the demo.\nFirst, we need to create a new application in AppConfig. For that go to AWS Console → Systems Manager → AppConfig\nCreate an Application Create an Environment Create a Configuration Profile and add some configs Deploy the configuration To the Code\nConsider an e-commerce API where you want to change the discounts for new customers; the discount value is something that can vary often.
By using AppConfig, we can update that without any changes or deployments to our application code.\nBelow is the sample code for our demo app.\nhandler.js\nconst http = require('http'); const axios = require('axios') exports.demo = async (event) =\u0026gt; { let configData = await axios.get(\u0026quot;http://localhost:2772/applications/DemoApp/environments/develop/configurations/generalConfig\u0026quot;) let discountPercentage = configData.data.discountPercentage const response = { statusCode: 200, body: `You have ${discountPercentage}% off on your first purchase`, }; return response; }; http://localhost:2772/applications/DemoApp/environments/develop/configurations/generalConfig This URL is the HTTP endpoint for the proxy running on the Lambda extension. Our Lambda will call this on every execution to get the latest configurations.\nserverless.yml\nservice: appconfig-poc provider: name: aws runtime: nodejs12.x region: us-west-2 iamRoleStatements: - Effect: 'Allow' Action: - 'appconfig:GetConfiguration' Resource: '*' ## These are the lambda extension configurations environment: AWS_APPCONFIG_EXTENSION_POLL_INTERVAL_SECONDS: 30 AWS_APPCONFIG_EXTENSION_POLL_TIMEOUT_MILLIS: 3000 AWS_APPCONFIG_EXTENSION_HTTP_PORT: 2772 functions: demo: handler: handler.demo ## AWS AppConfig Lambda Layer ## Choose the layer for your region from here https://docs.aws.amazon.com/appconfig/latest/userguide/appconfig-integration-lambda-extensions.html layers: - arn:aws:lambda:us-west-2:359756378197:layer:AWS-AppConfig-Extension:18 events: - http: path: getDiscount method: get Deploy the code. You will get an HTTP endpoint.\n➜ sls deploy --stage dev Serverless: Running \u0026quot;serverless\u0026quot; installed locally (in service node_modules) Serverless: Packaging service... Serverless: Excluding development dependencies... Serverless: Uploading CloudFormation file to S3... Serverless: Uploading artifacts... Serverless: Uploading service appconfig-poc.zip file to S3 (135.58 KB)...
Serverless: Validating template... Serverless: Updating Stack... Serverless: Checking Stack update progress... ............... Serverless: Stack update finished... Service Information service: appconfig-poc stage: dev region: us-west-2 stack: appconfig-poc-dev resources: 11 api keys: None endpoints: GET - https://xxxxxx.execute-api.us-west-2.amazonaws.com/dev/getDiscount functions: demo: appconfig-poc-dev-demo layers: None Serverless: Removing old service artifacts from S3... Now once you call the endpoint, you will get a message like this.\n➜ curl https://xxxxxx.execute-api.us-west-2.amazonaws.com/dev/getDiscount You have 5% off on your first purchase Now change the value for discountPercentage on AppConfig and deploy.\nGo to the configuration profile and create a new version of the configuration Deploy the new version of the config\nOnce the deployment is finished, hit the endpoint to see the updated discount percentage.\n➜ curl https://xxxxx.execute-api.us-west-2.amazonaws.com/dev/getDiscount You have 10% off on your first purchase We have successfully updated our application config without changing/deploying our codebase 🎉\nConclusion The demo above is a very simple use case of AWS AppConfig. But there are many other things we can achieve with it. AWS customers are using this for multiple use cases like,\nFeature flags: You can deploy features onto production that are hidden behind a feature flag. Toggling the feature flag turns on the feature immediately, without doing another code deployment. Allow lists: You might have some features in your app that are for specific callers only. Using AWS AppConfig, you can control access to those features and dynamically update access rules without another code deployment. The verbosity of logging: You can control how often an event occurs by setting a variable limit. For example, you can adjust the verbosity of your logging during a production incident to better analyze what is going on.
You would not want to do another full deployment in the case of a production incident, but a quick configuration change gets you what you need. ","permalink":"https://vishnuprasad.blog/posts/decoupling-application-configuration-from-application-code-in-your-serverless-application-with-aws-appconfig/","summary":"The easiest and most common way of adding application configurations(Eg: feature toggle flags, secrets, fallback URLs, etc) with your serverless applications is by setting them as lambda environment variables. These variables are set to the lambda functions from a configuration file in your code (eg: serverless.yml) or read from secrets manager or parameter store etc and exported during the deployment on your CICD pipeline.\nThe problem with this approach is, suppose you have a serverless application that has multiple lambda functions under it.","title":"Decoupling Application configuration from application code in your serverless application with AWS Appconfig"},{"content":"Serverless is great: it helps companies to focus on product and application development without worrying much about the infrastructure and scaling. But there are some soft and hard limits for every AWS service which we need to keep in mind when we are developing a serverless application. These limits are set to protect the customer as well as the provider against any unintentional use.\nIn this article, we will talk about some of those limits and how to avoid them.\nDeployment Limits Lambda 1. 50 MB: Function Deployment Package Size, 250 MB: Size of code/dependencies that you can zip into a deployment package (uncompressed .zip/.jar size)\nThere is a limit of 50MB on the package size of the code which we upload to Lambda.\nThis limit is applied when we try to create or update a Lambda function with the AWS CLI.\nIf you try to create the function from the AWS Web console, it is limited to 10MB.
We can avoid these limitations by uploading the ZIP file to S3 and creating the function from there.\nThe total size of code/dependencies we can compress into the deployment package is limited to 250MB. In simple REST API cases, we may not hit these limits. But when we have to use binaries like FFmpeg or ML/AI libraries like scikit-learn or nltk with Lambda, we could hit this limit, as these dependencies are large in size.\nAre these soft limits? : NO\nHow to Avoid?\n- Use the Serverless framework\nBy default, the Serverless framework zips and uploads your code to S3 first, then deploys it to Lambda via CloudFormation.\n- Use Webpack\nWebpack is a well-known tool for creating bundles of assets (code and files). Webpack helps to reduce and optimize the packaging size by\nIncluding only the code used by your function Optimizing your NPM dependencies Using a single file for your source code Optimizing and reducing the package size will also help to reduce the cold start of the functions.\n2. 75GB: Total Size Of All Deployment Packages That Can Be Uploaded Per Region\nThis limit is a region-wide soft limit. It can be increased by a Service Quotas limit increase. Most of the time, people hit this limit when they have a huge number of Lambda functions and every time new code is deployed a new version of the Lambda is created. Each version has its own deployment package, which is counted towards this limit.\nIs it a soft limit? : YES\nHow to Avoid?\nVersion your code and do not version functions (except for Lambda@Edge, where versioning is a must). - Remove older or unused versions If you are updating the function via the AWS CLI, use the --no-publish flag to avoid creating a new version on update. Keep only the latest version of the Lambda function. Remove the older versions, and if we really need to keep a specific older version of the function, add an ALIAS to those versions and remove all the unused versions. 3.
512MB: Amount of data that can be stored inside the Lambda instance during execution (/tmp)\nIf you want to download a file and store it in the /tmp directory to process it during the execution, this limit will be applied. You can store only up to 512 MB of files in this directory, whether it is a single file or multiple files.\nIs it a soft limit? : NO\nHow to Avoid?\nUse Node.js streams to read, process, and write files without loading the whole file into the Lambda\u0026rsquo;s filesystem. 4. 6MB: Lambda payload limit\nThis means we cannot POST more than 6MB of data to Lambda through API Gateway. So if we build an image or video uploading API to upload files to S3, we are limited to this 6MB limit.\nIs it a soft limit? : NO\nHow to Avoid?\n- Use a pre-signed S3 URL.\nIn this case, the client makes an HTTP GET request to API Gateway, the Lambda function generates and returns a pre-signed S3 URL, and the client uploads the image to S3 directly using the pre-signed S3 URL.\nCloudformation If you are using the Serverless framework for deploying your application, as your application grows you may hit some of the CloudFormation limits when you deploy, as the Serverless framework uses CloudFormation behind the scenes for deploying services.\nA CloudFormation stack can have at most 500 resources Let\u0026rsquo;s take an example of a backend application with multiple REST APIs. This application may have multiple Lambda functions, API Gateway endpoints, methods, custom domains, SNS topics, DynamoDB tables, S3 buckets, etc. When we deploy this application to AWS with CloudFormation, it will create CloudFormation resources for all the mentioned services in a single CloudFormation stack. There will be multiple resources created per service (IAM roles, IAM policies, CloudWatch log groups).
In the case of a single Lambda function, the following resources will be created per function:\nAWS::Lambda::Function AWS::Lambda::Version AWS::Logs::LogGroup Plus, additional resources will be added if we attach event sources like API Gateway or SNS to the function. When the application grows, the total number of resources will also increase. And when it hits the 500 limit, deployments will start failing.\nIs it a soft limit? : NO\nHow to Avoid?\n- Use CloudFormation Nested Stacks to Reuse Common Template Patterns,\nAs your infrastructure grows, common patterns can emerge in which you declare the same components in each of your templates. You can separate out these common components and create dedicated templates for them. That way, you can mix and match different templates but use nested stacks to create a single, unified stack. The nested solutions may buy you a little time to avoid this limit, but as the stack grows it will be hard to manage.\n- Use the serverless split stack plugin\nThis plugin migrates CloudFormation resources into nested stacks in order to work around the 200 resource limit. There are built-in migration strategies that can be turned on or off, as well as defining your own custom migrations.\nThe recommended way is to keep your services (multiple microservices) as small as you can. Keep an eye on the number of resources every time you deploy the stack, and when you think the stack may hit the limit, break some of its features out into a separate service.\nAn IAM role policy can have up to 10,240 characters This is one of the other limits we may hit when the stack grows. This happens when the whole application uses a single IAM role. By default, serverless will include all the basic and custom IAM policies for all the functions used by the application in one single IAM role.\nHow to Avoid?\n- Create individual IAM roles for each function in the CloudFormation stack instead of a single large IAM role for the whole stack.
Using per-function roles is a recommended best practice to achieve and maintain the least privilege setup for your Lambda functions.\n- With the serverless framework, there are a couple of good plugins that help to do this.\nSummary\nIt is a good practice to know all the limits of all the AWS services that you are going to use when designing your infrastructure and develop the application. This will help us with the following,\n- Avoid redesigning the architecture in the future when we hit the hard limit\n- Design scalable and fault-tolerant serverless infrastructure by planning and implementing workarounds to avoid hitting the limits or calculating and increasing the soft limit of each service as per the requirement of the application\n","permalink":"https://vishnuprasad.blog/posts/aws-limits-to-keep-in-mind-while-developing-a-serverless-application/","summary":"Serverless is great, it helps companies to focus on product and application development without worrying much about the infrastructure and scaling. But there are some soft and hard limits for every AWS service which we need to keep in mind when we are developing a serverless application. These limits are set to protect the customer as well as the provider against any unintentional use.\nIn this article, we will talk about some of those limits and how to avoid them.","title":"AWS Service Limits To Keep In Mind While Developing A Serverless Application"},{"content":"Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don\u0026rsquo;t have to worry about hardware provisioning, setup, and configuration, replication, software patching, or cluster scaling. 
DynamoDB also offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data.\nThis cheat sheet will cover the most commonly used scenarios of data operations in DynamoDB with AWS DynamoDB Document client for JavaScript/Nodejs. The DynamoDB Document Client is the easiest and most preferred way to interact with a DynamoDB database from a Nodejs or JavaScript application.\nGETTING STARTED Install npm install aws-sdk\nConfigure const AWS = require(\u0026#39;aws-sdk\u0026#39;) const ddb = new AWS.DynamoDB.DocumentClient() CREATE ITEM Let\u0026rsquo;s create a new item for the new user. This user will have one album and one image in the album.\nasync function createItem (buildInfo) { console.log(\u0026#39;Creating new item\u0026#39;) const params = { TableName: tableName, Item: { \u0026#39;userId\u0026#39;: \u0026#39;johnDoe\u0026#39;, \u0026#39;createdAt\u0026#39;: 1598362623, \u0026#39;updatedAt\u0026#39;: 1598362623, \u0026#39;albums\u0026#39;: { \u0026#39;album1\u0026#39;: { \u0026#39;id\u0026#39;: \u0026#39;album-kjuijhs342\u0026#39;, \u0026#39;createdAt\u0026#39;: 1598362623, \u0026#39;updatedAt\u0026#39;: 1598362623, \u0026#39;description\u0026#39;: \u0026#39;My First Album\u0026#39;, \u0026#39;Title\u0026#39;: \u0026#39;Holidays\u0026#39;, \u0026#39;images\u0026#39;: { \u0026#39;img-1\u0026#39;: { \u0026#39;filename\u0026#39;: \u0026#39;johndoe/album1/e8TtkC5xyv4.jpg\u0026#39;, \u0026#39;s3Url\u0026#39;: \u0026#39;s3://photo-bucket/johndoe/album1/e8TtkC5xyv4.jpg\u0026#39;, \u0026#39;tags\u0026#39;: [\u0026#39;nature\u0026#39;, \u0026#39;animals\u0026#39;] } } } } } } try { await ddb.put(params).promise() } catch (error) { console.log(error) } } SCAN Scan and returns all items in a table\nasync function scan() { const params = { TableName: tableName } try { await ddb.scan(params).promise() } catch (error) { console.error(error) } } GET ITEM Get a single item from the table\nasync function getItem() { const 
params = { TableName: tableName, Key: { \u0026#39;userId\u0026#39;: \u0026#39;johnDoe\u0026#39; } } try { await ddb.get(params).promise() } catch (error) { console.error(error) } } GET ONLY SOME DATA FROM AN ITEM this will return only the tags from img1 and img2 in the result.\nasync function getSome() { const params = { TableName: tableName, ProjectionExpression: `albums.album1.images.#imageName1.tags, albums.album1.images.#imageName2.tags`, ExpressionAttributeNames: { \u0026#39;#imageName1\u0026#39;: \u0026#39;img-1\u0026#39;, \u0026#39;#imageName2\u0026#39;: \u0026#39;img-2\u0026#39; }, Key: { \u0026#39;userId\u0026#39;: \u0026#39;johnDoe\u0026#39;, } } try { await ddb.get(params).promise() } catch (error) { console.error(error) } } DELETE ITEM deletes a single item from the table\nasync function deleteItem () { const params = { TableName: tableName, Key: { userId: \u0026#39;johnDoe\u0026#39;, } } try { await ddb.delete(params).promise() } catch (error) { console.error(error) } } QUERY Query an item from a table\nasync function query () { const params = { TableName: tableName, KeyConditionExpression: \u0026#39;userId = :id \u0026#39;, ExpressionAttributeValues: { \u0026#39;:id\u0026#39;: \u0026#39;johnDoe\u0026#39; } } try { await ddb.query(params).promise() } catch (error) { console.error(error) } } UPDATE A TOP-LEVEL ATTRIBUTE Let\u0026rsquo;s update the updatedAt key\nasync function updateItem () { const params = { TableName: tableName, Key: { userId: \u0026#39;johnDoe\u0026#39; }, UpdateExpression: \u0026#39;set updatedAt = :newUpdatedAt\u0026#39;, ExpressionAttributeValues: { \u0026#39;:newUpdatedAt\u0026#39;: 1598367687 }, ReturnValues: \u0026#39;UPDATED_NEW\u0026#39; } try { await ddb.update(params).promise() } catch (error) { console.error(error) } } UPDATE A NESTED ATTRIBUTE Here we will add a new attribute(size) to img-1 of album1\nasync function updateNestedAttribute() { const params = { TableName: tableName, Key: { userId: \u0026#39;johnDoe\u0026#39; 
}, UpdateExpression: `set albums.album1.images.#img.size = :newImage`, ConditionExpression: `attribute_not_exists(albums.album1.images.#img.size)`, // only creates if the size attribute does not exist ExpressionAttributeNames: { \u0026#39;#img\u0026#39;: \u0026#39;img-1\u0026#39; }, ExpressionAttributeValues: { \u0026#39;:newImage\u0026#39;: 2048 } } try { await ddb.update(params).promise() } catch (error) { console.error(error) } } NOTE: If an attribute name begins with a number or contains a space, a special character, or a reserved word, then you must use an expression attribute name to replace that attribute\u0026rsquo;s name in the expression. In the above example, the img-2 attribute has - in its name. So if we set the update expression to set albums.album1.images.img-2 = :newImage, it will throw an error.\nAPPEND TO A NESTED OBJECT Here we will add a new image to album1\nasync function appendToAnObject () { const newImage = { \u0026#39;filename\u0026#39;: \u0026#39;johndoe/album1/food-826349.jpg\u0026#39;, \u0026#39;s3Url\u0026#39;: \u0026#39;s3://photo-bucket/johndoe/album1/food-826349.jpg\u0026#39;, \u0026#39;tags\u0026#39;: [\u0026#39;burger\u0026#39;, \u0026#39;food\u0026#39;] } const params = { TableName: tableName, Key: { userId: \u0026#39;johnDoe\u0026#39; }, UpdateExpression: `set albums.album1.images.#image = :newImage`, ExpressionAttributeNames: { \u0026#39;#image\u0026#39;: \u0026#39;img-2\u0026#39; }, ExpressionAttributeValues: { \u0026#39;:newImage\u0026#39;: newImage } } try { await ddb.update(params).promise() } catch (error) { console.error(error) } } APPEND TO A LIST Here we will add a couple of tags to one of the images.
Tags are stored as an array.\nasync function appendToList() { const params = { TableName: tableName, Key: { userId: \u0026#39;johnDoe\u0026#39; }, UpdateExpression: \u0026#39;SET albums.album1.images.#image1.tags = list_append(albums.album1.images.#image1.tags, :newTags)\u0026#39;, ExpressionAttributeNames: { \u0026#39;#image1\u0026#39;: \u0026#39;img-1\u0026#39; }, ExpressionAttributeValues: { \u0026#39;:newTags\u0026#39;: [\u0026#39;burger\u0026#39;, \u0026#39;pizza\u0026#39;] } } try { await ddb.update(params).promise() } catch (error) { console.error(error) } } ","permalink":"https://vishnuprasad.blog/posts/dynamodb-cheatsheet-for-nodejs-javascript/","summary":"Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB lets you offload the administrative burdens of operating and scaling a distributed database so that you don\u0026rsquo;t have to worry about hardware provisioning, setup, and configuration, replication, software patching, or cluster scaling. DynamoDB also offers encryption at rest, which eliminates the operational burden and complexity involved in protecting sensitive data.\nThis cheat sheet will cover the most commonly used scenarios of data operations in DynamoDB with AWS DynamoDB Document client for JavaScript/Nodejs.","title":"DynamoDB CheatSheet For NodeJS/JavaScript"},{"content":"Load testing is an important part of designing any type of application, whether it is a traditional EC2-based, container-based, or completely serverless application.\nWhy is Load Testing important? Load testing will help us to find the following\n- How fast is the system\n- How much load can the system handle\n- Under what conditions will the system fail\n- Determine our application’s capabilities by measuring its response time, throughput, CPU utilization, latency, etc. during average and heavy user load.
This will eventually help in determining the infrastructure needs as the system scales upward.\n- It gives us an opportunity to find strange behavior or surprises when we subject an application to an insane amount of load (stress testing). Strange behaviors include request timeouts, IO exceptions, memory leaks, or any security issues.\nChoosing a Load testing tool or framework There are many great load testing frameworks available. Some of the leading tools are,\n- JMeter\n- Locust\n- Artillery.io\nEach of the above tools provides a common core of features, plus some additional features and different methods of load testing. But the main limitation of these tools is that the throughput they can generate towards your application is limited by the host system\u0026rsquo;s memory and CPU capacity. If you want to test high-traffic, rapid ramp-up scenarios, it\u0026rsquo;s not possible to do so from your laptop or PC. You could use a high-end PC or run the tools on a cloud virtual machine, but that can be expensive, and some of the above tools come with a GUI, which cannot easily be accessed on VMs.\nSo how can we do load tests at scale without having a high-end testing infrastructure?\nLoad Testing Serverless Applications with Serverless Artillery Serverless-artillery is a combination of the Serverless Framework and artillery.io\nCombine serverless with artillery and you get serverless-artillery for instant, cheap, and easy performance testing at scale\nServerless-artillery makes it easy to test your services for performance and functionality quickly, easily, and without having to maintain any servers or testing infrastructure.\nUse serverless-artillery if 1. You want to know if your services (either internal or public) can handle different amounts of traffic load (i.e. performance or load testing).\n2. You want to test if your services behave as you expect after you deploy new changes (i.e. acceptance testing).\n3. 
You want to constantly monitor your services over time to make sure the latency of your services is under control (i.e. monitoring mode).\nHow It Works - Serverless-artillery is installed and run on your local machine. From the command line, run slsart --help to see the various serverless-artillery commands\n- It takes your JSON or YAML load script `script.yml` that specifies,\n- test target/URL/endpoint/service - load progression - and the scenarios that are important for your service to test. Let\u0026rsquo;s See It in Action *Load Testing A Sample Application*\nIn this example, we will load test a serverless API with a single GET endpoint, built with AWS API Gateway, Lambda, and DynamoDB\n*Installing Serverless Artillery on local machine*\n*Prerequisite*\n- NodeJS v8 +\n- Serverless Framework CLI\nnpm install -g serverless Installing serverless-artillery\nnpm install -g serverless-artillery To check that the installation succeeded, run:\nslsart --version We can also install it on a [docker container](https://github.com/Nordstrom/serverless-artillery#installing-in-docker)\n*Setting up the Load Test Configuration*\nmkdir load-test cd load-test slsart script // this will create script.yml config: target: \u0026quot;https://xxxxxxx.execute-api.us-east-1.amazonaws.com\u0026quot; phases: - duration: 300 arrivalRate: 500 rampTo: 10000 scenarios: - flow: - get: url: \u0026quot;/dev/get?id=john\u0026quot; Understanding `script.yml`\nconfig:\nThe config section defines the target (the hostname or IP address of the system under test), the load progression, and protocol-specific settings such as HTTP response timeouts or [Socket.io](http://socket.io/) transport options\ntarget:\nthe URI of the application under test. 
For an HTTP application, it\u0026rsquo;s the base URL for all requests\nphases:\nspecify the duration of the test and the frequency of requests\nscenarios:\nThe scenarios section contains definitions for one or more scenarios for the virtual users that Artillery will create.\nflow:\na \u0026ldquo;flow\u0026rdquo; is an array of operations that a virtual user performs, e.g. GET and POST requests for an HTTP-based application\n*Deploy to AWS*\nslsart deploy --stage \u0026lt;your-unique-stage-name\u0026gt; Start the load test\nslsart invoke --stage \u0026lt;your-unique-stage-name\u0026gt; The above \u0026ldquo;script.yml\u0026rdquo; will try to generate 500 user requests per second towards the API Gateway endpoint and ramp up to 10,000 RPS over a period of 5 minutes\nAnd the result of the test will look like this in a CloudWatch dashboard.\nAs we can see in the above graph, a lot of requests were throttled by Lambda. That is because of Lambda\u0026rsquo;s default concurrency limit of 1000.\nHow Load Testing Helps Serverless Applications One of the important insights we can get from load testing serverless applications is that it helps uncover the default soft limits or hidden limits of serverless services. 
By knowing this we will be able to architect our application to handle high traffic without throttling requests or hitting AWS limits.\nIt also helps to find out the following things,\n- Lambda Insights\n- To find concurrency limits - To find out the timeouts - To find out Memory Exceptions - To find out Cold starts (You can warm up or add provisioned concurrency to those functions) - API Gateway\n- To understand the request throttling limits, increase or decrease them according to application needs - DynamoDB\n- To get the read/write usage metrics and do capacity planning for handling different levels of traffic ","permalink":"https://vishnuprasad.blog/posts/load-testing-serverless-sls-artillery/","summary":"Load testing is an important part of designing any type of application, whether it is traditional EC2-based, container-based, or completely serverless.\nWhy is Load Testing important? Load testing will help us find the following\n- How fast is the system\n- How much load can the system handle\n- Under what conditions will the system fail\n- Determine our application’s capabilities by measuring its response time, throughput, CPU utilization, latency, etc.","title":"Load Testing Serverless Applications With Serverless Artillery"},{"content":"Hosting a static website with S3 is awesome! It is Faster, Cheaper, Zero maintenance.\nIn this article, we will see how to do URL redirects on a website hosted with AWS S3 and CloudFront.\nThere was a scenario I faced once at my company: one of our websites had deleted some old content and replaced it with new content at a new URL. 
When people searched Google for that particular content, the results still showed the old URL, which no longer exists.\nTo fix this issue, our approach was to add a temporary redirect from the old URL to the new one until Google Search updates its index.\nThe Fix\nAWS S3 static hosting provides an option to add redirection rules to the website hosted in a particular bucket. https://docs.aws.amazon.com/AmazonS3/latest/dev/how-to-page-redirect.html\nIn this particular case, the URLs we are going to use will be these,\nhttps://example.com/content/old-content\nand we will be redirecting this to\nhttps://example.com/content/new/content\nTo add the rules,\nClick on your bucket Go to properties and click on static website hosting Under the redirection rules field, put the following code Redirect Rule,\n\u0026lt;RoutingRules\u0026gt; \u0026lt;RoutingRule\u0026gt; \u0026lt;Condition\u0026gt; \u0026lt;KeyPrefixEquals\u0026gt;content/old-content/\u0026lt;/KeyPrefixEquals\u0026gt; \u0026lt;/Condition\u0026gt; \u0026lt;Redirect\u0026gt; \u0026lt;HostName\u0026gt;example.com\u0026lt;/HostName\u0026gt; \u0026lt;ReplaceKeyPrefixWith\u0026gt;content/new/content\u0026lt;/ReplaceKeyPrefixWith\u0026gt; \u0026lt;/Redirect\u0026gt; \u0026lt;/RoutingRule\u0026gt; \u0026lt;/RoutingRules\u0026gt; Please note, the HostName element is important if your S3 website is configured with CloudFront. Otherwise, during the redirect, the domain name will be replaced with the S3 website endpoint.\nThat\u0026rsquo;s it. Now any requests coming to the old URL will be automatically redirected to the new one.\n","permalink":"https://vishnuprasad.blog/posts/2019-12-15-url-redirects-with-aws-s3-and-cloudfront/","summary":"Hosting a static website with S3 is awesome! 
It is Faster, Cheaper, Zero maintenance.\nIn this article, we will see how to do URL redirects on a website hosted with AWS S3 and CloudFront.\nThere was a scenario I faced once at my company: one of our websites had deleted some old content and replaced it with new content at a new URL. When people searched Google for that particular content, the results still showed the old URL, which no longer exists.","title":"URL redirects with AWS S3 and Cloudfront"},{"content":"In this guide we will set up a very simple REST API endpoint with the Serverless Framework, AWS Lambda, and API Gateway, and deploy it to AWS Lambda with GitHub, AWS CodePipeline, and CodeBuild\n1. Install the Serverless Framework npm install serverless -g 2. Create a project serverless create --template aws-nodejs --path serverless-nodejs-api This will create two files, handler.js and serverless.yml\n\u0026#39;use strict\u0026#39;; module.exports.api = async event =\u0026gt; { return { statusCode: 200, body: JSON.stringify( { message: \u0026#39;Go Serverless v1.0! Your function executed successfully!\u0026#39; }, null, 2 ), }; }; Update your serverless.yml to add an API Gateway endpoint.\nservice: serverless-nodejs-api provider: name: aws runtime: nodejs10.x stage: dev functions: getMsg: handler: handler.api events: - http: GET / Now we have our serverless API code ready.\nYou can deploy this to AWS manually by running sls deploy --stage dev\nThis will deploy the Lambda function and create an API Gateway endpoint for the function.\nOnce deployed, the output will print the newly created API Gateway endpoint. 
Something like this,\nService Information service: serverless-nodejs-api stage: dev region: us-east-1 stack: serverless-nodejs-api-dev resources: 9 api keys: None endpoints: GET - https://xxxxx.execute-api.us-east-1.amazonaws.com/dev functions: api: serverless-nodejs-api-dev-getMsg layers: None Test the function by calling the API endpoint.\ncurl https://xxxxx.execute-api.us-east-1.amazonaws.com/dev { \u0026#34;message\u0026#34;: \u0026#34;Go Serverless v1.0! Your function executed successfully!\u0026#34; } Now let\u0026rsquo;s automate the deployment process with GitHub and AWS CodePipeline\nLet\u0026rsquo;s consider this code production-ready and push it to the GitHub repo master branch.\nPS: We can create multiple pipelines, one per branch, e.g. Master -\u0026gt; Prod, Development -\u0026gt; Staging/Dev Environment\n3. Setup CodePipeline 3.1 Set Pipeline name and Create IAM Role 3.2 Add source stage In this stage, connect to your GitHub account, choose your repo and branch, and set the detection method\n3.3 Add build stage In this step, we have to create a CodeBuild project, where we configure our build and deploy environment and commands.\nClick on the Create Project button; it will take you to the CodeBuild setup page.\nSet the project name here\nChoose your runtime and image for the build environment\nChoose an IAM role for the project - This part is important\nThis role must have enough permissions for the Serverless Framework to deploy the function and its resources to AWS, as follows,\nCreate an S3 bucket for your function deployments Upload your function zip files to that S3 bucket Submit a CloudFormation template Create the log groups for your Lambda functions Create a REST API in API Gateway You can use the awesome NPM modules below to create a narrow IAM policy template that will cover many Serverless use cases.\nnpm install -g yo generator-serverless-policy\nthen, in your serverless app directory\n$ yo serverless-policy ? 
Your Serverless service name test-service ? You can specify a specific stage, if you like: dev ? You can specify a specific region, if you like: us-west-1 ? Does your service rely on DynamoDB? Yes ? Is your service going to be using S3 buckets? Yes app name test-service app stage dev app region us-west-1 Writing to test-service-dev-us-west-1-policy.json After you finish creating the CodeBuild project, go to its IAM role and append the rules created by the above template to its policy.\nYou can find the IAM policy we used for this guide here, https://github.com/imewish/serverless-nodejs-api/blob/master/codebuild-IAM-policy.json\nDefine Build Spec.\nYou can find it here. https://github.com/imewish/serverless-nodejs-api/blob/master/buildspec.yml\nHere we will define the commands to set up the Serverless Framework and to deploy to AWS.\nIn the install phase\nSet Node.js 10 as the runtime\nInstall the Serverless Framework In the build phase\nInstall npm packages\nDeploy to Lambda with sls deploy --stage dev/prod\nNB: You can also run your tests here if you have test cases written for your Lambda functions.\nEnable CloudWatch logs so that we can tail our build process logs.\nThen click on Continue to CodePipeline; this will take us back to the CodePipeline setup.\n4. Deploy Stage This stage is optional.\nSince the Serverless Framework already puts the deployment artifacts in an S3 bucket, we can skip this part. But if you want to store them in a different bucket, you can set that up here.\nClick Next, review all the setup, then create the pipeline.\nThat\u0026rsquo;s it! 
Now you can test this by going to the newly created pipeline and clicking on Release Change\n","permalink":"https://vishnuprasad.blog/posts/2019-11-23-automating-deployment-of-lambda-functions-using-serverless-framework-aws-codepipeline/","summary":"In this guide we will set up a very simple REST API endpoint with the Serverless Framework, AWS Lambda, and API Gateway, and deploy it to AWS Lambda with GitHub, AWS CodePipeline, and CodeBuild\n1. Install the Serverless Framework npm install serverless -g 2. Create a project serverless create --template aws-nodejs --path serverless-nodejs-api This will create two files, handler.js and serverless.yml\n\u0026#39;use strict\u0026#39;; module.exports.api = async event =\u0026gt; { return { statusCode: 200, body: JSON.","title":"Automating Deployment Of Lambda Functions Using Serverless Framework, AWS CodePipeline"},{"content":"Download Resume\nExperienced Serverless Developer and Architect with over 8 years of hands-on expertise in the AWS Cloud. My professional journey has been marked by a deep understanding of cloud technologies, particularly in the realm of Serverless computing. My career spans various industries, including OTT, E-commerce, Blockchain, and Marketing Technology (MarTech). Here\u0026rsquo;s a bit more about my journey:\nServerless Expertise: With more than eight years in the field, I have honed my skills as a Serverless Developer and Architect, specializing in AWS Cloud. My focus has been on crafting highly scalable and efficient serverless architectures that drive innovation and cost-effectiveness for businesses.\nDevOps Proficiency: I bring a strong DevOps background to the table, complementing my development skills. This blend of talents allows me to streamline processes, ensure continuous integration and delivery, and maintain robust, secure, and highly available serverless applications.\nTypeScript Development: TypeScript is a powerful tool in my toolkit. 
I leverage this language to build robust, type-safe serverless applications that enhance reliability and maintainability.\nDiverse Industry Experience: Throughout my career, I\u0026rsquo;ve had the privilege of working across diverse industries. My expertise extends from the Over-The-Top (OTT) industry, where I\u0026rsquo;ve optimized video streaming platforms, to the dynamic landscape of E-commerce, where I\u0026rsquo;ve helped businesses scale and thrive online. My experience also includes the innovative world of Blockchain, where I\u0026rsquo;ve contributed to blockchain-based applications, and the evolving field of Marketing Technology (MarTech), where I\u0026rsquo;ve supported data-driven marketing initiatives.\nI am passionate about staying at the forefront of technological advancements and leveraging my expertise to solve complex challenges in various industries. If you\u0026rsquo;re seeking a seasoned Serverless Developer and Architect who brings both technical acumen and a broad industry perspective to the table, let\u0026rsquo;s connect. I\u0026rsquo;m eager to explore how I can contribute to your organization\u0026rsquo;s success and growth.\n","permalink":"https://vishnuprasad.blog/about-me/","summary":"Download Resume\nExperienced Serverless Developer and Architect with over 8 years of hands-on expertise in the AWS Cloud. My professional journey has been marked by a deep understanding of cloud technologies, particularly in the realm of Serverless computing. My career spans various industries, including OTT, E-commerce, Blockchain, and Marketing Technology (MarTech). Here\u0026rsquo;s a bit more about my journey:\nServerless Expertise: With more than eight years in the field, I have honed my skills as a Serverless Developer and Architect, specializing in AWS Cloud.","title":"About Me"}]