A reusable GitHub Actions workflow for deploying Terraform infrastructure across multiple cloud providers and data platforms with comprehensive validation, linting, and deployment capabilities.
- Multi-Cloud & Platform Support: Deploy to AWS, GCP, Azure, Snowflake, and Databricks
- Platform Mode: Automatically detect and deploy to all cloud providers in your `infra/` directory
- Comprehensive Pipeline: Automated linting, validation, planning, and deployment
- Flexible Backend: Support for both S3 and HCP Terraform Cloud backends
- S3 Backend Configuration: Configurable S3 bucket, region, and key prefix for state storage
- Security First: All sensitive credentials handled as secrets and environment variables
- Validation Pipeline: TFLint → Terraform Validate → Terraform Plan → Terraform Apply
- Reusable Design: Can be called from any repository workflow
- Debug Mode: Built-in debug output for troubleshooting
The workflow consists of four sequential jobs:
- ⚙️ TFLint - Lints Terraform code for best practices and errors (includes debug output)
- ✅ Validate - Validates Terraform configuration syntax
- 🔍 Plan - Creates and reviews Terraform execution plan
- 🚀 Apply - Applies the Terraform changes to infrastructure
| Input | Description | Type | Required |
|---|---|---|---|
| `cloud-provider` | Target provider (`aws`, `gcp`, `azure`, `snowflake`, `databricks`, `platform`) | string | ✅ |
| `tflint-ver` | TFLint version to install | string | ✅ |
| Input | Description | Type | Default |
|---|---|---|---|
| `concurrency-group` | Custom concurrency group name for workflow runs | string | `terraform-ci-{cloud-provider}-{ref}` |
The workflow uses concurrency control to prevent multiple runs from executing simultaneously. By default, it creates a concurrency group based on the cloud provider and git ref. You can override this by providing a custom concurrency-group value. When a new run starts, any in-progress runs in the same concurrency group are automatically cancelled.
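For example, a caller that wants every production AWS deployment to share one queue could pass a fixed group name. This is a minimal sketch; the group name `terraform-aws-prod` is illustrative, not a value the workflow defines:

```yaml
jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: aws
      tflint-ver: v0.52.0
      # Every run passing this same group name lands in one concurrency group,
      # so a newer run cancels any in-progress run in the group.
      concurrency-group: terraform-aws-prod
```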
| Input | Description | Type | Default |
|---|---|---|---|
| `backend-type` | Backend type (`s3`, or `remote` for HCP Terraform Cloud) | string | `s3` |
| `s3-bucket` | S3 bucket name for Terraform state (required when `backend-type` is `s3`) | string | - |
| `s3-region` | AWS region for the S3 backend bucket (required when `backend-type` is `s3`) | string | - |
| `s3-key-prefix` | Optional prefix for the S3 state key | string | `""` |
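The full examples below do not exercise `s3-key-prefix`, so here is a hedged sketch of the three S3 inputs used together. The bucket name and prefix are illustrative, and the exact state-key layout the workflow derives from the prefix is determined by the workflow itself:

```yaml
    with:
      backend-type: s3
      s3-bucket: my-terraform-state-bucket   # bucket must already exist
      s3-region: us-east-1                   # region the bucket lives in
      s3-key-prefix: prod/networking         # optional; namespaces the state key
```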
| Input | Description | Type | Default |
|---|---|---|---|
| `aws-region` | AWS region for authentication | string | - |
| `snowflake-organization-name` | Snowflake organization name | string | - |
| `snowflake-account-name` | Snowflake account name within the organization | string | - |
| `snowflake-user` | Snowflake user name | string | - |
| `snowflake-role` | Snowflake role name | string | - |
| `tf-vars-file` | Terraform variables file path | string | `terraform.tfvars` |
| Secret | Description | Required When |
|---|---|---|
| `tfc-token` | HCP Terraform Cloud API token | `backend-type` is `remote` |
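When using the remote backend, the input and the secret travel together in the caller. A minimal sketch; `TF_CLOUD_TOKEN` is an example name for the repository secret, not one the workflow requires:

```yaml
    with:
      backend-type: remote   # HCP Terraform Cloud backend
    secrets:
      tfc-token: ${{ secrets.TF_CLOUD_TOKEN }}
```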
| Secret | Description | Required When |
|---|---|---|
| `aws-role-to-assume` | AWS IAM role ARN to assume | `cloud-provider` is `aws` |
| Secret | Description | Required When |
|---|---|---|
| `gcp-wif-provider` | GCP Workload Identity Federation provider | `cloud-provider` is `gcp` |
| `gcp-service-account` | GCP service account email | `cloud-provider` is `gcp` |
| Secret | Description | Required When |
|---|---|---|
| `azure-client-id` | Azure client ID | `cloud-provider` is `azure` |
| `azure-tenant-id` | Azure tenant ID | `cloud-provider` is `azure` |
| `azure-subscription-id` | Azure subscription ID | `cloud-provider` is `azure` |
| Secret | Description | Required When | Environment Variable |
|---|---|---|---|
| `snowflake-private-key` | Snowflake private key for authentication | `cloud-provider` is `snowflake` | `SNOWFLAKE_PRIVATE_KEY` |

Note: Snowflake credentials (`snowflake-organization-name`, `snowflake-account-name`, `snowflake-user`, `snowflake-role`, `snowflake-private-key`) are passed to the Terraform provider as environment variables (`SNOWFLAKE_ORGANIZATION_NAME`, `SNOWFLAKE_ACCOUNT_NAME`, `SNOWFLAKE_USER`, `SNOWFLAKE_ROLE`, `SNOWFLAKE_PRIVATE_KEY`).
| Secret | Description | Required When | Environment Variable |
|---|---|---|---|
| `databricks-host` | Databricks workspace URL | `cloud-provider` is `databricks` | `DATABRICKS_HOST` |
| `databricks-token` | Databricks personal access token | `cloud-provider` is `databricks` | `DATABRICKS_TOKEN` |

Note: Databricks credentials are passed to the Terraform provider as environment variables (`DATABRICKS_HOST`, `DATABRICKS_TOKEN`).
When `cloud-provider` is set to `platform`, the workflow automatically:
- Detects all cloud provider directories in `infra/` (`aws`, `gcp`, `azure`, `snowflake`, `databricks`)
- Validates that the required inputs/secrets are provided for each detected provider
- Runs TFLint, Validate, Plan, and Apply for each detected provider sequentially
This is useful for multi-cloud deployments where you want to deploy to all configured providers in a single workflow run.
```yaml
name: Deploy to All Platforms

on:
  push:
    branches: [main]

jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: platform
      tflint-ver: v0.52.0
      backend-type: s3
      s3-bucket: my-terraform-state-bucket
      s3-region: us-east-1
      aws-region: us-east-1
      snowflake-organization-name: myorg
      snowflake-account-name: myaccount
      snowflake-user: terraform_user
      snowflake-role: TERRAFORM_ROLE
    secrets:
      aws-role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      gcp-wif-provider: ${{ secrets.GCP_WIF_PROVIDER }}
      gcp-service-account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
      azure-client-id: ${{ secrets.AZURE_CLIENT_ID }}
      azure-tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      azure-subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
      snowflake-private-key: ${{ secrets.SNOWFLAKE_PRIVATE_KEY }}
      databricks-host: ${{ secrets.DATABRICKS_HOST }}
      databricks-token: ${{ secrets.DATABRICKS_TOKEN }}
```

The workflow expects your Terraform files to be organized as follows:
```text
your-repo/
├── infra/
│   ├── aws/
│   │   └── tf/
│   │       ├── main.tf
│   │       ├── variables.tf
│   │       └── terraform.tfvars
│   ├── gcp/
│   │   └── tf/
│   │       ├── main.tf
│   │       ├── variables.tf
│   │       └── terraform.tfvars
│   ├── azure/
│   │   └── tf/
│   │       ├── main.tf
│   │       ├── variables.tf
│   │       └── terraform.tfvars
│   ├── snowflake/
│   │   └── tf/
│   │       ├── main.tf
│   │       ├── variables.tf
│   │       └── terraform.tfvars
│   └── databricks/
│       └── tf/
│           ├── main.tf
│           ├── variables.tf
│           └── terraform.tfvars
└── .github/
    └── workflows/
        └── deploy.yml
```
```yaml
name: Deploy to AWS

on:
  push:
    branches: [main]

jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: aws
      tflint-ver: v0.52.0
      backend-type: s3
      s3-bucket: my-terraform-state-bucket
      s3-region: us-east-1
      aws-region: us-east-1
      concurrency-group: terraform-aws-prod # Optional: custom concurrency group
    secrets:
      aws-role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
```

```yaml
name: Deploy to GCP

on:
  workflow_dispatch:

jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: gcp
      tflint-ver: v0.52.0
      backend-type: remote
      tf-vars-file: production.tfvars
    secrets:
      tfc-token: ${{ secrets.TF_CLOUD_TOKEN }}
      gcp-wif-provider: ${{ secrets.GCP_WIF_PROVIDER }}
      gcp-service-account: ${{ secrets.GCP_SERVICE_ACCOUNT }}
```

```yaml
name: Deploy to Azure

on:
  pull_request:
    types: [closed]
    branches: [main]

jobs:
  deploy:
    if: github.event.pull_request.merged == true
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: azure
      tflint-ver: v0.52.0
      backend-type: s3
      s3-bucket: my-terraform-state-bucket
      s3-region: us-east-1
    secrets:
      azure-client-id: ${{ secrets.AZURE_CLIENT_ID }}
      azure-tenant-id: ${{ secrets.AZURE_TENANT_ID }}
      azure-subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
```

```yaml
name: Deploy to Snowflake

on:
  workflow_dispatch:

jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: snowflake
      tflint-ver: v0.52.0
      backend-type: remote
      snowflake-organization-name: myorg
      snowflake-account-name: myaccount
      snowflake-user: terraform_user
      snowflake-role: TERRAFORM_ROLE
    secrets:
      tfc-token: ${{ secrets.TF_CLOUD_TOKEN }}
      snowflake-private-key: ${{ secrets.SNOWFLAKE_PRIVATE_KEY }}
```

```yaml
name: Deploy to Databricks

on:
  push:
    branches: [main]

jobs:
  deploy:
    uses: subhamay-bhattacharyya-gha/tf-deploy-multi-reusable-wf/.github/workflows/terraform-deploy.yaml@main
    with:
      cloud-provider: databricks
      tflint-ver: v0.52.0
      backend-type: s3
      s3-bucket: my-terraform-state-bucket
      s3-region: us-east-1
    secrets:
      databricks-host: ${{ secrets.DATABRICKS_HOST }}
      databricks-token: ${{ secrets.DATABRICKS_TOKEN }}
```

**AWS**
- Configure an AWS OIDC provider in your GitHub repository
- Create an IAM role with the necessary permissions
- Set up an S3 bucket for Terraform state (if using the S3 backend)

**GCP**
- Set up Workload Identity Federation
- Create a service account with the required permissions
- Configure the WIF provider and service account

**Azure**
- Register an Azure AD application
- Create a service principal
- Assign the necessary permissions to the service principal

**Snowflake**
- Create a Snowflake user for Terraform
- Generate an RSA key pair for authentication
- Assign the appropriate role and permissions
- Credentials are passed via environment variables: `SNOWFLAKE_ORGANIZATION_NAME`, `SNOWFLAKE_ACCOUNT_NAME`, `SNOWFLAKE_USER`, `SNOWFLAKE_ROLE`, `SNOWFLAKE_PRIVATE_KEY`

**Databricks**
- Create a Databricks workspace
- Generate a personal access token
- Configure workspace permissions
- Credentials are passed via environment variables: `DATABRICKS_HOST`, `DATABRICKS_TOKEN`

**HCP Terraform Cloud**
- Create a Terraform Cloud account
- Generate an API token
- Configure your workspace and variables
| Action | Version/Branch | Purpose |
|---|---|---|
| `actions/checkout` | `@v4` | Checkout repository |
| `subhamay-bhattacharyya-gha/tf-lint-action` | `@main` | TFLint |
| `subhamay-bhattacharyya-gha/tf-validate-action` | `@main` | Terraform validate |
| `subhamay-bhattacharyya-gha/tf-plan-action` | `@feature/GHA-0047-add-platform-based-multi` | Terraform plan |
| `subhamay-bhattacharyya-gha/tf-apply-action` | `@feature/GHA-0039-add-platform-based-multi` | Terraform apply |
- Never commit secrets to your repository
- Use GitHub repository secrets for all sensitive information
- Follow the principle of least privilege when setting up cloud permissions
- Regularly rotate your credentials and tokens
- Use OIDC providers where possible instead of long-lived credentials
- For Snowflake, use key-pair authentication instead of passwords
- For Databricks, use service principals in production environments
- Snowflake and Databricks credentials are passed as environment variables for secure handling
The workflow includes a comprehensive debug step that outputs all inputs and secrets (redacted). Check the "Debug - Print All Inputs" step in the TFLint job to troubleshoot configuration issues.
All sensitive values are displayed as `[REDACTED - PROVIDED]` or `[REDACTED - NOT PROVIDED]` so you can verify the configuration without exposing actual values.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
MIT
If you encounter any issues or have questions:
- Check the Issues page
- Create a new issue with detailed information
- Include workflow logs and error messages
Built with ❤️ using Kiro.dev