GitOps infrastructure using ArgoCD App-of-Apps pattern.
- Server with 4+ CPU cores, 8+ GB RAM
- SSH access
1. Server Setup → Tailscale SSH, k3s bootstrap
2. Configure Services → Doppler (+ K8s secrets), Cloudflare, Auth0, etc.
3. ArgoCD Bootstrap → Install ArgoCD, SSH keys
4. Configuration → Edit values.yaml
5. Deploy → Apply root.yaml
6. Post-Setup → Verify access
Recommendation: Create a dedicated email (e.g., infra@yourcompany.com) for all infrastructure accounts. This acts as a super-admin owner and simplifies team access management.
Join server to tailnet → Setup Guide
Note: Tailscale SSH is optional but highly recommended — it allows secure SSH access from anywhere without exposing port 22.
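Enabling Tailscale SSH comes down to adding an `ssh` rule to the tailnet policy file. The snippet below mirrors Tailscale's commonly suggested default (`action` can also be `check` to force re-authentication); adjust `src`/`dst` to your tailnet:

```json
{
  "ssh": [
    {
      "action": "accept",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot", "root"]
    }
  ]
}
```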
SSH to your server and run the bootstrap script:
```
curl -fsSL https://raw.githubusercontent.com/mshykhov/gitops-platform/master/infrastructure/scripts/bootstrap.sh | sudo bash
```

This installs k3s, configures kubeconfig, and installs dependencies (open-iscsi for Longhorn).
Follow each guide and add the required secrets to Doppler shared config.
Secrets management → Setup Guide
Setup: Create account → Create project → Create configs (shared, dev, prd) → Generate service tokens → Create K8s secrets
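When secrets are synced into the cluster with External Secrets Operator, each Doppler service token typically lives in a Kubernetes secret that a ClusterSecretStore references. A sketch using ESO's Doppler provider — the store and secret names here are illustrative, so match them to your manifests:

```yaml
# Hedged sketch — names are illustrative, not this repo's exact manifests.
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: doppler-shared
spec:
  provider:
    doppler:
      auth:
        secretRef:
          dopplerToken:
            name: doppler-token-shared      # k8s secret created from the service token
            key: dopplerToken
            namespace: external-secrets
```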
Tunnel & DNS → Setup Guide
Setup: Account → Domain → Tunnel (CLI) → API token
Placeholders: <CF_TUNNEL_ID>, <DOMAIN>
Doppler: CF_TUNNEL_CREDENTIALS, CF_API_TOKEN
R2 Storage → Setup Guide
Setup: Create bucket → Create API token → Get account ID
Placeholders: <CF_ACCOUNT_ID>
Doppler: S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY
ACL policy, OAuth client → Setup Guide
Setup: ACL policy → Enable HTTPS → Create OAuth client
Placeholders: <TAILNET_NAME>, <TS_CLIENT_ID>
Doppler: TS_OAUTH_CLIENT_SECRET
oauth2-proxy (internal services) → Setup Guide
Setup: Create tenant → Create application → Configure URLs → Create Action for groups
Placeholders: <AUTH0_DOMAIN>, <AUTH0_CLIENT_ID>, <AUTH0_GROUPS_CLAIM>
Doppler: AUTH0_CLIENT_SECRET
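For reference, the Auth0 values above map roughly onto oauth2-proxy's OIDC flags as follows (flag names are from the oauth2-proxy docs; the exact wiring in this repo's Helm values may differ):

```
--provider=oidc
--oidc-issuer-url=https://<AUTH0_DOMAIN>/
--client-id=<AUTH0_CLIENT_ID>
--client-secret=<AUTH0_CLIENT_SECRET from Doppler>
--oidc-groups-claim=<AUTH0_GROUPS_CLAIM>
--email-domain=*
```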
Applications (SPA/API) → Setup Guide
Setup: Create API → Create SPA application → Configure URLs
Placeholders: <AUTH0_AUDIENCE>
Doppler: AUTH0_CLIENT_SECRET (same as oauth2-proxy)
Access token for pulling images (avoids rate limits).
- Create account or login
- Go to Account Settings → Personal access tokens
- Click Generate new token
- Description: `k8s-pull`, Access: Read-only
- Click Generate and copy token
Placeholders: <DOCKERHUB_USERNAME> — your Docker Hub username
Doppler: DOCKERHUB_PULL_TOKEN
Receive alerts from Prometheus/Alertmanager and deploy notifications from ArgoCD.
Setup: Create bot → Create group with topics → Get chat ID and topic IDs
Placeholders:
- `<TELEGRAM_CHAT_ID>` — group chat ID (e.g., `-1001234567890`)
- `<TELEGRAM_TOPIC_CRITICAL>` — topic ID for critical alerts
- `<TELEGRAM_TOPIC_WARNING>` — topic ID for warnings
- `<TELEGRAM_TOPIC_INFO>` — topic ID for info
- `<TELEGRAM_TOPIC_DEPLOYS>` — topic ID for deploy notifications
Doppler: TELEGRAM_BOT_TOKEN
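These values end up in an Alertmanager receiver, one per topic. A hedged sketch (receiver name and thread ID are illustrative; `message_thread_id` requires Alertmanager v0.26 or later):

```yaml
# Illustrative Alertmanager receiver wired to one forum topic.
receivers:
  - name: telegram-critical
    telegram_configs:
      - bot_token: <TELEGRAM_BOT_TOKEN>
        chat_id: -1001234567890        # <TELEGRAM_CHAT_ID>
        message_thread_id: 42          # <TELEGRAM_TOPIC_CRITICAL>, illustrative
        parse_mode: HTML
```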
Generate and add to Doppler shared:
- `OAUTH2_PROXY_COOKIE_SECRET` — run: `openssl rand -base64 32`
- `OAUTH2_PROXY_REDIS_PASSWORD` — run: `openssl rand -base64 24`
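A quick sanity check on the generated values: base64 of 32 random bytes is always 44 characters, and of 24 bytes always 32 characters, so the lengths are easy to verify before pasting into Doppler:

```shell
# Generate both secrets and check their lengths.
COOKIE_SECRET=$(openssl rand -base64 32)
REDIS_PASSWORD=$(openssl rand -base64 24)
echo "${#COOKIE_SECRET}"   # prints 44
echo "${#REDIS_PASSWORD}"  # prints 32
```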
After completing all steps, verify shared config contains:
- `CF_TUNNEL_CREDENTIALS`
- `CF_API_TOKEN`
- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`
- `TS_OAUTH_CLIENT_SECRET`
- `AUTH0_CLIENT_SECRET`
- `DOCKERHUB_PULL_TOKEN`
- `TELEGRAM_BOT_TOKEN`
- `OAUTH2_PROXY_COOKIE_SECRET`
- `OAUTH2_PROXY_REDIS_PASSWORD`
- Fork or copy this `infrastructure` directory to a new GitHub repository
- Name it (e.g., `mshykhov/smhomelab-infrastructure`)
- This will be your GitOps source of truth
Save as <GITHUB_USER>/<INFRASTRUCTURE_REPO> (e.g., mshykhov/smhomelab-infrastructure)
Using the Helm chart provides automatic pod restarts when ConfigMaps change.
```
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update
helm install argocd argo/argo-cd -n argocd --create-namespace --wait
```

Note: The Helm chart includes a built-in mechanism to restart pods when ConfigMaps change (checksum annotations).
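The checksum mechanism is the standard Helm idiom of hashing a ConfigMap template into a pod annotation, so a config change changes the pod spec and triggers a rollout:

```yaml
# Generic Helm idiom (from the Helm docs), not the chart's exact template:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: '{{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}'
```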
```
ssh-keygen -t ed25519 -C "argocd-infrastructure" -f ~/.ssh/argocd-infrastructure -N ""
cat ~/.ssh/argocd-infrastructure.pub
```

- Go to your infrastructure repo → Settings → Deploy keys
- Click Add deploy key
- Title: `argocd-infrastructure`
- Key: paste output from `cat ~/.ssh/argocd-infrastructure.pub`
- Leave "Allow write access" unchecked (read-only is sufficient)
- Click Add key
```
kubectl create secret generic repo-infrastructure \
  --from-literal=type=git \
  --from-literal=url=git@github.com:<GITHUB_USER>/<INFRASTRUCTURE_REPO>.git \
  --from-file=sshPrivateKey=$HOME/.ssh/argocd-infrastructure \
  -n argocd
kubectl label secret repo-infrastructure argocd.argoproj.io/secret-type=repository -n argocd
```

The deploy repository contains Helm charts for your applications (services, databases).
- Copy the `deploy` directory from this project to a new GitHub repository
- Name it (e.g., `mshykhov/smhomelab-deploy`)
Save as <GITHUB_USER>/<DEPLOY_REPO> (e.g., mshykhov/smhomelab-deploy)
```
ssh-keygen -t ed25519 -C "argocd-deploy" -f ~/.ssh/argocd-deploy -N ""
cat ~/.ssh/argocd-deploy.pub
```

- Go to your deploy repo → Settings → Deploy keys
- Click Add deploy key
- Title: `argocd-deploy`
- Key: paste output from `cat ~/.ssh/argocd-deploy.pub`
- Check "Allow write access" (required for ArgoCD Image Updater)
- Click Add key
```
kubectl create secret generic repo-deploy \
  --from-literal=type=git \
  --from-literal=url=git@github.com:<GITHUB_USER>/<DEPLOY_REPO>.git \
  --from-file=sshPrivateKey=$HOME/.ssh/argocd-deploy \
  -n argocd
kubectl label secret repo-deploy argocd.argoproj.io/secret-type=repository -n argocd
```

Note: The deploy repo is optional if you only need infrastructure. Skip steps 3.6-3.9 if not deploying custom applications.
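Either repository secret can also be managed declaratively instead of via `kubectl create`: ArgoCD picks up any secret in its namespace labeled `argocd.argoproj.io/secret-type: repository`. A sketch for the infrastructure repo:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: repo-infrastructure
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: repository
stringData:
  type: git
  url: git@github.com:<GITHUB_USER>/<INFRASTRUCTURE_REPO>.git
  sshPrivateKey: |
    -----BEGIN OPENSSH PRIVATE KEY-----
    ...
```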
Replace placeholders in configuration files with values collected in Step 2.
| Placeholder | Description |
|---|---|
| `<INFRASTRUCTURE_REPO_URL>` | `git@github.com:<GITHUB_USER>/<INFRASTRUCTURE_REPO>.git` |

| Placeholder | Source (Step 2) |
|---|---|
| `<INFRASTRUCTURE_REPO_URL>` | Step 3.1 |
| `<DEPLOY_REPO_URL>` | `git@github.com:<GITHUB_USER>/<DEPLOY_REPO>.git` |
| `<SERVICE_PREFIX>` | Your app prefix (e.g., `myapp`) |
| `<CLUSTER_NAME>` | Cluster identifier (e.g., `k3s-home`) |
| `<DOMAIN>` | 2.2 Cloudflare |
| `<TAILNET_NAME>` | 2.3 Tailscale |
| `<TS_CLIENT_ID>` | 2.3 Tailscale |
| `<AUTH0_DOMAIN>` | 2.4 Auth0 |
| `<AUTH0_CLIENT_ID>` | 2.4 Auth0 |
| `<AUTH0_GROUPS_CLAIM>` | 2.4 Auth0 |
| `<DOCKERHUB_USERNAME>` | 2.5 Docker Hub |
| `<CF_TUNNEL_ID>` | 2.2 Cloudflare |
| `<CF_ACCOUNT_ID>` | 2.2 R2 Storage |
| `<TELEGRAM_CHAT_ID>` | 2.6 Telegram |
| `<TELEGRAM_TOPIC_*>` | 2.6 Telegram |
For user-facing applications:
| Placeholder | Source |
|---|---|
| `<AUTH0_AUDIENCE>` | 2.4 Auth0 Applications |
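A bulk find-and-replace can speed this up. A sketch using GNU `sed`, demonstrated on a temp file here — in the real repo, run it from the repo root and review `git diff` before committing (the `example.com` value is illustrative):

```shell
# Demo in a temp dir: replace every <DOMAIN> occurrence in matching files.
demo=$(mktemp -d)
echo 'host: argocd.<DOMAIN>' > "$demo/values.yaml"
grep -rl '<DOMAIN>' "$demo" | xargs sed -i 's/<DOMAIN>/example.com/g'
cat "$demo/values.yaml"   # host: argocd.example.com
```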
Clone repo to server and apply:
```
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/argocd-infrastructure
git clone git@github.com:<GITHUB_USER>/<INFRASTRUCTURE_REPO>.git
cd <INFRASTRUCTURE_REPO>
kubectl apply -f bootstrap/root.yaml
```

Watch deployment:

```
kubectl get applications -n argocd -w
```

Applications deploy in waves (0-9: core, 10-19: data, 20-29: network, 30-39: monitoring, 100+: services).
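Wave ordering is driven by the `argocd.argoproj.io/sync-wave` annotation on each Application; ArgoCD finishes syncing one wave before starting the next. A minimal sketch (application name illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: longhorn                           # illustrative
  annotations:
    argocd.argoproj.io/sync-wave: "10"     # syncs after waves 0-9 complete
```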
Initially, use port-forward to access ArgoCD UI and monitor deployment progress:
```
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Open https://localhost:8080 (anonymous access enabled)
```

Wait for all applications to sync (especially: tailscale-operator, oauth2-proxy, external-secrets).
Troubleshooting
- App stuck in "Progressing" — check Events tab for errors
- Sync failed — check app details for error messages
- ImagePullBackOff — check Doppler secrets (DOCKERHUB_PULL_TOKEN)
- External secrets not syncing — check the ClusterSecretStore: `kubectl get clustersecretstores`
After tailscale-operator is synced:
```
tailscale configure kubeconfig tailscale-operator
kubectl get nodes
```

After oauth2-proxy is synced, open https://argocd.<TAILNET_NAME>.ts.net
Note: Requires Auth0 configured in Step 2.4. Login with your Auth0 account.
Application environment variables are defined in the deploy repository:
deploy/services/<service-name>/
├── values-dev.yaml # Dev environment config
└── values-prd.yaml # Production environment config
Each service has its own directory with environment-specific values files containing:
- Environment variables (`env:`)
- Resource limits
- Replica counts
- Feature flags
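A hypothetical `values-dev.yaml` illustrating those fields — the key names depend on your chart's conventions, so treat this as a sketch rather than a template to copy:

```yaml
# Hypothetical deploy/services/myapp/values-dev.yaml
replicaCount: 1
env:
  LOG_LEVEL: debug
resources:
  limits:
    cpu: 250m
    memory: 256Mi
featureFlags:
  newDashboard: true
```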
infrastructure/
├── apps/ # ArgoCD App-of-Apps
│ ├── values.yaml # Global configuration
│ └── templates/ # Application manifests
├── bootstrap/root.yaml # Entry point
├── charts/ # Custom Helm charts
├── helm-values/ # Values for upstream charts
├── manifests/ # Raw Kubernetes manifests
└── docs/ # Documentation
    ├── setup/       # Setup guides
    └── reference/   # Reference docs
| Document | Description |
|---|---|
| Setup Guides | |
| Tailscale Server | Server setup + optional SSH |
| Doppler Setup | Secrets management configuration |
| Tailscale Operator | ACL, OAuth, kubectl access |
| Auth0 oauth2-proxy | Authentication for internal services |
| Auth0 Applications | Auth0 for UI/API applications |
| Cloudflare Setup | Tunnel, DNS, R2 storage |
| Telegram Setup | Alerts bot configuration |
| GitHub Actions | CI/CD secrets setup |
| Reference | |
| Secrets Reference | All secrets and configuration |
| Operations | |
| Adding Environment | Add new environment (stg, etc.) |
```
# Check applications
kubectl get applications -n argocd

# Sync app
kubectl patch application <app> -n argocd --type merge -p '{"operation":{"sync":{}}}'

# ArgoCD password
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath='{.data.password}' | base64 -d

# Check secrets
kubectl get clustersecretstores
kubectl get externalsecrets -A
```

- Doppler: Account, project, configs (shared/dev/prd), secrets, service tokens
- Tailscale: ACL policy, OAuth client, HTTPS enabled
- Auth0: Application, callback URLs, Action for groups
- Cloudflare: Domain, Tunnel (credentials.json), API token, R2 buckets
- Docker Hub: Access token
- Telegram: Bot and group with topics
- k3s installed (without traefik, servicelb)
- open-iscsi installed
- ArgoCD installed
- Repository SSH key configured
- `apps/values.yaml` edited
- Doppler token secrets created
- `bootstrap/root.yaml` applied
- All applications synced
- kubectl via Tailscale working
- ArgoCD UI accessible
- Cloudflare routes configured