
ArgoCD Setup Guide for Kubernetes

Overview

Guide Information

Difficulty: Intermediate
Time Required: ~45 minutes
Last Updated: March 2024
ArgoCD Version: v2.14.3
Kubernetes Compatibility: K3s v1.32.2+k3s1, K8s 1.24+
OS: Debian 12

This guide provides comprehensive instructions for setting up ArgoCD in a Kubernetes or K3s cluster, configuring Traefik ingress, and securing it with cert-manager for automatic SSL certificate renewal.

What is ArgoCD?

ArgoCD is a declarative, GitOps continuous delivery tool for Kubernetes. It automates the deployment of applications to Kubernetes clusters by monitoring Git repositories and applying changes when they occur.

graph LR
    A[Git Repository] -->|Contains manifests| B[ArgoCD]
    B -->|Syncs to| C[Kubernetes Cluster]
    D[Developers] -->|Push changes| A
    E[ArgoCD UI/CLI] -->|Manage| B
    style B fill:#f9f,stroke:#333,stroke-width:2px

Prerequisites

Requirements

  • A running Kubernetes or K3s cluster
  • kubectl installed and configured
  • helm v3.x installed
  • A domain name for ArgoCD access
  • DNS record pointing to your cluster's IP
  • Administrative access to your cluster

Installation Steps

1. Prepare Your Cluster

# Verify cluster access
kubectl cluster-info

# On K3s, also verify the k3s service is running
sudo systemctl status k3s

# Create namespace for ArgoCD
kubectl create namespace argocd

2. Install ArgoCD

There are two methods to install ArgoCD: using manifests directly or using Helm.

Option A: Manifests

# Apply the ArgoCD installation manifest
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.14.3/manifests/install.yaml

# Verify pods are running
kubectl get pods -n argocd

Option B: Helm

# Add ArgoCD Helm repository
helm repo add argo https://argoproj.github.io/argo-helm
helm repo update

# Install ArgoCD
helm install argocd argo/argo-cd \
  --namespace argocd \
  --create-namespace \
  --version 6.7.0 \
  --set server.extraArgs="{--insecure}" \
  --set controller.metrics.enabled=true \
  --set server.metrics.enabled=true

Resource Requirements

ArgoCD is relatively lightweight, but for production use, consider allocating:

  • At least 2 CPU cores and 4GB RAM for the cluster
  • 1GB RAM for the ArgoCD controller
  • 512MB RAM for the ArgoCD server
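For a Helm installation, these allocations can be expressed as chart values; a sketch using the chart's standard `controller.resources` and `server.resources` keys, with request and limit figures chosen to match the guidance above (not prescriptive):

```yaml
# values-resources.yaml
# Apply with: helm upgrade argocd argo/argo-cd -n argocd -f values-resources.yaml
controller:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      memory: 1Gi
server:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      memory: 512Mi
```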

3. Install cert-manager

cert-manager is required to automatically provision and manage TLS certificates.

# Add Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io
helm repo update

# Install cert-manager with CRDs
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.13.2 \
  --set installCRDs=true

# Verify cert-manager pods are running
kubectl get pods -n cert-manager

4. Configure ClusterIssuer for Let's Encrypt

Create a ClusterIssuer to obtain certificates from Let's Encrypt:

letsencrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: your-email@example.com  # Replace with your email
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
    - http01:
        ingress:
          class: traefik

Apply the ClusterIssuer:

# Apply the ClusterIssuer
kubectl apply -f letsencrypt-issuer.yaml

# Verify the ClusterIssuer is ready
kubectl get clusterissuer letsencrypt-prod -o wide

Rate Limits

Let's Encrypt has rate limits: 50 certificates per domain per week. Use the staging server (https://acme-staging-v02.api.letsencrypt.org/directory) for testing.
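A parallel staging issuer can be kept alongside the production one for testing; a sketch mirroring the ClusterIssuer above, with only the name, ACME server, and key secret changed:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint: untrusted certificates, but far higher rate limits
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: your-email@example.com  # Replace with your email
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: traefik
```

Reference it from an Ingress with the annotation cert-manager.io/cluster-issuer: "letsencrypt-staging", then switch back to letsencrypt-prod once issuance works.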

5. Configure Traefik Ingress for ArgoCD

Create an Ingress resource for ArgoCD:

argocd-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: argocd-server-ingress
  namespace: argocd
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
    traefik.ingress.kubernetes.io/router.tls: "true"
    traefik.ingress.kubernetes.io/router.middlewares: "argocd-argocd-middleware@kubernetescrd"
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - argocd.example.com  # Replace with your domain
    secretName: argocd-server-tls
  rules:
  - host: argocd.example.com  # Replace with your domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: argocd-server
            port:
              number: 80

Create a middleware to handle gRPC and HTTP traffic:

argocd-middleware.yaml
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: argocd-middleware
  namespace: argocd
spec:
  headers:
    customRequestHeaders:
      X-Forwarded-Proto: "https"

Apply the configurations:

# Apply the middleware
kubectl apply -f argocd-middleware.yaml

# Apply the ingress
kubectl apply -f argocd-ingress.yaml

# Check the status of the ingress
kubectl get ingress -n argocd

Accessing ArgoCD

Initial Login

Once ArgoCD is installed and the ingress is configured, you can access it via your domain (e.g., https://argocd.example.com).

# Retrieve the initial admin password (same secret for manifest and default Helm installs)
kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Install the ArgoCD CLI
curl -sSL -o argocd-linux-amd64 https://github.com/argoproj/argo-cd/releases/download/v2.14.3/argocd-linux-amd64
sudo install -m 555 argocd-linux-amd64 /usr/local/bin/argocd
rm argocd-linux-amd64

# Login using CLI
argocd login argocd.example.com

# Change the default password
argocd account update-password

Security Note

Always change the default admin password immediately after the first login!

Setting Up Your First Application

After logging in, you can deploy your first application:

  1. Click on "+ New App" in the UI
  2. Fill in the application details:
     • Name: example-app
     • Project: default
     • Sync Policy: Automatic
     • Repository URL: Your Git repository URL
     • Path: Path to your Kubernetes manifests
     • Cluster: https://kubernetes.default.svc (for in-cluster deployment)
     • Namespace: Your target namespace
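The same application can also be created declaratively instead of through the UI; a minimal sketch, assuming a hypothetical repository URL and manifest path:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/your-repo.git  # hypothetical repository
    targetRevision: HEAD
    path: path/to/manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default      # replace with your target namespace
  syncPolicy:
    automated: {}           # equivalent to "Sync Policy: Automatic" in the UI
```

Apply it with kubectl apply -f, and ArgoCD picks it up like any UI-created application.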

Security Hardening

Security Best Practices

  1. RBAC Configuration: Limit access to ArgoCD
  2. SSO Integration: Connect to your identity provider
  3. Network Policies: Restrict pod communication
  4. Secrets Management: Use external secret stores

Configure RBAC

Create a custom RBAC policy:

argocd-rbac-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd
data:
  policy.csv: |
    p, role:readonly, applications, get, */*, allow
    p, role:readonly, clusters, get, *, allow
    p, role:developer, applications, create, */*, allow
    p, role:developer, applications, update, */*, allow
    p, role:developer, applications, delete, */*, allow
    g, developer@example.com, role:developer
    g, viewer@example.com, role:readonly
  policy.default: role:readonly

Apply the ConfigMap:

kubectl apply -f argocd-rbac-cm.yaml

Configure SSO (GitHub Example)

Update the ArgoCD ConfigMap:

argocd-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  url: https://argocd.example.com
  dex.config: |
    connectors:
      - type: github
        id: github
        name: GitHub
        config:
          clientID: your-github-client-id
          clientSecret: $dex.github.clientSecret
          orgs:
          - name: your-github-org

Store the GitHub OAuth client secret in the argocd-secret Secret, which is where ArgoCD resolves the $dex.github.clientSecret reference from:

kubectl -n argocd patch secret argocd-secret \
  --patch='{"stringData": {"dex.github.clientSecret": "your-github-client-secret"}}'

Apply the ConfigMap:

kubectl apply -f argocd-cm.yaml

Network Policies

Restrict network traffic to ArgoCD:

argocd-network-policy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: argocd-server-network-policy
  namespace: argocd
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: argocd-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
    ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443

Apply the network policy:

kubectl apply -f argocd-network-policy.yaml

Troubleshooting

Common Issues and Solutions

Common Problems

Symptoms: Unable to access ArgoCD through the domain

Solutions:

  1. Check if the certificate is issued correctly:
     kubectl get certificate -n argocd
  2. Verify Traefik is properly configured:
     kubectl get ingressroute -A
  3. Check the Traefik logs:
     kubectl logs -n kube-system -l app.kubernetes.io/name=traefik

Symptoms: SSL errors or certificate not issuing

Solutions:

  1. Check cert-manager logs:
     kubectl logs -n cert-manager -l app=cert-manager
  2. Verify the ClusterIssuer status:
     kubectl describe clusterissuer letsencrypt-prod
  3. Check certificate request status:
     kubectl get certificaterequest -n argocd

Symptoms: ArgoCD UI unavailable, server pods restarting

Solutions:

  1. Check server logs:
     kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server
  2. Verify resource allocation:
     kubectl top pods -n argocd
  3. Check for eviction events:
     kubectl get events -n argocd

Diagnostic Commands

Here are some useful commands for diagnosing issues:

# Check all ArgoCD components
kubectl get pods -n argocd

# Check ArgoCD server logs
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-server

# Check ArgoCD application controller logs
kubectl logs -n argocd -l app.kubernetes.io/name=argocd-application-controller

# Check certificate status
kubectl get certificate -n argocd

# Check ingress status
kubectl describe ingress argocd-server-ingress -n argocd

Maintenance

Upgrading ArgoCD

Option A: Manifests

# Update to a new version
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.14.3/manifests/install.yaml

Option B: Helm

# Update Helm repositories
helm repo update

# Upgrade ArgoCD
helm upgrade argocd argo/argo-cd \
  --namespace argocd \
  --version 6.7.0

Backup Before Upgrading

Always backup your ArgoCD settings before upgrading:

kubectl get -n argocd -o yaml configmap,secret,application > argocd-backup.yaml

Monitoring ArgoCD

ArgoCD exposes Prometheus metrics that can be scraped for monitoring:

argocd-prometheus-servicemonitor.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-metrics
  endpoints:
  - port: metrics
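Once the metrics are scraped, a basic alert can be layered on top; a sketch using the argocd_app_info metric's sync_status label (the rule name, duration, and severity are illustrative choices, not ArgoCD defaults):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: argocd-alerts
  namespace: monitoring
spec:
  groups:
  - name: argocd
    rules:
    - alert: ArgoAppOutOfSync
      # Fires when an application has reported OutOfSync for 15 minutes
      expr: argocd_app_info{sync_status="OutOfSync"} == 1
      for: 15m
      labels:
        severity: warning
      annotations:
        summary: "ArgoCD application {{ $labels.name }} is OutOfSync"
```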

Advanced Configuration

GitOps Workflow Example

sequenceDiagram
    participant Dev as Developer
    participant Git as Git Repository
    participant CI as CI Pipeline
    participant Argo as ArgoCD
    participant K8s as Kubernetes

    Dev->>Git: Push code changes
    Git->>CI: Trigger CI pipeline
    CI->>Git: Update manifests
    Git->>Argo: Detect changes
    Argo->>K8s: Apply changes
    K8s->>Argo: Report status
    Argo->>Git: Update deployment status

Multi-Cluster Setup

For managing multiple clusters with ArgoCD:

  1. Register the external cluster:

    argocd cluster add context-name
    

  2. Create applications targeting the external cluster:

    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: multi-cluster-app
      namespace: argocd
    spec:
      destination:
        namespace: default
        server: https://external-cluster-api-url
      project: default
      source:
        path: path/to/manifests
        repoURL: https://github.com/your-org/your-repo.git
        targetRevision: HEAD
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
    

Conclusion

You now have a fully functional ArgoCD setup with:

  • Secure access via HTTPS
  • Automatic certificate management
  • Traefik ingress integration
  • Basic security hardening

Next Steps

  • Configure notifications
  • Set up project templates
  • Integrate with your CI pipeline
  • Explore ApplicationSets for multi-cluster management
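As a starting point for exploring ApplicationSets, a sketch using the list generator to stamp out one application per cluster (the cluster names, URLs, and repository are placeholders):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: multi-cluster-apps
  namespace: argocd
spec:
  generators:
  - list:
      elements:
      - cluster: in-cluster
        url: https://kubernetes.default.svc
      - cluster: external
        url: https://external-cluster-api-url
  template:
    metadata:
      # {{cluster}} is substituted from each list element
      name: 'example-app-{{cluster}}'
    spec:
      project: default
      source:
        repoURL: https://github.com/your-org/your-repo.git
        targetRevision: HEAD
        path: path/to/manifests
      destination:
        server: '{{url}}'
        namespace: default
```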
