Enterprise GitOps with ArgoCD and Harbor on RKE2: Complete Integration Guide

Introduction: Building the Complete GitOps Stack

In Part 1 of this series, we established a robust Harbor container registry on RKE2 using SUSE Application Collection. Now we’ll complete the enterprise GitOps stack by integrating ArgoCD with Harbor, creating a secure, automated continuous deployment pipeline that meets enterprise requirements.

This integration combines Harbor’s enterprise-grade container security features with ArgoCD’s declarative GitOps workflows, providing a comprehensive platform for automated application deployment with built-in security scanning, policy enforcement, and compliance capabilities.

Why ArgoCD with Harbor for Enterprise GitOps

The ArgoCD-Harbor integration provides critical enterprise capabilities:

  • Secure Image Management: Automated vulnerability scanning before deployment
  • Policy Enforcement: Pre-deployment security validation and compliance checks
  • Image Promotion: Controlled progression from development to production registries
  • Audit Trail: Complete deployment history with image provenance tracking
  • Multi-Environment GitOps: Consistent deployment patterns across environments

Architecture Overview: Complete GitOps Stack

┌────────────────────────────────────────────────────────────────────────────────┐
│                          GitOps Workflow Architecture                          │
├────────────────────────────────────────────────────────────────────────────────┤
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │    GitHub    │    │ CI Pipeline  │    │    Harbor    │    │    ArgoCD    │  │
│  │  Repository  │───▶│ Build & Test │───▶│   Registry   │───▶│  Deployment  │  │
│  │  - Apps      │    │  - Security  │    │  - Scanning  │    │  - Sync      │  │
│  │  - Manifests │    │  - Quality   │    │  - Signing   │    │  - Rollback  │  │
│  └──────────────┘    └──────────────┘    └──────────────┘    └──────────────┘  │
├────────────────────────────────────────────────────────────────────────────────┤
│                            RKE2 Kubernetes Cluster                             │
│  ┌──────────────┐    ┌──────────────┐    ┌──────────────┐    ┌──────────────┐  │
│  │   ArgoCD     │    │    Harbor    │    │  Application │    │  Monitoring  │  │
│  │  - Server    │    │  - Core      │    │  Workloads   │    │  - Prometheus│  │
│  │  - Repo Srv  │    │  - Registry  │    │  - Services  │    │  - Grafana   │  │
│  │  - App Ctrl  │    │  - Scanner   │    │  - Ingress   │    │  - Alerts    │  │
│  │  - Redis     │    │  - Notary    │    │  - Storage   │    │  - Logs      │  │
│  └──────────────┘    └──────────────┘    └──────────────┘    └──────────────┘  │
└────────────────────────────────────────────────────────────────────────────────┘

Part 1: ArgoCD Installation and Configuration

Prerequisites and Environment Validation

Before proceeding, ensure you have the Harbor setup from Part 1 and validate your environment:

# Verify Harbor is running and accessible
kubectl get pods -n harbor-system
kubectl get ingress -n harbor-system

# Test Harbor API connectivity
curl -k https://harbor.yourdomain.com/api/v2.0/systeminfo

# Verify sufficient cluster resources for ArgoCD
kubectl top nodes
kubectl get storageclass

ArgoCD Installation via SUSE Application Collection

Install ArgoCD using the SUSE Application Collection for enterprise support and optimizations:

# Verify SUSE Application Collection repository
helm repo list | grep suse-application-collection
helm search repo suse-application-collection/argo-cd

# Create ArgoCD namespace with proper labels
kubectl create namespace argocd-system
kubectl label namespace argocd-system name=argocd-system

# Label nodes for ArgoCD workload placement
kubectl label nodes worker-node-1 argocd.io/workload=server
kubectl label nodes worker-node-2 argocd.io/workload=server
kubectl label nodes worker-node-3 argocd.io/workload=repo-server

Create a comprehensive ArgoCD values configuration:

# argocd-values.yaml
global:
  domain: argocd.yourdomain.com
  
configs:
  params:
    # Enable insecure mode for internal communication
    server.insecure: true
    # Enable GRPC-Web for better performance
    server.grpc.web: true
    # Label key used to track application instances
    application.instanceLabelKey: argocd.argoproj.io/instance
  
server:
  replicas: 3
  nodeSelector:
    argocd.io/workload: server
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: argocd-server
          topologyKey: kubernetes.io/hostname
  
  ingress:
    enabled: true
    ingressClassName: nginx
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod-harbor"
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
      nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
      nginx.ingress.kubernetes.io/proxy-body-size: "64m"
    hosts:
    - argocd.yourdomain.com
    tls:
    - secretName: argocd-server-tls
      hosts:
      - argocd.yourdomain.com

repoServer:
  replicas: 2
  nodeSelector:
    argocd.io/workload: repo-server
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/name: argocd-repo-server
          topologyKey: kubernetes.io/hostname
  
  # Harbor registry access configuration
  env:
  - name: DOCKER_CONFIG
    value: /tmp/.docker
  - name: HELM_REGISTRY_CONFIG
    value: /tmp/.docker/config.json
  
  volumeMounts:
  - name: harbor-registry-config
    mountPath: /tmp/.docker
    readOnly: true
  
  volumes:
  - name: harbor-registry-config
    secret:
      secretName: harbor-registry-config
      items:
      - key: .dockerconfigjson
        path: config.json

applicationSet:
  enabled: true
  replicas: 2

controller:
  replicas: 1
  nodeSelector:
    argocd.io/workload: server

redis:
  enabled: true
  nodeSelector:
    argocd.io/workload: server

Deploy ArgoCD with the configuration:

# Install ArgoCD
helm upgrade --install argocd \
  suse-application-collection/argo-cd \
  --namespace argocd-system \
  --values argocd-values.yaml \
  --timeout 20m \
  --wait

# Monitor deployment progress
kubectl get pods -n argocd-system -w

Initial ArgoCD Access and Configuration

# Get initial admin password
kubectl -n argocd-system get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

# Access ArgoCD UI
echo "ArgoCD UI: https://argocd.yourdomain.com"
echo "Username: admin"
echo "Password: [decoded-password-from-above]"

High Availability Configuration

Configure ArgoCD for enterprise high availability:

# argocd-ha-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd-system
data:
  # High availability settings
  server.enable.proxy.extension: "true"
  controller.repo.server.timeout.seconds: "300"
  controller.operation.processors: "20"
  controller.status.processors: "20"
  controller.self.heal.timeout.seconds: "300"
  reposerver.parallelism.limit: "10"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
  namespace: argocd-system
data:
  # RBAC policy (ArgoCD reads these keys from the argocd-rbac-cm ConfigMap)
  policy.default: role:readonly
  policy.csv: |
    p, role:admin, applications, *, */*, allow
    p, role:admin, clusters, *, *, allow
    p, role:admin, repositories, *, *, allow
    g, argocd-admins, role:admin

# Apply HA configuration
kubectl apply -f argocd-ha-config.yaml

# Restart ArgoCD components to pick up new config
kubectl rollout restart deployment/argocd-server -n argocd-system
kubectl rollout restart deployment/argocd-repo-server -n argocd-system
kubectl rollout restart statefulset/argocd-application-controller -n argocd-system

Part 2: Harbor-ArgoCD Integration

Creating Harbor Robot Accounts for ArgoCD

Create dedicated robot accounts in Harbor for ArgoCD automation:

Via Harbor UI:

  1. Navigate to Harbor UI → Administration → Robot Accounts
  2. Create new robot account: argocd-deployer
  3. Set expiration to appropriate timeframe (e.g., 1 year)
  4. Grant permissions:
    • Pull permission on all projects
    • Push permission on development projects
    • Read permission on vulnerability reports
  5. Save robot token securely

Alternatively, via Harbor API:

# Create robot account via API
curl -X POST "https://harbor.yourdomain.com/api/v2.0/robots" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic $(echo -n 'admin:Harbor12345!' | base64)" \
  -d '{
    "name": "argocd-deployer",
    "description": "ArgoCD deployment robot account",
    "duration": 365,
    "level": "system",
    "permissions": [
      {
        "kind": "project",
        "namespace": "*",
        "access": [
          {"resource": "repository", "action": "pull"},
          {"resource": "repository", "action": "push"},
          {"resource": "artifact", "action": "read"},
          {"resource": "scan", "action": "read"}
        ]
      }
    ]
  }'
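
The create response returns the robot's generated secret exactly once; you can capture it directly with jq (a sketch, assuming the JSON payload above is saved as robot-account.json and jq is installed):

# Capture the robot token from the API response ("secret" field)
ROBOT_TOKEN=$(curl -s -X POST "https://harbor.yourdomain.com/api/v2.0/robots" \
  -H "Content-Type: application/json" \
  -H "Authorization: Basic $(echo -n 'admin:Harbor12345!' | base64)" \
  -d @robot-account.json | jq -r '.secret')
echo "Store this token securely, e.g. in your secret manager: $ROBOT_TOKEN"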

Configuring ArgoCD Repository Credentials

Configure ArgoCD to authenticate with Harbor registry:

# Create Harbor registry secret for ArgoCD
kubectl create secret docker-registry harbor-registry-config \
  --docker-server=harbor.yourdomain.com \
  --docker-username="robot\$argocd-deployer" \
  --docker-password="[robot-token-from-harbor]" \
  --namespace=argocd-system

# Verify secret creation
kubectl get secret harbor-registry-config -n argocd-system -o yaml

Configure ArgoCD repository credentials via UI or declaratively:

# argocd-repo-creds.yaml
apiVersion: v1
kind: Secret
metadata:
  name: harbor-repo-creds
  namespace: argocd-system
  labels:
    argocd.argoproj.io/secret-type: repository
type: Opaque
stringData:
  type: helm
  name: harbor-helm-repo
  url: https://harbor.yourdomain.com/chartrepo/library
  username: robot$argocd-deployer
  password: [robot-token-from-harbor]
  tlsClientCertData: ""
  tlsClientCertKey: ""
  insecure: "false"
  enableLfs: "false"

# Apply repository credentials
kubectl apply -f argocd-repo-creds.yaml

Image Pull Secrets Configuration

Create image pull secrets for different namespaces and configure automatic injection:

# create-harbor-secrets.sh
#!/bin/bash

NAMESPACES=("default" "production" "staging" "development")
ROBOT_USERNAME="robot\$argocd-deployer"
ROBOT_PASSWORD="[robot-token-from-harbor]"
HARBOR_URL="harbor.yourdomain.com"

for ns in "${NAMESPACES[@]}"; do
  echo "Creating secret in namespace: $ns"
  
  # Create namespace if it doesn't exist
  kubectl create namespace $ns --dry-run=client -o yaml | kubectl apply -f -
  
  # Create docker registry secret
  kubectl create secret docker-registry harbor-registry-secret \
    --docker-server=$HARBOR_URL \
    --docker-username="$ROBOT_USERNAME" \
    --docker-password="$ROBOT_PASSWORD" \
    --namespace=$ns \
    --dry-run=client -o yaml | kubectl apply -f -
  
  # Patch default service account to use the secret
  kubectl patch serviceaccount default -n $ns \
    -p '{"imagePullSecrets": [{"name": "harbor-registry-secret"}]}'
done

echo "Harbor registry secrets created in all namespaces"

# Make script executable and run
chmod +x create-harbor-secrets.sh
./create-harbor-secrets.sh
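
Verify that each namespace's default service account now references the secret (a quick spot check):

# Confirm the imagePullSecrets patch landed in every namespace
for ns in default production staging development; do
  echo -n "$ns: "
  kubectl get serviceaccount default -n $ns -o jsonpath='{.imagePullSecrets[*].name}'
  echo ""
done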

Part 3: GitOps Workflows and Applications

Sample Application with Harbor Images

Create a complete sample application that demonstrates the ArgoCD-Harbor integration:

# sample-app/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      imagePullSecrets:
      - name: harbor-registry-secret
      containers:
      - name: web-app
        image: harbor.yourdomain.com/library/nginx:1.25.3
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 256Mi
        securityContext:
          runAsNonRoot: true
          runAsUser: 101
          readOnlyRootFilesystem: true
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - ALL
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
  namespace: default
spec:
  selector:
    app: web-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod-harbor"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - web-app.yourdomain.com
    secretName: web-app-tls
  rules:
  - host: web-app.yourdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-app-service
            port:
              number: 80

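Before committing these manifests, you can validate them server-side without creating anything (assuming kubectl access to the cluster and the files saved under sample-app/):

# Server-side dry run catches schema errors and admission-policy violations
kubectl apply --dry-run=server -f sample-app/
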
ArgoCD Application Configuration

Create an ArgoCD Application that deploys from a Git repository with Harbor-hosted images:

# argocd-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app
  namespace: argocd-system
  labels:
    app.kubernetes.io/name: web-app
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests.git
    targetRevision: HEAD
    path: sample-app
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    - PrunePropagationPolicy=foreground
    - PruneLast=true
  revisionHistoryLimit: 10

# Deploy ArgoCD Application
kubectl apply -f argocd-application.yaml

# Monitor application sync
kubectl get application web-app -n argocd-system
argocd app get web-app --server argocd.yourdomain.com

Kustomization with Harbor Registry Overrides

Use Kustomize to manage environment-specific configurations with different Harbor registry paths:

# base/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml
- service.yaml
- ingress.yaml

images:
- name: nginx
  newName: harbor.yourdomain.com/library/nginx
  newTag: "1.25.3"

commonLabels:
  app: web-app
  version: v1.0.0

# overlays/production/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- ../../base

namespace: production

images:
- name: harbor.yourdomain.com/library/nginx
  newName: harbor.yourdomain.com/production/nginx
  newTag: "1.25.3-prod"

patchesStrategicMerge:
- deployment-patch.yaml

replicas:
- name: web-app
  count: 5

# overlays/production/deployment-patch.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  template:
    spec:
      containers:
      - name: web-app
        resources:
          requests:
            cpu: 500m
            memory: 512Mi
          limits:
            cpu: 1000m
            memory: 1Gi

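To verify the overlay rewrites the image reference as intended, render it locally before ArgoCD ever syncs it (kubectl ships with a bundled kustomize):

# Render the production overlay and inspect the rewritten image
kubectl kustomize overlays/production | grep "image:"
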
Helm Charts with Harbor Repositories

Configure ArgoCD to deploy Helm charts from Harbor’s chart repository:

# helm-app-application.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app-helm
  namespace: argocd-system
spec:
  project: default
  source:
    repoURL: https://harbor.yourdomain.com/chartrepo/library
    chart: web-app
    targetRevision: "1.0.0"
    helm:
      parameters:
      - name: image.repository
        value: harbor.yourdomain.com/library/nginx
      - name: image.tag
        value: "1.25.3"
      - name: ingress.enabled
        value: "true"
      - name: ingress.host
        value: web-app-helm.yourdomain.com
      valueFiles:
      - values-production.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true

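To confirm the chart repository is reachable with the robot credentials before wiring it into ArgoCD, add it locally (a quick check, assuming Helm 3 on your workstation):

# Add Harbor's chart repo with the robot account and search it
helm repo add harbor-library https://harbor.yourdomain.com/chartrepo/library \
  --username 'robot$argocd-deployer' \
  --password '[robot-token-from-harbor]'
helm search repo harbor-library
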
Multi-Environment Deployment Patterns

Implement progressive deployment across multiple environments:

# app-of-apps.yaml - ArgoCD App of Apps pattern
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app-of-apps
  namespace: argocd-system
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/argocd-apps.git
    targetRevision: HEAD
    path: environments
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd-system
  syncPolicy:
    automated:
      prune: true
      selfHeal: true

# environments/development/web-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app-dev
  namespace: argocd-system
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests.git
    targetRevision: develop
    path: overlays/development
  destination:
    server: https://kubernetes.default.svc
    namespace: development
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
---
# environments/staging/web-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app-staging
  namespace: argocd-system
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests.git
    targetRevision: release
    path: overlays/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
---
# environments/production/web-app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: web-app-prod
  namespace: argocd-system
spec:
  project: default
  source:
    repoURL: https://github.com/your-org/k8s-manifests.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: false
      selfHeal: false
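
Staging and production intentionally disable automated prune and self-heal, so promotions stay deliberate: an operator (or the CI pipeline shown later) triggers each sync explicitly:

# Promote to staging manually and wait for a healthy rollout
argocd app sync web-app-staging
argocd app wait web-app-staging --health --timeout 300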

Part 4: Advanced Integration Features

Security Scanning Integration

Implement pre-deployment security validation using Harbor’s vulnerability scanning:

# security-policy.yaml - OPA Gatekeeper policy for image scanning
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: harborvulnerabilityscan
spec:
  crd:
    spec:
      names:
        kind: HarborVulnerabilityScan
      validation:
        openAPIV3Schema:
          type: object
          properties:
            severity:
              type: array
              items:
                type: string
            maxCVSS:
              type: number
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package harborvulnerabilityscan
        
        violation[{"msg": msg}] {
          input.review.kind.kind == "Deployment"
          image := input.review.object.spec.template.spec.containers[_].image
          contains(image, "harbor.yourdomain.com")
          not image_is_scanned(image)
          msg := sprintf("Image %v must be scanned by Harbor before deployment", [image])
        }
        
        image_is_scanned(image) {
          # Check Harbor API for scan results
          # This would require external data provider or webhook
          true
        }

Create a pre-sync hook to validate image security:

# security-validation-hook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: security-validation
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: security-validator
        image: harbor.yourdomain.com/tools/security-validator:latest
        env:
        - name: HARBOR_URL
          value: "https://harbor.yourdomain.com"
        - name: HARBOR_USERNAME
          valueFrom:
            secretKeyRef:
              name: harbor-robot-secret
              key: username
        - name: HARBOR_PASSWORD
          valueFrom:
            secretKeyRef:
              name: harbor-robot-secret
              key: password
        command:
        - /bin/sh
        - -c
        - |
          #!/bin/sh
          echo "Validating image security scan results..."
          
          # Extract images from deployment manifests
          IMAGES=$(kubectl get deployment web-app -o jsonpath='{.spec.template.spec.containers[*].image}')
          
          for image in $IMAGES; do
            echo "Checking scan results for: $image"
            
            # Parse Harbor project and repository from image
            PROJECT=$(echo $image | cut -d'/' -f2)
            REPOSITORY=$(echo $image | cut -d'/' -f3 | cut -d':' -f1)
            TAG=$(echo $image | cut -d':' -f2)
            
            # Query the Harbor API for the artifact, including its scan overview
            SCAN_RESULT=$(curl -s -u "$HARBOR_USERNAME:$HARBOR_PASSWORD" \
              "$HARBOR_URL/api/v2.0/projects/$PROJECT/repositories/$REPOSITORY/artifacts/$TAG?with_scan_overview=true")

            # Check the critical-severity count in the scan summary
            CRITICAL_COUNT=$(echo $SCAN_RESULT | jq -r '.scan_overview."application/vnd.security.vulnerability.report; version=1.1".summary.summary.Critical // 0')
            
            if [ "$CRITICAL_COUNT" -gt 0 ]; then
              echo "CRITICAL: Image $image has $CRITICAL_COUNT critical vulnerabilities"
              echo "Scan results: $SCAN_RESULT"
              exit 1
            fi
            
            echo "✓ Image $image passed security validation"
          done
          
          echo "All images passed security validation"

Automated CI/CD Pipeline Integration

Here is a complete GitHub Actions workflow that builds, scans, and deploys via ArgoCD:

# .github/workflows/ci-cd-pipeline.yaml
name: CI/CD Pipeline with Harbor and ArgoCD

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

env:
  HARBOR_REGISTRY: harbor.yourdomain.com
  HARBOR_PROJECT: library
  IMAGE_NAME: web-app
  ARGOCD_SERVER: argocd.yourdomain.com

jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    outputs:
      image-digest: ${{ steps.build.outputs.digest }}
      image-tag: ${{ steps.meta.outputs.tags }}
      
    steps:
    - name: Checkout code
      uses: actions/checkout@v4
      
    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v3
      
    - name: Login to Harbor
      uses: docker/login-action@v3
      with:
        registry: ${{ env.HARBOR_REGISTRY }}
        username: ${{ secrets.HARBOR_USERNAME }}
        password: ${{ secrets.HARBOR_PASSWORD }}
        
    - name: Extract metadata
      id: meta
      uses: docker/metadata-action@v5
      with:
        images: ${{ env.HARBOR_REGISTRY }}/${{ env.HARBOR_PROJECT }}/${{ env.IMAGE_NAME }}
        tags: |
          type=ref,event=branch
          type=ref,event=pr
          type=sha,prefix={{branch}}-
          type=raw,value=latest,enable={{is_default_branch}}
          
    - name: Build and push image
      id: build
      uses: docker/build-push-action@v5
      with:
        context: .
        platforms: linux/amd64,linux/arm64
        push: true
        tags: ${{ steps.meta.outputs.tags }}
        labels: ${{ steps.meta.outputs.labels }}
        cache-from: type=gha
        cache-to: type=gha,mode=max
        
    - name: Wait for Harbor scan
      run: |
        echo "Waiting for Harbor vulnerability scan to complete..."
        sleep 30
        
        # Check scan status
        TAG=$(echo "${{ steps.meta.outputs.tags }}" | head -n1 | cut -d':' -f2)
        
        for i in {1..10}; do
          SCAN_STATUS=$(curl -s -u "${{ secrets.HARBOR_USERNAME }}:${{ secrets.HARBOR_PASSWORD }}" \
            "https://${{ env.HARBOR_REGISTRY }}/api/v2.0/projects/${{ env.HARBOR_PROJECT }}/repositories/${{ env.IMAGE_NAME }}/artifacts/$TAG?with_scan_overview=true" \
            | jq -r '.scan_overview."application/vnd.security.vulnerability.report; version=1.1".scan_status // "Unknown"')
          
          echo "Scan status: $SCAN_STATUS"
          
          if [ "$SCAN_STATUS" = "Success" ]; then
            echo "Vulnerability scan completed successfully"
            break
          elif [ "$SCAN_STATUS" = "Error" ]; then
            echo "Vulnerability scan failed"
            exit 1
          fi
          
          sleep 30
        done
        
    - name: Check vulnerability results
      run: |
        TAG=$(echo "${{ steps.meta.outputs.tags }}" | head -n1 | cut -d':' -f2)
        
        # Get vulnerability summary
        VULN_SUMMARY=$(curl -s -u "${{ secrets.HARBOR_USERNAME }}:${{ secrets.HARBOR_PASSWORD }}" \
          "https://${{ env.HARBOR_REGISTRY }}/api/v2.0/projects/${{ env.HARBOR_PROJECT }}/repositories/${{ env.IMAGE_NAME }}/artifacts/$TAG?with_scan_overview=true" \
          | jq -r '.scan_overview."application/vnd.security.vulnerability.report; version=1.1".summary // {}')
        
        echo "Vulnerability Summary: $VULN_SUMMARY"
        
        CRITICAL=$(echo $VULN_SUMMARY | jq -r '.summary.Critical // 0')
        HIGH=$(echo $VULN_SUMMARY | jq -r '.summary.High // 0')
        
        if [ "$CRITICAL" -gt 0 ]; then
          echo "❌ Image has $CRITICAL critical vulnerabilities - blocking deployment"
          exit 1
        fi
        
        if [ "$HIGH" -gt 5 ]; then
          echo "⚠️  Image has $HIGH high vulnerabilities - review required"
          # Set output for manual approval step
          echo "review-required=true" >> $GITHUB_OUTPUT
        fi
        
        echo "✅ Image passed vulnerability check"

  update-manifests:
    needs: build-and-scan
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    
    steps:
    - name: Checkout manifests repository
      uses: actions/checkout@v4
      with:
        repository: your-org/k8s-manifests
        token: ${{ secrets.MANIFEST_REPO_TOKEN }}
        path: manifests
        
    - name: Update image tag
      run: |
        cd manifests
        NEW_TAG=$(echo "${{ needs.build-and-scan.outputs.image-tag }}" | head -n1 | cut -d':' -f2)
        
        # Update kustomization files
        find overlays -name "kustomization.yaml" -exec \
          yq eval ".images[] |= select(.name == \"harbor.yourdomain.com/library/${{ env.IMAGE_NAME }}\") |= .newTag = \"$NEW_TAG\"" -i {} \;
        
        # Update Helm values files
        find overlays -name "values*.yaml" -exec \
          yq eval ".image.tag = \"$NEW_TAG\"" -i {} \;
          
    - name: Commit and push changes
      run: |
        cd manifests
        git config user.name "GitHub Actions"
        git config user.email "actions@github.com"
        git add .
        git commit -m "Update ${{ env.IMAGE_NAME }} to ${{ needs.build-and-scan.outputs.image-tag }}"
        git push

  deploy-staging:
    needs: [build-and-scan, update-manifests]
    runs-on: ubuntu-latest
    environment: staging
    
    steps:
    - name: Install ArgoCD CLI
      run: |
        curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
        chmod +x argocd && sudo mv argocd /usr/local/bin/argocd

    - name: Login to ArgoCD
      run: |
        argocd login ${{ env.ARGOCD_SERVER }} \
          --username ${{ secrets.ARGOCD_USERNAME }} \
          --password ${{ secrets.ARGOCD_PASSWORD }} \
          --insecure
          
    - name: Sync staging application
      run: |
        argocd app sync web-app-staging --server ${{ env.ARGOCD_SERVER }}
        argocd app wait web-app-staging --server ${{ env.ARGOCD_SERVER }} --timeout 300
        
    - name: Run integration tests
      run: |
        echo "Running integration tests against staging..."
        # Add your integration test commands here
        
  deploy-production:
    needs: [build-and-scan, update-manifests, deploy-staging]
    runs-on: ubuntu-latest
    environment: production
    
    steps:
    - name: Install ArgoCD CLI
      run: |
        curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
        chmod +x argocd && sudo mv argocd /usr/local/bin/argocd

    - name: Login to ArgoCD
      run: |
        argocd login ${{ env.ARGOCD_SERVER }} \
          --username ${{ secrets.ARGOCD_USERNAME }} \
          --password ${{ secrets.ARGOCD_PASSWORD }} \
          --insecure
          
    - name: Sync production application
      run: |
        argocd app sync web-app-prod --server ${{ env.ARGOCD_SERVER }}
        argocd app wait web-app-prod --server ${{ env.ARGOCD_SERVER }} --timeout 600

Image Promotion Workflows

Implement controlled image promotion between Harbor projects:

# image-promotion-script.sh
#!/bin/bash

# Image promotion script for Harbor projects
# Usage: ./image-promotion-script.sh source-project target-project image-name tag

SOURCE_PROJECT=$1
TARGET_PROJECT=$2
IMAGE_NAME=$3
TAG=$4
HARBOR_URL="https://harbor.yourdomain.com"

echo "Promoting image: $SOURCE_PROJECT/$IMAGE_NAME:$TAG → $TARGET_PROJECT/$IMAGE_NAME:$TAG"

# Step 1: Get source image details
echo "Fetching source image details..."
SOURCE_IMAGE_INFO=$(curl -s -u "$HARBOR_USERNAME:$HARBOR_PASSWORD" \
  "$HARBOR_URL/api/v2.0/projects/$SOURCE_PROJECT/repositories/$IMAGE_NAME/artifacts/$TAG")

# Step 2: Verify image has passed security scan
echo "Verifying security scan results..."
SCAN_RESULT=$(echo $SOURCE_IMAGE_INFO | jq -r '.scan_overview."application/vnd.security.vulnerability.report; version=1.1".summary // {}')
CRITICAL_VULNS=$(echo $SCAN_RESULT | jq -r '.summary.Critical // 0')

if [ "$CRITICAL_VULNS" -gt 0 ]; then
  echo "❌ Cannot promote image with $CRITICAL_VULNS critical vulnerabilities"
  exit 1
fi

# Step 3: Copy image to target project
echo "Copying image to target project..."
docker pull $HARBOR_HOST/$SOURCE_PROJECT/$IMAGE_NAME:$TAG
docker tag $HARBOR_HOST/$SOURCE_PROJECT/$IMAGE_NAME:$TAG $HARBOR_HOST/$TARGET_PROJECT/$IMAGE_NAME:$TAG
docker push $HARBOR_HOST/$TARGET_PROJECT/$IMAGE_NAME:$TAG

# Step 4: Sign the promoted image (if Notary is enabled)
if [ "$ENABLE_CONTENT_TRUST" = "true" ]; then
  echo "Signing promoted image..."
  docker trust sign $HARBOR_HOST/$TARGET_PROJECT/$IMAGE_NAME:$TAG
fi

# Step 5: Tag as promoted
# NOTE: Harbor attaches existing labels by ID; create the "promoted" label once
# in the Harbor UI or API, then set its ID here.
PROMOTED_LABEL_ID=1  # replace with the actual label ID from your Harbor instance
echo "Adding promotion metadata..."
curl -X POST "$HARBOR_URL/api/v2.0/projects/$TARGET_PROJECT/repositories/$IMAGE_NAME/artifacts/$TAG/labels" \
  -H "Content-Type: application/json" \
  -u "$HARBOR_USERNAME:$HARBOR_PASSWORD" \
  -d "{\"id\": $PROMOTED_LABEL_ID}"

echo "✅ Image promotion completed successfully"

Policy Enforcement and Governance

Implement comprehensive policy enforcement using OPA Gatekeeper:

# harbor-policy-constraints.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: harborimageorigin
spec:
  crd:
    spec:
      names:
        kind: HarborImageOrigin
      validation:
        openAPIV3Schema:
          type: object
          properties:
            allowedRegistries:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package harborimageorigin
        
        violation[{"msg": msg}] {
          input.review.kind.kind == "Deployment"
          container := input.review.object.spec.template.spec.containers[_]
          image := container.image
          not image_from_allowed_registry(image)
          msg := sprintf("Image %v must come from approved Harbor registry", [image])
        }
        
        image_from_allowed_registry(image) {
          registry := input.parameters.allowedRegistries[_]
          startswith(image, registry)
        }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: HarborImageOrigin
metadata:
  name: harbor-only-images
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment"]
    namespaces: ["production", "staging"]
  parameters:
    allowedRegistries:
    - "harbor.yourdomain.com/production/"
    - "harbor.yourdomain.com/staging/"
---
# Additional constraint for requiring signed images in production
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: requiredsignedimages
spec:
  crd:
    spec:
      names:
        kind: RequiredSignedImages
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package requiredsignedimages
        
        violation[{"msg": msg}] {
          input.review.kind.kind == "Deployment"
          input.review.namespace == "production"
          container := input.review.object.spec.template.spec.containers[_]
          image := container.image
          not image_is_signed(image)
          msg := sprintf("Production images must be signed: %v", [image])
        }
        
        image_is_signed(image) {
          # This would need to be implemented with external data provider
          # to check Harbor's Notary signatures
          true
        }

Part 5: Production Operations

Monitoring and Observability Setup

Configure comprehensive monitoring for the ArgoCD-Harbor integration:

# argocd-monitoring.yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-metrics
  namespace: argocd-system
  labels:
    app.kubernetes.io/name: argocd-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-server-metrics
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-application-controller-metrics
  namespace: argocd-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-application-controller
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: argocd-repo-server-metrics
  namespace: argocd-system
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-repo-server
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
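
After applying the ServiceMonitors, confirm the targets are being scraped (the service name and metrics port below are the chart defaults; adjust if yours differ):

# Verify the ServiceMonitors exist and spot-check a server metric
kubectl get servicemonitors -n argocd-system
kubectl -n argocd-system port-forward svc/argocd-server-metrics 8083:8083 &
sleep 2 && curl -s http://localhost:8083/metrics | grep argocd_app_info | head -n 5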

Essential Grafana dashboards for ArgoCD monitoring:

# Key ArgoCD metrics to monitor:

# Application Health
- argocd_app_health_status
- argocd_app_sync_total
- argocd_app_info

# Repository Operations
- argocd_git_request_duration_seconds
- argocd_git_request_total
- argocd_repo_pending_request_total

# Controller Performance  
- argocd_app_reconcile_bucket
- argocd_cluster_api_resource_objects
- argocd_kubectl_exec_pending

# Resource Usage
- process_resident_memory_bytes
- process_cpu_seconds_total
- go_memstats_heap_inuse_bytes

Harbor integration-specific monitoring:

# harbor-registry-monitoring.yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: harbor-registry-alerts
  namespace: harbor-system
spec:
  groups:
  - name: harbor.registry
    rules:
    - alert: HarborRegistryDown
      expr: up{job="harbor-core"} == 0
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Harbor registry is down"
        description: "Harbor registry has been down for more than 5 minutes"
        
    - alert: HarborHighVulnerabilities
      expr: harbor_artifact_vulnerabilities{severity="Critical"} > 0
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: "Critical vulnerabilities detected in Harbor"
        description: "{{ $value }} critical vulnerabilities found in {{ $labels.project }}/{{ $labels.repository }}"
        
    - alert: ArgocdSyncFailure
      expr: increase(argocd_app_sync_total{phase="Failed"}[5m]) > 0
      for: 0m
      labels:
        severity: warning
      annotations:
        summary: "ArgoCD application sync failed"
        description: "Application {{ $labels.name }} failed to sync: {{ $labels.operation }}

Performance Optimization

Optimize ArgoCD performance for large-scale deployments:

# argocd-performance-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd-system
data:
  # Application controller settings
  controller.status.processors: "40"
  controller.operation.processors: "40"
  controller.self.heal.timeout.seconds: "300"
  controller.repo.server.timeout.seconds: "300"
  
  # Repository server settings
  reposerver.parallelism.limit: "20"
  reposerver.git.request.timeout: "300"
  
  # Server settings
  server.enable.proxy.extension: "true"
  server.repo.server.timeout.seconds: "300"
  
  # Performance optimizations
  application.instanceLabelKey: "argocd.argoproj.io/instance"
  application.reconciliation.jitter: "0.1"
  timeout.reconciliation: "300s"
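
Apply the ConfigMap and restart the affected components so the new parameters take effect:

# Roll the controller, repo server, and API server to pick up new settings
kubectl apply -f argocd-performance-config.yaml
kubectl rollout restart statefulset/argocd-application-controller -n argocd-system
kubectl rollout restart deployment/argocd-repo-server -n argocd-system
kubectl rollout restart deployment/argocd-server -n argocd-system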

Harbor performance optimization:

# harbor-performance-optimization.yaml
# Add to harbor-values.yaml under respective components

core:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 2000m
      memory: 4Gi
  nodeSelector:
    harbor.io/workload: core
    
registry:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: 1000m
      memory: 2Gi
  storage:
    # Optimize for high IOPS
    storageClass: "longhorn-harbor-registry-ssd"
    
database:
  internal:
    resources:
      requests:
        cpu: 1000m
        memory: 2Gi
      limits:
        cpu: 2000m
        memory: 4Gi
    # PostgreSQL performance tuning
    config:
      shared_buffers: "1GB"
      effective_cache_size: "3GB"
      maintenance_work_mem: "256MB"
      checkpoint_completion_target: "0.9"
      wal_buffers: "16MB"
      default_statistics_target: "100"

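Roll these values into the existing Harbor release with a standard Helm upgrade (assuming the release name and values file from Part 1):

# Merge the tuning values into harbor-values.yaml, then upgrade in place
helm upgrade --install harbor \
  suse-application-collection/harbor \
  --namespace harbor-system \
  --values harbor-values.yaml \
  --wait
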
Backup and Disaster Recovery

Implement comprehensive backup strategies:

# argocd-backup-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: argocd-backup
  namespace: argocd-system
spec:
  schedule: "0 2 * * *"  # Daily at 2 AM
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: argocd-backup-sa
          containers:
          - name: argocd-backup
            image: harbor.yourdomain.com/tools/argocd-backup:latest
            env:
            - name: ARGOCD_SERVER
              value: "argocd-server.argocd-system.svc.cluster.local:443"
            - name: ARGOCD_AUTH_TOKEN
              valueFrom:
                secretKeyRef:
                  name: argocd-backup-token
                  key: token
            - name: BACKUP_LOCATION
              value: "s3://your-backup-bucket/argocd"
            command:
            - /bin/sh
            - -c
            - |
              #!/bin/sh
              echo "Starting ArgoCD backup..."
              
              # Export all applications
              argocd app list -o json > /tmp/applications.json
              
              # Export all projects
              argocd proj list -o json > /tmp/projects.json
              
              # Export all repositories
              argocd repo list -o json > /tmp/repositories.json
              
              # Export cluster configurations
              argocd cluster list -o json > /tmp/clusters.json
              
              # Create backup archive
              tar -czf /tmp/argocd-backup-$(date +%Y%m%d).tar.gz \
                /tmp/applications.json \
                /tmp/projects.json \
                /tmp/repositories.json \
                /tmp/clusters.json
              
              # Upload to backup storage
              aws s3 cp /tmp/argocd-backup-$(date +%Y%m%d).tar.gz $BACKUP_LOCATION/
              
              echo "Backup completed successfully"
          restartPolicy: OnFailure

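The CronJob assumes an argocd-backup-token secret; one way to create it is to generate an API token for a dedicated local account (the backup account and its apiKey capability are assumptions you would first configure in argocd-cm):

# Generate a token for the backup account and store it as a secret
TOKEN=$(argocd account generate-token --account backup)
kubectl -n argocd-system create secret generic argocd-backup-token \
  --from-literal=token="$TOKEN"
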
Harbor backup strategy:

# harbor-backup-cronjob.yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: harbor-backup
  namespace: harbor-system
spec:
  schedule: "0 1 * * *"  # Daily at 1 AM
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: harbor-backup
            image: harbor.yourdomain.com/tools/harbor-backup:latest
            env:
            - name: PGPASSWORD
              valueFrom:
                secretKeyRef:
                  name: harbor-database
                  key: password
            volumeMounts:
            - name: backup-storage
              mountPath: /backup
            command:
            - /bin/sh
            - -c
            - |
              #!/bin/sh
              echo "Starting Harbor backup..."
              
              # Backup PostgreSQL database
              pg_dump -h harbor-database \
                -U postgres \
                -d registry \
                -f /backup/harbor-db-$(date +%Y%m%d).sql
              
              # Backup Redis data (if using internal Redis)
              redis-cli -h harbor-redis \
                --rdb /backup/harbor-redis-$(date +%Y%m%d).rdb
              
              # Registry storage is handled by Longhorn snapshots
              
              echo "Harbor backup completed"
          volumes:
          - name: backup-storage
            persistentVolumeClaim:
              claimName: backup-storage-pvc
          restartPolicy: OnFailure

Troubleshooting Common Integration Issues

Common issues and diagnostic procedures:

# troubleshooting-guide.sh
#!/bin/bash

echo "ArgoCD-Harbor Integration Troubleshooting Guide"
echo "================================================"

# 1. Check ArgoCD Application Status
echo "1. ArgoCD Application Status:"
kubectl get applications -n argocd-system
argocd app list --output wide

# 2. Verify Harbor Registry Access
echo "2. Testing Harbor Registry Access:"
kubectl run -it --rm debug --image=alpine --restart=Never -- /bin/sh -c "
  apk add --no-cache curl
  curl -k -I https://harbor.yourdomain.com/v2/
"

# 3. Check Image Pull Secrets
echo "3. Verifying Image Pull Secrets:"
for ns in default production staging development; do
  echo "Namespace: $ns"
  kubectl get secrets -n $ns | grep harbor
  kubectl get serviceaccount default -n $ns -o yaml | grep imagePullSecrets
done

# 4. Validate ArgoCD Repository Credentials
echo "4. ArgoCD Repository Credentials:"
kubectl get secrets -n argocd-system | grep repo
argocd repo list

# 5. Check Container Registry Authentication
echo "5. Testing Container Registry Authentication:"
kubectl run -it --rm debug --image=busybox --restart=Never -- /bin/sh -c "
  # Test if image pull secret works
  echo 'Testing image pull with Harbor credentials...'
"

# 6. ArgoCD Sync Issues Diagnosis
echo "6. ArgoCD Sync Issues:"
argocd app diff web-app
argocd app sync web-app --dry-run

# 7. Harbor Vulnerability Scan Status  
echo "7. Harbor Scan Status:"
# This would need Harbor API credentials
# curl -u username:password https://harbor.yourdomain.com/api/v2.0/...

# 8. Network Connectivity Tests
echo "8. Network Connectivity:"
kubectl run -it --rm netdebug --image=nicolaka/netshoot --restart=Never -- /bin/bash -c "
  echo 'Testing connectivity to Harbor...'
  nslookup harbor.yourdomain.com
  curl -k -I https://harbor.yourdomain.com
  
  echo 'Testing connectivity to ArgoCD...'
  nslookup argocd.yourdomain.com
  curl -k -I https://argocd.yourdomain.com
"

Application-specific troubleshooting:

# Common Issues and Solutions:

# Issue 1: ImagePullBackOff errors
kubectl describe pod [failing-pod] -n [namespace]
# Check:
# - Image name and tag correctness
# - Registry authentication
# - Network policies blocking registry access

# Issue 2: ArgoCD sync failures
argocd app get [app-name] --show-params
# Check:
# - Repository access credentials
# - Manifest syntax errors
# - Resource quotas and limits
# - RBAC permissions

# Issue 3: Harbor vulnerability scan failures
# Check Harbor logs:
kubectl logs -n harbor-system deployment/harbor-core
kubectl logs -n harbor-system deployment/harbor-trivy

# Issue 4: Certificate verification errors
# Verify TLS certificates:
kubectl get certificate -n argocd-system
kubectl get certificate -n harbor-system
kubectl describe certificaterequest [cert-name] -n [namespace]

# Issue 5: Performance issues
# Check resource utilization:
kubectl top pods -n argocd-system
kubectl top pods -n harbor-system

# Review ArgoCD application controller logs:
kubectl logs -n argocd-system deployment/argocd-application-controller --tail=100

Maintenance and Upgrade Procedures

A structured approach to maintaining the GitOps stack:

# maintenance-checklist.md
# Monthly Maintenance Checklist

## Security Updates
- [ ] Review and update Harbor robot account credentials
- [ ] Rotate ArgoCD admin password and service tokens
- [ ] Update TLS certificates if approaching expiration
- [ ] Review and update image vulnerability policies
- [ ] Audit user access and permissions

## Performance Review
- [ ] Analyze ArgoCD application sync times
- [ ] Review Harbor storage usage and cleanup old images
- [ ] Monitor resource utilization and scale if needed
- [ ] Review and optimize backup retention policies
- [ ] Check and optimize database performance

## Health Checks
- [ ] Verify all ArgoCD applications are synced
- [ ] Test Harbor registry pull/push operations
- [ ] Validate monitoring and alerting functionality  
- [ ] Test disaster recovery procedures
- [ ] Review application deployment success rates

## Updates and Upgrades
- [ ] Plan ArgoCD version upgrades
- [ ] Plan Harbor version upgrades
- [ ] Update Helm charts and dependencies
- [ ] Test upgrades in non-production environments
- [ ] Document any configuration changes

Production Best Practices Summary

Security Best Practices

  • Image Security: Implement mandatory vulnerability scanning with defined severity thresholds
  • Access Control: Use dedicated robot accounts with minimal required permissions
  • Network Security: Implement network policies to restrict traffic between components
  • Content Trust: Enable Docker Content Trust and image signing for production images
  • Secret Management: Rotate credentials regularly and use external secret management systems

Operational Excellence

  • Monitoring: Comprehensive monitoring of application health, sync status, and registry operations
  • Alerting: Proactive alerting for sync failures, security issues, and performance degradation
  • Backup: Automated backups of configurations, applications, and registry data
  • Documentation: Maintain current runbooks and troubleshooting guides
  • Testing: Regular testing of disaster recovery and upgrade procedures

Performance Optimization

  • Resource Allocation: Right-size components based on workload requirements
  • Caching: Optimize repository caching and image layer caching
  • Parallelization: Configure appropriate parallelism for sync operations
  • Storage: Use high-performance storage for registry and database workloads
  • Network: Optimize network connectivity and reduce latency between components

Conclusion: Enterprise-Ready GitOps Platform

The integration of ArgoCD with Harbor on RKE2 creates a comprehensive, enterprise-grade GitOps platform that addresses the critical requirements of modern cloud-native applications:

  • Complete Security Pipeline: From vulnerability scanning to signed image deployment with policy enforcement
  • Automated Operations: Self-healing deployments with comprehensive monitoring and alerting
  • Enterprise Integration: LDAP/OIDC authentication, RBAC, and audit trails for compliance
  • Scalable Architecture: High-availability design supporting large-scale enterprise deployments
  • Developer Productivity: GitOps workflows that enable rapid, secure, and reliable application delivery

This platform provides the foundation for modern DevOps practices while meeting enterprise security, compliance, and operational requirements. The combination of Harbor’s robust container management capabilities with ArgoCD’s declarative GitOps approach creates a secure, automated deployment pipeline that scales with your organization’s needs.

Next Steps and Advanced Topics

With your GitOps platform established, consider exploring these advanced topics:

  • Multi-Cluster Management: Extend ArgoCD to manage applications across multiple Kubernetes clusters
  • Progressive Delivery: Implement canary deployments and blue-green strategies with ArgoCD and Argo Rollouts
  • Policy as Code: Expand OPA Gatekeeper policies for comprehensive governance automation
  • Supply Chain Security: Integrate SLSA compliance and software bill of materials (SBOM) generation
  • Observability Enhancement: Implement distributed tracing and advanced application performance monitoring

The GitOps platform you’ve built serves as the foundation for these advanced capabilities, enabling your organization to continuously improve its cloud-native application delivery practices while maintaining security, compliance, and operational excellence.