Introduction: Transforming Enterprise Container Management with Harbor on RKE2
In today’s cloud-native landscape, enterprise organizations require robust, secure, and scalable container registry solutions that integrate seamlessly with their Kubernetes infrastructure. Harbor, as a CNCF graduated project, provides enterprise-grade container registry capabilities that go far beyond basic image storage. When combined with RKE2 (Rancher Kubernetes Engine 2) and deployed through SUSE Application Collection, it creates a powerful foundation for enterprise container management strategies.
This comprehensive guide will walk you through the complete process of deploying Harbor registry on RKE2 using SUSE Application Collection, covering everything from initial cluster preparation to advanced security configurations. Whether you’re a DevOps engineer, platform engineer, or SRE managing enterprise Kubernetes environments, this guide provides the technical depth and practical insights needed for production deployments.
Why Harbor on RKE2 Matters for Enterprise
Harbor brings critical enterprise features that are essential for production container registries:
- Security Scanning: Built-in vulnerability scanning with Trivy, plus support for pluggable scanners
- Content Trust: Docker Content Trust and Notary integration for image signing
- Multi-tenancy: Project-based organization with RBAC controls
- Policy Enforcement: Image promotion policies and admission controllers
- Replication: Multi-site replication for disaster recovery and global distribution
- Compliance: Audit logging and governance features for regulatory requirements
RKE2 provides the secure, government-hardened Kubernetes distribution that enterprises need, with FIPS 140-2 compliance and CIS benchmark adherence out of the box. The combination creates a production-ready platform that meets the most stringent security and operational requirements.
Architecture Overview
Before diving into implementation, understanding the architecture is crucial for successful deployment:
┌────────────────────────────────────────────────────────────┐
│                  RKE2 Kubernetes Cluster                   │
├────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐  ┌─────────────────┐  ┌──────────────┐ │
│ │ Control Plane   │  │ Worker Nodes    │  │ Longhorn     │ │
│ │ - etcd          │  │ - Harbor Pods   │  │   Storage    │ │
│ │ - kube-api      │  │ - Registry      │  │ - Database   │ │
│ │ - scheduler     │  │ - Notary        │  │ - Redis      │ │
│ │ - controller    │  │ - Trivy         │  │ - Registry   │ │
│ └─────────────────┘  └─────────────────┘  └──────────────┘ │
├────────────────────────────────────────────────────────────┤
│ ┌─────────────────┐  ┌─────────────────┐  ┌──────────────┐ │
│ │ Ingress Nginx   │  │ Cert Manager    │  │ Monitoring   │ │
│ │ - SSL Term      │  │ - TLS Certs     │  │ - Prometheus │ │
│ │ - Load Balance  │  │ - Let's Encrypt │  │ - Grafana    │ │
│ └─────────────────┘  └─────────────────┘  └──────────────┘ │
└────────────────────────────────────────────────────────────┘
Prerequisites and System Requirements
Infrastructure Requirements
For a production Harbor deployment on RKE2, you’ll need:
- Kubernetes Cluster: RKE2 v1.24+ with at least 3 control plane nodes and 3 worker nodes
- Memory: Minimum 16GB per node (32GB recommended for production)
- CPU: Minimum 4 cores per node (8 cores recommended)
- Storage: SSD-based storage with 500GB minimum (2TB+ for production)
- Network: High-bandwidth connectivity between nodes (10Gbps recommended)
Software Prerequisites
- Rancher Management Server: v2.7+ with SUSE Application Collection enabled
- Longhorn Storage: v1.4+ for persistent storage
- Ingress Controller: NGINX Ingress Controller or Traefik
- Cert Manager: v1.10+ for TLS certificate management
- DNS: Properly configured DNS records, ideally with the ability to issue wildcard certificates
Step 1: RKE2 Cluster Preparation
Validating Cluster Readiness
First, ensure your RKE2 cluster meets Harbor’s requirements:
# Check cluster nodes and status
kubectl get nodes -o wide
# Verify resource availability
kubectl top nodes
# Check available storage classes
kubectl get storageclass
# Verify DNS resolution
kubectl run -it --rm debug --image=busybox --restart=Never -- nslookup kubernetes.default
Node Labeling and Taints
For production deployments, label nodes specifically for Harbor workloads:
# Label nodes for Harbor components
kubectl label nodes worker-node-1 harbor.io/workload=core
kubectl label nodes worker-node-2 harbor.io/workload=core
kubectl label nodes worker-node-3 harbor.io/workload=database
# Add taints if you want dedicated Harbor nodes
kubectl taint nodes worker-node-1 harbor.io/dedicated=true:NoSchedule
kubectl taint nodes worker-node-2 harbor.io/dedicated=true:NoSchedule
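Note that once these taints are in place, Harbor pods will only schedule onto the dedicated nodes if they carry matching tolerations, which the chart does not add by default. Below is a minimal sketch of the tolerations you would merge into the Harbor values file created later in this guide; the per-component tolerations key is an assumption, so confirm the exact path with helm show values suse-application-collection/harbor.
# Example toleration block for dedicated Harbor nodes (repeat for each component you pin)
core:
  tolerations:
    - key: "harbor.io/dedicated"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"
registry:
  tolerations:
    - key: "harbor.io/dedicated"
      operator: "Equal"
      value: "true"
      effect: "NoSchedule"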
Step 2: SUSE Application Collection Setup
Accessing SUSE Application Collection
SUSE Application Collection provides curated, enterprise-supported Helm charts. To access Harbor through the collection:
- Navigate to Rancher UI → Apps & Marketplace
- Enable SUSE Application Collection repository
- Verify Harbor chart availability
# Via CLI - Add SUSE Application Collection repository
helm repo add suse-application-collection https://registry.suse.com/repository/suse-application-collection/
helm repo update
# Search for Harbor in the collection
helm search repo suse-application-collection/harbor
# Get chart information
helm show chart suse-application-collection/harbor
helm show values suse-application-collection/harbor
Step 3: Storage Configuration with Longhorn
Longhorn Storage Classes
Harbor requires persistent storage for its database, Redis, and registry data. Configure optimized Longhorn storage classes:
# Create storage class for Harbor database
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-harbor-db
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  diskSelector: "ssd"
  nodeSelector: "harbor.io/workload=database"
  # strict-local data locality requires a single replica; with 3 replicas use best-effort
  dataLocality: "best-effort"
---
# Create storage class for Harbor registry data
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-harbor-registry
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "2880"
  fromBackup: ""
  diskSelector: "ssd,hdd"
  nodeSelector: "harbor.io/workload=core"
  dataLocality: "best-effort"
# Apply storage classes
kubectl apply -f harbor-storage-classes.yaml
# Verify storage classes
kubectl get storageclass | grep harbor
Pre-provisioning Volumes (Optional)
For better control over volume placement, you can pre-provision PVCs:
# Pre-provision database volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-database-pvc
  namespace: harbor-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-harbor-db
  resources:
    requests:
      storage: 20Gi
---
# Pre-provision registry volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-registry-pvc
  namespace: harbor-system
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-harbor-registry
  resources:
    requests:
      storage: 100Gi
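These PVCs reference the harbor-system namespace, which is not created until Step 6, so create it first or skip pre-provisioning and let the chart create the volumes. Assuming the manifests above are saved as harbor-pvcs.yaml:
# Create the namespace ahead of time, then apply and verify the PVCs
# (the create command in Step 6 will then simply report that it already exists)
kubectl create namespace harbor-system
kubectl apply -f harbor-pvcs.yaml
kubectl get pvc -n harbor-system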
Step 4: Network and Ingress Configuration
NGINX Ingress Controller Setup
Configure NGINX Ingress Controller with Harbor-specific optimizations:
# Deploy NGINX Ingress with Harbor optimizations
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # Increase client body size for large image pushes
  proxy-body-size: "1024m"
  # Connection timeout settings
  proxy-connect-timeout: "600"
  proxy-send-timeout: "600"
  proxy-read-timeout: "600"
  # Buffer settings for better performance
  proxy-buffering: "off"
  proxy-request-buffering: "off"
  # Additional Harbor-specific settings
  upstream-keepalive-connections: "32"
  upstream-keepalive-timeout: "60"
  keep-alive: "2"
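If you rely on the ingress-nginx controller bundled with RKE2 rather than a standalone installation, it runs in the kube-system namespace and is tuned through a HelmChartConfig instead of this ConfigMap. A minimal sketch, assuming the default rke2-ingress-nginx chart name:
# /var/lib/rancher/rke2/server/manifests/rke2-ingress-nginx-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-ingress-nginx
  namespace: kube-system
spec:
  valuesContent: |-
    controller:
      config:
        proxy-body-size: "1024m"
        proxy-read-timeout: "600"
        proxy-request-buffering: "off"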
DNS and Certificate Preparation
Harbor requires properly configured DNS and TLS certificates. Set up DNS records:
# DNS Records Required:
# harbor.yourdomain.com A <INGRESS_IP>
# notary.yourdomain.com A <INGRESS_IP>
# Verify DNS resolution
nslookup harbor.yourdomain.com
nslookup notary.yourdomain.com
Step 5: SSL/TLS Certificate Management with Cert-Manager
Cert-Manager ClusterIssuer Configuration
Configure cert-manager for automatic certificate provisioning:
# Create ClusterIssuer for Let's Encrypt
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod-harbor
spec:
  acme:
    # Replace with your email
    email: admin@yourdomain.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod-harbor-key
    solvers:
      - http01:
          ingress:
            class: nginx
            podTemplate:
              spec:
                nodeSelector:
                  "kubernetes.io/os": linux
# Apply ClusterIssuer
kubectl apply -f letsencrypt-clusterissuer.yaml
# Verify ClusterIssuer status
kubectl get clusterissuer letsencrypt-prod-harbor -o yaml
Custom CA Certificate (Enterprise Environments)
For enterprises using internal CAs, configure custom certificates:
# Create secret with custom CA certificate
kubectl create secret tls harbor-tls-secret \
--cert=harbor.yourdomain.com.crt \
--key=harbor.yourdomain.com.key \
--namespace=harbor-system
# Create secret with CA certificate for trust
kubectl create secret generic harbor-ca-secret \
--from-file=ca.crt=ca.crt \
--namespace=harbor-system
Step 6: Harbor Installation via SUSE Application Collection
Creating Harbor Namespace
# Create dedicated namespace for Harbor
kubectl create namespace harbor-system
# Label namespace for network policies
kubectl label namespace harbor-system name=harbor-system
kubectl label namespace harbor-system security.harbor.io/enabled=true
Harbor Values Configuration
Create a comprehensive values file for Harbor configuration:
# harbor-values.yaml
global:
  # External URL for Harbor
  externalURL: https://harbor.yourdomain.com
  # Image pull policy
  imagePullPolicy: IfNotPresent
  # Storage class for all components
  storageClass: "longhorn-harbor-registry"

# Database configuration
database:
  # Use internal PostgreSQL
  type: internal
  internal:
    # Database password
    password: "HarborDB123!"
    # Storage configuration
    persistence:
      enabled: true
      storageClass: "longhorn-harbor-db"
      size: 20Gi
    # Resource limits
    resources:
      requests:
        memory: "512Mi"
        cpu: "250m"
      limits:
        memory: "2Gi"
        cpu: "1000m"
    # Node selection
    nodeSelector:
      harbor.io/workload: database

# Redis configuration
redis:
  # Use internal Redis
  type: internal
  internal:
    # Resource limits
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "1Gi"
        cpu: "500m"
    # Node selection
    nodeSelector:
      harbor.io/workload: core

# Core Harbor configuration
core:
  # Resource configuration
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  # Node selection
  nodeSelector:
    harbor.io/workload: core
  # Environment configurations
  env:
    # GDPR compliance
    GDPR_DELETE_USER: true

# Job service configuration
jobservice:
  # Resource configuration
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  # Node selection
  nodeSelector:
    harbor.io/workload: core

# Registry configuration
registry:
  # Registry storage
  storage:
    type: filesystem
    filesystem:
      rootdirectory: /storage
  # Storage persistence
  persistence:
    enabled: true
    storageClass: "longhorn-harbor-registry"
    size: 100Gi
  # Resource configuration
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "1Gi"
      cpu: "500m"
  # Node selection
  nodeSelector:
    harbor.io/workload: core

# Trivy security scanner
trivy:
  enabled: true
  # Resource configuration
  resources:
    requests:
      memory: "512Mi"
      cpu: "200m"
    limits:
      memory: "2Gi"
      cpu: "1000m"
  # Node selection
  nodeSelector:
    harbor.io/workload: core

# Notary for content trust
notary:
  enabled: true
  server:
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        cpu: "500m"
  signer:
    resources:
      requests:
        memory: "256Mi"
        cpu: "100m"
      limits:
        memory: "512Mi"
        cpu: "500m"

# Ingress configuration
ingress:
  enabled: true
  className: "nginx"
  annotations:
    # Certificate management
    cert-manager.io/cluster-issuer: "letsencrypt-prod-harbor"
    # NGINX specific configurations
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "1024m"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    # Security headers
    nginx.ingress.kubernetes.io/server-snippet: |
      add_header X-Content-Type-Options nosniff;
      add_header X-Frame-Options DENY;
      add_header X-XSS-Protection "1; mode=block";
      add_header Strict-Transport-Security "max-age=31536000; includeSubdomains; preload";
  hosts:
    - name: harbor.yourdomain.com
      tls: true
      tlsSecret: harbor-tls
    - name: notary.yourdomain.com
      tls: true
      tlsSecret: notary-tls

# Additional security configurations
portal:
  # Resource configuration
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "500m"
  # Node selection
  nodeSelector:
    harbor.io/workload: core
Installing Harbor
Deploy Harbor using the SUSE Application Collection:
# Install Harbor via Helm
helm upgrade --install harbor \
suse-application-collection/harbor \
--namespace harbor-system \
--values harbor-values.yaml \
--timeout 20m \
--wait
# Monitor deployment status
kubectl get pods -n harbor-system -w
# Check all services
kubectl get svc -n harbor-system
# Verify ingress configuration
kubectl get ingress -n harbor-system
Post-Installation Verification
# Check pod status
kubectl get pods -n harbor-system
# Verify PVC status
kubectl get pvc -n harbor-system
# Check certificate status
kubectl get certificate -n harbor-system
# Test external access
curl -I https://harbor.yourdomain.com
curl -I https://notary.yourdomain.com
Step 7: Authentication and Authorization Setup
Initial Admin Configuration
Access Harbor UI and configure initial settings:
- Navigate to https://harbor.yourdomain.com
- Login with admin credentials (default: admin/Harbor12345)
- Change admin password immediately
- Configure system settings
# Get admin password from secret (if using auto-generated)
kubectl get secret harbor-core -n harbor-system -o jsonpath='{.data.HARBOR_ADMIN_PASSWORD}' | base64 -d
LDAP Integration
Configure LDAP/Active Directory authentication:
- Navigate to Administration → Configuration → Authentication
- Select “LDAP” as auth mode
- Configure LDAP settings:
# LDAP Configuration Example
LDAP URL: ldaps://ldap.yourdomain.com:636
Search DN: cn=harbor-service,ou=Service Accounts,dc=yourdomain,dc=com
Search Password: [service-account-password]
Base DN: dc=yourdomain,dc=com
Filter: (objectClass=person)
UID: sAMAccountName
Scope: Subtree
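Before saving, verify that the service account can bind and that the filter actually returns users. Harbor's built-in "Test LDAP Server" button performs this check from inside the cluster; an equivalent check from a workstation with ldapsearch, using the values above, looks like:
# Verify bind DN, base DN and filter against the directory
ldapsearch -H ldaps://ldap.yourdomain.com:636 \
  -D "cn=harbor-service,ou=Service Accounts,dc=yourdomain,dc=com" -W \
  -b "dc=yourdomain,dc=com" "(objectClass=person)" sAMAccountName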
OIDC Integration (Modern Authentication)
For modern OAuth/OIDC integration with providers like Azure AD:
# OIDC Configuration
Auth Mode: OIDC
OIDC Provider Name: Azure AD
OIDC Endpoint: https://login.microsoftonline.com/[tenant-id]/v2.0
OIDC Client ID: [application-client-id]
OIDC Client Secret: [application-client-secret]
Group Claim Name: groups
OIDC Scope: openid,profile,email,groups
OIDC Admin Group: harbor-admins
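Harbor reads the provider's endpoints from the standard OIDC discovery document published under the configured endpoint, so it is worth confirming the document resolves before switching auth modes (replace [tenant-id] with your Azure AD tenant ID):
# Check the OIDC discovery document that Harbor will consume
curl -s https://login.microsoftonline.com/[tenant-id]/v2.0/.well-known/openid-configuration | jq '.issuer, .authorization_endpoint, .token_endpoint'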
Robot Account Configuration
Create robot accounts for automated CI/CD access:
# Create project-specific robot account via Harbor API
curl -X POST \
  https://harbor.yourdomain.com/api/v2.0/projects/1/robots \
  -H 'accept: application/json' \
  -H 'authorization: Basic [base64-encoded-admin-creds]' \
  -H 'content-type: application/json' \
  -d '{
    "name": "ci-cd-robot",
    "description": "Robot account for CI/CD pipelines",
    "duration": 365,
    "level": "project",
    "permissions": [
      {
        "kind": "project",
        "namespace": "my-project",
        "access": [
          {
            "resource": "repository",
            "action": "push"
          },
          {
            "resource": "repository",
            "action": "pull"
          }
        ]
      }
    ]
  }'
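The creation response contains the robot's secret, which is shown only once. Assuming Harbor's default robot$ account prefix, the credentials are then used like any other registry login in CI/CD:
# Authenticate and push with the robot account (secret comes from the creation response)
docker login harbor.yourdomain.com -u 'robot$my-project+ci-cd-robot' -p '[robot-token]'
docker tag myapp:1.0 harbor.yourdomain.com/my-project/myapp:1.0
docker push harbor.yourdomain.com/my-project/myapp:1.0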
Step 8: Security Hardening and Best Practices
Network Security
Implement network policies for Harbor security:
# Network policy to restrict Harbor namespace traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: harbor-network-policy
  namespace: harbor-system
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Allow ingress controller traffic
    - from:
        - namespaceSelector:
            matchLabels:
              name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
    # Allow internal Harbor communication
    - from:
        - namespaceSelector:
            matchLabels:
              name: harbor-system
  egress:
    # Allow DNS resolution
    - to:
        - namespaceSelector:
            matchLabels:
              name: kube-system
      ports:
        - protocol: UDP
          port: 53
    # Allow LDAP/OIDC communication
    - to: []
      ports:
        - protocol: TCP
          port: 443
        - protocol: TCP
          port: 636
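Apply the policy and confirm the selectors match your environment; in particular, the example assumes the ingress controller's namespace carries a name=ingress-nginx label, which will differ if you use the controller bundled with RKE2 in kube-system:
# Apply and inspect the network policy
kubectl apply -f harbor-network-policy.yaml
kubectl describe networkpolicy harbor-network-policy -n harbor-system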
Pod Security Standards
Apply pod security standards to the Harbor namespace:
# Apply pod security standard labels
kubectl label namespace harbor-system \
pod-security.kubernetes.io/enforce=restricted \
pod-security.kubernetes.io/audit=restricted \
pod-security.kubernetes.io/warn=restricted
Security Scanning Configuration
Configure automatic vulnerability scanning:
- Navigate to Administration → Interrogation Services
- Ensure Trivy is enabled and configured
- Set up scanning policies:
# Example scanning policy configuration
Scan automatically: Yes
Scan on push: Yes
Prevent vulnerable images from running: Yes
Severity threshold: High
CVE allowlist: Configure based on organization policy
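The same policies can be set per project through the Harbor API, where project metadata keys such as auto_scan, prevent_vul and severity correspond to the options above. A sketch for the my-project project used earlier (the X-Is-Resource-Name header tells Harbor to treat the path segment as a project name rather than an ID):
# Enable scan-on-push and block pulls of vulnerable images for one project
curl -X PUT https://harbor.yourdomain.com/api/v2.0/projects/my-project \
  -H 'authorization: Basic [base64-encoded-admin-creds]' \
  -H 'X-Is-Resource-Name: true' \
  -H 'content-type: application/json' \
  -d '{"metadata": {"auto_scan": "true", "prevent_vul": "true", "severity": "high"}}'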
Step 9: Integration with RKE2 Container Runtime
Configuring containerd for Harbor
Configure RKE2’s containerd to use Harbor as a private registry:
# Create containerd registry configuration
# /etc/rancher/rke2/registries.yaml
mirrors:
  "harbor.yourdomain.com":
    endpoint:
      - "https://harbor.yourdomain.com"
configs:
  "harbor.yourdomain.com":
    auth:
      username: "robot$harbor-system+ci-cd-robot"
      password: "[robot-token]"
    tls:
      cert_file: "/etc/ssl/certs/harbor-client.crt"
      key_file: "/etc/ssl/private/harbor-client.key"
      ca_file: "/etc/ssl/certs/harbor-ca.crt"
      insecure_skip_verify: false
# Apply configuration to all RKE2 nodes
sudo systemctl restart rke2-server # On server nodes
sudo systemctl restart rke2-agent # On agent nodes
# Verify registry configuration
sudo crictl info | jq '.config.registry'
Image Pull Secret Configuration
# Create image pull secret for Harbor
kubectl create secret docker-registry harbor-registry-secret \
--docker-server=harbor.yourdomain.com \
--docker-username="robot\$harbor-system+ci-cd-robot" \
--docker-password="[robot-token]" \
--namespace=default
# Create service account with image pull secret
apiVersion: v1
kind: ServiceAccount
metadata:
  name: harbor-service-account
  namespace: default
imagePullSecrets:
  - name: harbor-registry-secret
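Workloads can then pull from Harbor either by listing the secret directly under imagePullSecrets or by running under the service account above; a minimal test pod (image path assumed from the earlier push example):
# Test pod that pulls a private image via the Harbor service account
apiVersion: v1
kind: Pod
metadata:
  name: harbor-pull-test
  namespace: default
spec:
  serviceAccountName: harbor-service-account
  containers:
    - name: app
      image: harbor.yourdomain.com/my-project/myapp:1.0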
Step 10: Monitoring and Observability
Prometheus Integration
Configure Harbor metrics collection for Prometheus:
# Enable Harbor metrics in values.yaml
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    interval: 30s
    scrapeTimeout: 25s
    labels:
      release: prometheus

# ServiceMonitor for Harbor metrics
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: harbor-metrics
  namespace: harbor-system
  labels:
    app: harbor
spec:
  selector:
    matchLabels:
      app: harbor
      component: core
  endpoints:
    - port: http
      interval: 30s
      path: /api/v2.0/metrics
Grafana Dashboard
Import Harbor-specific Grafana dashboard for comprehensive monitoring:
# Key Harbor metrics to monitor:
# - harbor_up: Harbor service availability
# - harbor_project_total: Number of projects
# - harbor_repository_total: Number of repositories
# - harbor_artifact_total: Number of artifacts
# - harbor_quota_usage_bytes: Storage usage
# - harbor_replication_total: Replication statistics
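With the metrics scraped, basic alerting can be layered on top; a minimal sketch using the harbor_up metric listed above (verify the exact metric names exposed by your Harbor version before relying on them):
# PrometheusRule alerting when a Harbor component reports as down
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: harbor-alerts
  namespace: harbor-system
  labels:
    release: prometheus
spec:
  groups:
    - name: harbor.rules
      rules:
        - alert: HarborComponentDown
          expr: harbor_up == 0
          for: 5m
          labels:
            severity: critical
          annotations:
            summary: "A Harbor component has reported as down for 5 minutes"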
Troubleshooting Common Issues
Certificate Issues
# Check certificate status
kubectl describe certificate harbor-tls -n harbor-system
# Check cert-manager logs
kubectl logs -n cert-manager deployment/cert-manager -f
# Test certificate with openssl
echo | openssl s_client -connect harbor.yourdomain.com:443 -servername harbor.yourdomain.com
Storage Issues
# Check PVC status
kubectl get pvc -n harbor-system
# Check Longhorn volume status
kubectl get volumes -n longhorn-system
# Check storage events
kubectl get events -n harbor-system --sort-by='.lastTimestamp'
Performance Issues
# Check resource utilization
kubectl top pods -n harbor-system
# Check database performance
kubectl exec -it harbor-database-0 -n harbor-system -- psql -U postgres -c "SELECT * FROM pg_stat_activity;"
# Check registry storage performance
kubectl exec -it [registry-pod] -n harbor-system -- df -h /storage
Conclusion and Next Steps
You now have a fully functional Harbor container registry deployed on RKE2 using SUSE Application Collection. This enterprise-grade setup provides:
- Secure container registry with vulnerability scanning
- High availability with multi-replica storage
- Enterprise authentication via LDAP/OIDC
- Automated certificate management with cert-manager
- Comprehensive monitoring and alerting
In Part 2 of this series, we’ll cover ArgoCD installation and integration with Harbor, including:
- ArgoCD deployment via SUSE Application Collection
- Private registry integration with Harbor
- GitOps workflows and security policies
- CI/CD pipeline automation
- Production operations and maintenance
This foundation provides the enterprise container management platform needed for modern cloud-native applications. The combination of Harbor’s security features with RKE2’s hardened Kubernetes distribution creates a robust platform suitable for the most demanding production environments.
Key Takeaways
- Plan storage carefully – Use appropriate storage classes for different Harbor components
- Security first – Implement proper authentication, network policies, and scanning from day one
- Monitor proactively – Set up comprehensive monitoring and alerting for production deployments
- Test thoroughly – Validate all integrations before moving to production workloads
Ready to implement GitOps with ArgoCD? Continue to Part 2 of this series where we’ll integrate ArgoCD with your new Harbor registry for complete CI/CD automation.