Installation¶
This guide covers detailed installation instructions for all deployment methods.
Prerequisites¶
Required¶
- Kubernetes Clusters: one or more clusters with Velero installed
- Cluster Access: Kubeconfig files or service account tokens with permissions to read/write Velero resources
- Python 3.13+ (for local development)
- Docker (for container deployment)
- Helm 3+ (for Kubernetes deployment)
Optional¶
- OIDC Provider: Dex, Keycloak, Auth0, Okta, etc.
- Ingress Controller: nginx, Traefik, or similar (for Kubernetes deployment)
- Cert Manager: For automatic TLS certificates
Velero Cluster Permissions¶
The Velero Dashboard needs a service account or kubeconfig with these permissions:
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: velero-dashboard-role
rules:
  # Velero resources
  - apiGroups: ["velero.io"]
    resources:
      - backups
      - restores
      - schedules
      - backupstoragelocations
      - volumesnapshotlocations
      - backuprepositories
      - resticrepositories
    verbs: ["get", "list", "create", "update", "delete", "patch"]
  # Backup logs
  - apiGroups: ["velero.io"]
    resources:
      - backups/logs
      - restores/logs
    verbs: ["get"]
  # Pod logs for debugging
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list"]
  # Namespaces (for listing)
  - apiGroups: [""]
    resources: ["namespaces"]
    verbs: ["get", "list"]
```
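The ClusterRole above still has to be bound to the identity the dashboard uses. A minimal sketch, assuming a `velero-dashboard` ServiceAccount in a `velero-dashboard` namespace (adjust names to match your setup):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: velero-dashboard
  namespace: velero-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: velero-dashboard-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: velero-dashboard-role
subjects:
  - kind: ServiceAccount
    name: velero-dashboard
    namespace: velero-dashboard
```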
Installation Methods¶
Method 1: Local Development¶
Step 1: Clone Repository¶
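Fetch the source first (the repository URL below matches the one used in the Docker instructions later; substitute your own fork if needed):

```shell
git clone https://github.com/yourusername/velero-dashboard.git
cd velero-dashboard
```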
Step 2: Create Virtual Environment¶
```bash
# Create virtual environment
python3.13 -m venv venv

# Activate it
source venv/bin/activate   # Linux/macOS
# OR
venv\Scripts\activate      # Windows
```
Step 3: Install Dependencies¶
```bash
# Install Python packages
pip install -r requirements.txt

# Install Kopia (optional, for repository browsing)
# macOS
brew install kopia

# Linux
wget https://github.com/kopia/kopia/releases/download/v0.17.0/kopia-0.17.0-linux-x64.tar.gz
tar -xzf kopia-0.17.0-linux-x64.tar.gz
sudo mv kopia-0.17.0-linux-x64/kopia /usr/local/bin/
```
Step 4: Configuration Files¶
Create configuration directory:
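The files in this step live under `config/`, so create the directory first:

```shell
mkdir -p config
```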
Create config/clusters.yaml:
```yaml
clusters:
  - name: dev-cluster
    description: Development Cluster
    environment: development
    api_server: https://kubernetes.default.svc
    auth_method: kubeconfig
    kubeconfig_path: ~/.kube/config
    context_name: dev-context  # optional
    is_active: true

  - name: prod-cluster
    description: Production Cluster
    environment: production
    api_server: https://k8s-prod.example.com:6443
    auth_method: token
    token: "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..."
    certificate_authority_data: "LS0tLS1CRUdJTi..."  # base64-encoded CA cert
    is_active: true
```
Create config/casbin_policy.csv:
```csv
# Define roles
p, velero.admin, *, *, .*
p, velero.operator, *, backup, (view|list|create|delete|logs)
p, velero.operator, *, restore, (view|list|create|logs)
p, velero.operator, *, schedule, (view|list|create|delete)
p, velero.operator, *, bsl, (view|list)
p, velero.operator, *, repository, (view|list)
p, velero.viewer, *, *, (view|list|logs)

# Assign roles to users
g, admin@example.com, velero.admin
g, devops@example.com, velero.operator
g, developer@example.com, velero.viewer

# Environment-specific access
g, prod-admin@example.com, velero.admin.prod
p, velero.admin.prod, *:*:production, *, .*
```
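The dashboard evaluates these rules with Casbin. As a rough, simplified illustration of the semantics only (not the dashboard's actual enforcement code), each `p` row grants a role an action pattern on a resource within a cluster scope, and each `g` row assigns a role to a user:

```python
import re

# Simplified subset of casbin_policy.csv: (role, cluster, resource, action_pattern)
policies = [
    ("velero.admin",    "*", "*",       ".*"),
    ("velero.operator", "*", "backup",  "(view|list|create|delete|logs)"),
    ("velero.operator", "*", "restore", "(view|list|create|logs)"),
    ("velero.viewer",   "*", "*",       "(view|list|logs)"),
]

# Mirrors the g-lines: user -> roles
roles = {
    "admin@example.com": {"velero.admin"},
    "devops@example.com": {"velero.operator"},
    "developer@example.com": {"velero.viewer"},
}

def allowed(user, cluster, resource, action):
    """True if any policy row for one of the user's roles matches the request."""
    for role, p_cluster, p_resource, p_action in policies:
        if role not in roles.get(user, set()):
            continue
        if p_cluster not in ("*", cluster):
            continue
        if p_resource not in ("*", resource):
            continue
        if re.fullmatch(p_action, action):
            return True
    return False

print(allowed("devops@example.com", "prod-cluster", "backup", "create"))     # True
print(allowed("developer@example.com", "prod-cluster", "backup", "create"))  # False
```

So an operator can create backups anywhere, while a viewer is limited to read-only actions.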
Step 5: Environment Variables¶
Create .env file:
```bash
# Flask Configuration
SECRET_KEY=your-secret-key-at-least-32-characters-long
FLASK_ENV=development
FLASK_DEBUG=True

# Application
APP_URL=http://localhost:8000
DEFAULT_ITEMS_PER_PAGE=100

# OIDC Configuration
OIDC_CLIENT_ID=velerodashboard
OIDC_CLIENT_SECRET=dev-secret
OIDC_DISCOVERY_URL=http://localhost:5556/dex
USE_DEX_PROXY=True
OIDC_VERIFY_SSL=False

# Dex Proxy (if using built-in proxy)
DEX_BASE_URL=http://localhost:5556/dex

# Kopia (optional)
KOPIA_BIN=/usr/local/bin/kopia
```
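`SECRET_KEY` should be random and kept private. One way to generate a suitable value (the same approach the Docker example later in this guide uses):

```shell
python3 -c 'import secrets; print(secrets.token_hex(32))'
```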
Step 6: Run Application¶
```bash
# Load environment variables (set -a exports them so child processes see them)
set -a; source .env; set +a

# Run Flask development server
python app.py
```
Visit http://localhost:8000
Method 2: Docker¶
Step 1: Build Image¶
```bash
# Clone repository
git clone https://github.com/yourusername/velero-dashboard.git
cd velero-dashboard

# Build Docker image
docker build -t velerodashboard:latest .

# Or with a specific version tag
docker build -t velerodashboard:1.0.0 .
```
Step 2: Prepare Configuration¶
```bash
# Create config directory
mkdir -p ./config

# Create clusters.yaml and casbin_policy.csv
# (See Method 1, Step 4 for file contents)
```
Step 3: Run Container¶
```bash
docker run -d \
  --name velerodashboard \
  -p 8000:8000 \
  -e SECRET_KEY="$(python -c 'import secrets; print(secrets.token_hex(32))')" \
  -e OIDC_CLIENT_ID="velerodashboard" \
  -e OIDC_CLIENT_SECRET="your-oidc-secret" \
  -e OIDC_DISCOVERY_URL="https://dex.example.com/dex" \
  -e APP_URL="http://localhost:8000" \
  -v "$(pwd)/config:/app/config:ro" \
  -v ~/.kube:/root/.kube:ro \
  velerodashboard:latest
```
Step 4: Verify¶
```bash
# Check logs
docker logs -f velerodashboard

# Check if running
docker ps | grep velerodashboard

# Test endpoint
curl http://localhost:8000/health
```
Method 3: Kubernetes with Helm¶
Step 1: Prepare Helm Chart¶
```bash
# Clone repository
git clone https://github.com/yourusername/velero-dashboard.git
cd velero-dashboard

# Inspect default values
cat helm/velerodashboard/values.yaml
```
Step 2: Create Custom Values¶
Create values-prod.yaml:
```yaml
global:
  domain: velerodash.example.com
  tlsSecretName: velerodash-tls

dashboard:
  image:
    repository: your-registry.example.com/velerodashboard
    tag: "1.0.0"
    pullPolicy: IfNotPresent
  replicaCount: 3
  resources:
    requests:
      memory: "256Mi"
      cpu: "100m"
    limits:
      memory: "512Mi"
      cpu: "500m"

config:
  secretKey: ""  # Leave empty to auto-generate
  oidc:
    clientId: velerodashboard
    clientSecret: "your-oidc-secret"
    discoveryUrl: "https://dex.example.com/dex"
    verifySSL: true
    useDexProxy: false
  app:
    url: "https://velerodash.example.com"
    itemsPerPage: 100

clusters:
  - name: prod-cluster-1
    description: Production Cluster 1
    environment: production
    api_server: "https://k8s-prod-1.example.com:6443"
    auth_method: token
    token: "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..."
    certificate_authority_data: "LS0tLS1CRUdJTi..."
    is_active: true
  - name: prod-cluster-2
    description: Production Cluster 2
    environment: production
    api_server: "https://k8s-prod-2.example.com:6443"
    auth_method: token
    token: "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..."
    certificate_authority_data: "LS0tLS1CRUdJTi..."
    is_active: true

casbin:
  policies: |
    p, velero.admin, *, *, .*
    p, velero.operator, *, backup, (view|list|create|delete|logs)
    p, velero.operator, *, restore, (view|list|create|logs)
    p, velero.viewer, *, *, (view|list|logs)
    g, admin@example.com, velero.admin
    g, devops-team@example.com, velero.operator

ingress:
  enabled: true
  className: nginx
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
  hosts:
    - host: velerodash.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: velerodash-tls
      hosts:
        - velerodash.example.com

dex:
  enabled: false  # Use external Dex or other OIDC provider
```
Step 3: Install¶
```bash
# Create namespace
kubectl create namespace velero-dashboard

# Install with Helm
helm install velerodashboard ./helm/velerodashboard \
  -f values-prod.yaml \
  --namespace velero-dashboard

# Or upgrade if already installed
helm upgrade velerodashboard ./helm/velerodashboard \
  -f values-prod.yaml \
  --namespace velero-dashboard
```
Step 4: Verify Installation¶
```bash
# Check pods
kubectl get pods -n velero-dashboard

# Check services
kubectl get svc -n velero-dashboard

# Check ingress
kubectl get ingress -n velero-dashboard

# Check logs
kubectl logs -n velero-dashboard -l app=velerodashboard --tail=100 -f
```
Step 5: Access Dashboard¶
```bash
# If using LoadBalancer service
kubectl get svc -n velero-dashboard velerodashboard

# If using Ingress
echo "Access at: https://$(kubectl get ingress -n velero-dashboard velerodashboard -o jsonpath='{.spec.rules[0].host}')"
```
Post-Installation¶
1. Verify Cluster Connectivity¶
Log in to the dashboard and check:
- Navigate to "Clusters" page
- Verify all clusters show "Connected" status
- Check that the Velero version is displayed for each cluster
2. Test Permissions¶
- Create a test backup
- View backup logs
- Try accessing different namespaces
- Verify RBAC policies are enforced
3. Enable Monitoring (Optional)¶
```bash
# Add Prometheus annotations to the deployment
kubectl patch deployment velerodashboard -n velero-dashboard -p '
spec:
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8000"
        prometheus.io/path: "/metrics"
'
```
4. Configure Backup (Optional)¶
Create a Velero schedule to backup the dashboard configuration:
```bash
velero schedule create velerodashboard-config \
  --schedule="0 2 * * *" \
  --include-namespaces velero-dashboard \
  --ttl 720h
```