
Varnish Cache
Performance Infrastructure · Pro Plan
Kubernetes-native HTTP caching with automatic service discovery, cache invalidation, and multi-tenant support. Enterprise-grade performance at scale.
Deploy a Kubernetes-native HTTP caching layer with EDKA’s Varnish solution.
Kubernetes-Native Features
Automatic Service Discovery
- Service Endpoint Watching: Monitors Kubernetes services for real-time updates
- Dynamic Backend Updates: Services added/removed without restarts
- Transparent Proxy: Sits between Ingress and your applications
- Multi-Service Support: Handle multiple backend services simultaneously
Intelligent Cache Management
- BAN Support: Instant cache invalidation using regular expressions
- PURGE Support: Remove specific cached objects immediately (see the VCL sketch after this list)
- Signaller Component: Broadcasts invalidation across all cache nodes
- Template-Based Control: Configure caching behavior via ConfigMap templates
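As an illustration of the PURGE support above, the fragment below shows how an HTTP PURGE request is typically handled in VCL. It is a minimal sketch meant to be merged into the generated configuration; the invalidators ACL and its address ranges are placeholders you would adapt to your cluster network.

acl invalidators {
    "127.0.0.1";
    "10.0.0.0"/8;    # illustrative pod network CIDR, adjust to your cluster
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ invalidators) {
            return (synth(405, "PURGE not allowed from this address"));
        }
        # Drop the single cached object matching this URL and Host header
        return (purge);
    }
}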
Enterprise-Grade Reliability
- High Availability Mode: Multiple Varnish nodes with shared configuration
- StatefulSet Deployment: Persistent cache across pod restarts
- Load Balancing: Hash-based and round-robin routing strategies (sketched below)
- Health Probes: Kubernetes-native liveness and readiness checks
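The hash-based and round-robin strategies mentioned above map onto Varnish's directors VMOD. The sketch below shows both; the backend definitions stand in for the controller-generated ones, and the host addresses are illustrative.

vcl 4.1;

import directors;

backend app_0 { .host = "10.42.0.11"; .port = "8080"; }
backend app_1 { .host = "10.42.0.12"; .port = "8080"; }

sub vcl_init {
    # Round-robin: spread requests evenly across all healthy backends
    new rr = directors.round_robin();
    rr.add_backend(app_0);
    rr.add_backend(app_1);

    # Hash: consistently route the same key (here, the URL) to the same backend
    new h = directors.hash();
    h.add_backend(app_0, 1.0);
    h.add_backend(app_1, 1.0);
}

sub vcl_recv {
    set req.backend_hint = h.backend(req.url);
    # or: set req.backend_hint = rr.backend();
}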
How It Works
1. Service-Based Architecture
Varnish sits between your Ingress controller and application services, acting as a transparent caching layer:
Ingress → Varnish Cache → Your Application
2. Dynamic Configuration via Templates
Configure caching behavior using Go templates in ConfigMaps; the controller automatically generates optimized VCL from them:
backend {{ .Name }} {
    .host = "{{ .Host }}";
    .port = "{{ .Port }}";
}
3. Automatic Endpoint Discovery
The controller watches Kubernetes service endpoints and automatically updates the Varnish configuration (see the rendered example after this list) when:
- Services are added or removed
- Pods scale up or down
- Backend endpoints change
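For example, assuming the template shown earlier is rendered once per discovered endpoint, the generated VCL for a service that currently has two ready pods might look like this (the service name and addresses are illustrative):

backend catalog_0 {
    .host = "10.42.1.23";
    .port = "8080";
}
backend catalog_1 {
    .host = "10.42.2.17";
    .port = "8080";
}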
Monitoring & Observability
Built-in Metrics
- Prometheus Integration: Export metrics in Prometheus format
- Cache Hit Ratio: Monitor cache effectiveness in real time (example below)
- Request Analytics: Track requests, responses, and errors
- Backend Health: Monitor upstream service availability
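A simple way to observe the cache hit ratio per response, complementing the exported metrics, is to tag every delivered object with a hit/miss header. A minimal sketch:

sub vcl_deliver {
    # obj.hits counts how often this object has been served from cache
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT";
        set resp.http.X-Cache-Hits = obj.hits;
    } else {
        set resp.http.X-Cache = "MISS";
    }
}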
Management Features
- kubectl Integration: Manage cache via standard Kubernetes tools
- Rolling Updates: Deploy changes without cache loss
- Backup & Restore: Persistent cache state management
- Multi-Region Support: Deploy across availability zones
Advanced Features
Cache Invalidation Strategies
- Cross-Node Signalling: Automatically propagate PURGE/BAN across all nodes
- Regex-Based BAN: Invalidate multiple URLs with pattern matching (see the sketch below)
- Direct PURGE: Remove specific cached objects instantly
- CI/CD Integration: Trigger cache invalidation from deployment pipelines
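The regex-based BAN described above is typically exposed as a custom HTTP method handled in vcl_recv, which a CI/CD pipeline can call with curl or any HTTP client after a deployment. The fragment below is a hedged sketch: the X-Ban-Regex request header is an assumption, and the invalidators ACL is the one defined in the PURGE sketch earlier.

sub vcl_recv {
    if (req.method == "BAN") {
        if (client.ip !~ invalidators) {
            return (synth(405, "BAN not allowed from this address"));
        }
        # Invalidate every cached object whose URL matches the supplied pattern
        ban("req.url ~ " + req.http.X-Ban-Regex);
        return (synth(200, "Ban added"));
    }
}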
Security & Compliance
- Request Filtering: Block malicious requests before they reach backends (sketched below)
- Rate Limiting: Protect services from abuse and DDoS
- IP Whitelisting: Restrict access to specific IP ranges
- GDPR Compliance: Cookie handling and privacy headers
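Request filtering and IP restrictions from the list above can be expressed directly in VCL; rate limiting is usually added with a separate VMOD such as vsthrottle and is not shown here. The sketch below blocks uncommon HTTP methods and restricts an admin path to an allowlisted range; the path and the admin_nets ACL are illustrative.

acl admin_nets {
    "10.0.0.0"/8;    # internal networks only, adjust to your ranges
}

sub vcl_recv {
    # Reject methods the backends never expect to see
    if (req.method !~ "^(GET|HEAD|POST|PUT|PATCH|DELETE|OPTIONS)$") {
        return (synth(405, "Method not allowed"));
    }
    # Restrict sensitive paths to allowlisted source addresses
    if (req.url ~ "^/admin" && client.ip !~ admin_nets) {
        return (synth(403, "Forbidden"));
    }
}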
Developer-Friendly
- Template-Based Configuration: Use Go templates for flexible VCL generation
- Environment Variables: Support for variable interpolation in templates
- Custom VCL: Override default behavior with custom caching logic (example below)
- kubectl Compatible: Manage cache using standard Kubernetes tools
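As an example of the custom VCL escape hatch, the fragment below overrides the cache TTL for static assets regardless of what the backend sends. It is a sketch rather than the controller's default behavior; the file extensions and the seven-day TTL are arbitrary choices.

sub vcl_backend_response {
    # Cache static assets aggressively, even if the backend sends
    # short or missing Cache-Control headers
    if (bereq.url ~ "\.(css|js|png|jpg|jpeg|gif|svg|woff2?)$") {
        set beresp.ttl = 7d;
        unset beresp.http.Set-Cookie;
    }
}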
Use Cases
Microservices Architecture
- Cache responses between microservices
- Reduce inter-service latency
- Implement circuit breaker patterns (see the sketch after this list)
- Service mesh integration
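Varnish does not ship a circuit breaker as such, but a similar effect, as referenced above, can be approximated with health probes plus grace mode: while a backend is probed sick, cached objects keep being served stale instead of hammering the failing service. A hedged sketch, with an assumed /healthz endpoint and illustrative addresses:

vcl 4.1;

probe app_health {
    .url = "/healthz";    # assumed health endpoint
    .interval = 5s;
    .timeout = 2s;
    .window = 5;
    .threshold = 3;
}

backend app { .host = "10.42.0.11"; .port = "8080"; .probe = app_health; }

sub vcl_backend_response {
    # Keep objects usable for 10 minutes past their TTL so stale content
    # can be served while the backend is sick or a refresh is in flight
    set beresp.grace = 10m;
}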
Content Delivery
- Static asset acceleration
- Dynamic content caching with ESI (sketched below)
- Geo-distributed caching
- Mobile-optimized delivery
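Dynamic content caching with ESI, mentioned above, only needs fragment processing to be switched on for the responses that contain ESI include tags. A minimal sketch; keying on the Surrogate-Control header is one common convention, not the only option.

sub vcl_recv {
    # Advertise ESI support to the backend
    set req.http.Surrogate-Capability = "varnish=ESI/1.0";
}

sub vcl_backend_response {
    # Enable ESI processing when the backend asks for it
    if (beresp.http.Surrogate-Control ~ "ESI/1.0") {
        unset beresp.http.Surrogate-Control;
        set beresp.do_esi = true;
    }
}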
API Gateway
- Response caching for REST/GraphQL APIs (see the example after this list)
- Request coalescing for identical queries
- Backend protection during traffic spikes
- API versioning support
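For the API gateway pattern above, read-only API responses can be cached for a short window; Varnish also coalesces concurrent misses for the same object into a single backend fetch by default, which is what protects backends during traffic spikes. A sketch, assuming APIs live under /api/ and are safe to cache for 30 seconds:

sub vcl_recv {
    # Cookies normally make a request uncacheable; drop them for read-only API calls
    if (req.url ~ "^/api/" && req.method == "GET") {
        unset req.http.Cookie;
    }
}

sub vcl_backend_response {
    if (bereq.url ~ "^/api/" && bereq.method == "GET") {
        set beresp.ttl = 30s;
        unset beresp.http.Set-Cookie;
    }
}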
Performance Metrics
Based on real-world deployments:
- 99.9% Cache Availability: Enterprise-grade reliability
- <1ms Response Time: For cached content
- Up to 90% Traffic Reduction: Far fewer requests reach backend services
- Up to 10x Cost Savings: Lower infrastructure requirements because most traffic is served from cache
Deploy Varnish