## A general example

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: nginx
  inlet: HostPort
  hostPort:
    httpPort: 80
    httpsPort: 443
  resourcesRequests:
    mode: VPA
    vpa:
      mode: Auto
      cpu:
        max: 100m
      memory:
        max: 200Mi
```
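Traffic is routed through this controller by Ingress objects that reference the class set in `ingressClass`. A minimal sketch, assuming a hypothetical Service `my-app` serving port 80 in the `default` namespace and an assumed domain `app.example.com`:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app             # hypothetical name, for illustration only
  namespace: default
spec:
  ingressClassName: nginx  # must match spec.ingressClass of the controller
  rules:
  - host: app.example.com  # assumed domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app   # hypothetical Service
            port:
              number: 80
```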
## An example for AWS (Network Load Balancer)

When the load balancer is created, it uses all zones available in the cluster.

In each zone, the load balancer gets a public IP address. If a zone has an instance running an Ingress controller, an A record with that zone's balancer IP address is automatically added to the balancer's domain name.

If a zone has no instances running an Ingress controller, its IP address is automatically removed from DNS.

If a zone has only one instance running an Ingress controller, the balancer IP address of that zone will be temporarily excluded from DNS while the Pod is being restarted.
```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: "nginx"
  inlet: "LoadBalancer"
  loadBalancer:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
```
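The balancer above is internet-facing. If an internal NLB is required, one possible variant is sketched below, assuming the in-tree AWS cloud provider, which recognizes the `service.beta.kubernetes.io/aws-load-balancer-internal` annotation; this variant is not taken from the Deckhouse documentation:

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: "nginx"
  inlet: "LoadBalancer"
  loadBalancer:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      # Assumption: the in-tree AWS cloud provider is in use; it recognizes
      # this annotation and provisions an internal (non-internet-facing) NLB.
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```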
## An example for GCP

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: "nginx"
  inlet: "LoadBalancer"
```
## An example for OpenStack

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main-lbwpp
spec:
  inlet: LoadBalancerWithProxyProtocol
  ingressClass: nginx
  loadBalancerWithProxyProtocol:
    annotations:
      loadbalancer.openstack.org/proxy-protocol: "true"
      loadbalancer.openstack.org/timeout-member-connect: "2000"
```
## An example for bare metal (Host Ports)

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: nginx
  inlet: HostWithFailover
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
  tolerations:
  - effect: NoExecute
    key: dedicated.deckhouse.io
    value: frontend
```
## An example for bare metal (MetalLB Load Balancer)

```yaml
apiVersion: deckhouse.io/v1
kind: IngressNginxController
metadata:
  name: main
spec:
  ingressClass: nginx
  inlet: LoadBalancer
  nodeSelector:
    node-role.deckhouse.io/frontend: ""
  tolerations:
  - effect: NoExecute
    key: dedicated.deckhouse.io
    value: frontend
```
When using MetalLB, its speaker Pods must run on the same nodes as the Ingress controller Pods.

The controller must receive the real IP addresses of clients, so its Service is created with the parameter `externalTrafficPolicy: Local` (disabling cross-node SNAT). To satisfy this parameter, the MetalLB speaker announces the Service only from the nodes where the target Pods are running.
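To illustrate, the Service created for the controller looks roughly like the sketch below; the name, namespace, and selector labels are assumptions for illustration, not literal values from the module:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: main-load-balancer      # assumed name, for illustration
  namespace: d8-ingress-nginx   # assumed namespace
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local  # keep client source IPs; no cross-node SNAT
  selector:
    app: controller             # assumed labels of the controller Pods
    name: main
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
```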
Thus, for this example, the `metallb` module configuration should look like this:
```yaml
metallb:
  speaker:
    nodeSelector:
      node-role.deckhouse.io/frontend: ""
    tolerations:
    - effect: NoExecute
      key: dedicated.deckhouse.io
      value: frontend
```