We will now open some external traffic to our local K8S cluster. So far we have only been able to hit nginx from another pod inside the cluster; by the end of this workshop, external access will be possible too. We will also set up local SSL.
Prerequisites and objectives
We will assume here that you have followed the full Kubernetes(K8S) Workshop - Part 1 and Kubernetes(K8S) Workshop - Part 2
Let's do it
Ingress
The NGINX Ingress controller is an addon used to allow external traffic into your cluster. Ingress is the opposite of egress, which concerns traffic going out of the cluster.
Installing this addon is not possible with Minikube on the default macOS hypervisor, which is why we switched to VirtualBox earlier.
- Install the addon with Minikube
$ minikube addons enable ingress
▪ Using image us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.40.2
▪ Using image jettech/kube-webhook-certgen:v1.2.2
▪ Using image jettech/kube-webhook-certgen:v1.3.0
🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
If you are using a Minikube profile other than the default one, you will need to pass -p myprofile to the command above.
- Create a new file called ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: minikube.local
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 80
We are setting a local host, minikube.local, whose rule matches every path under /, and we are targeting the nginx service created in part 2 of the workshop (the backend service name must match its metadata, shown below); we also set the port number.
apiVersion: v1
kind: Service
metadata:
  name: nginx
There are multiple ways to configure ingresses: you can, for example, have multiple hosts pointing to different services in your cluster, restrict certain paths to certain services, and so on, as sketched below.
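A minimal sketch of a two-host setup (api.minikube.local and api-service are hypothetical names used for illustration, not objects created in this workshop):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: minikube.local
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: nginx
                port:
                  number: 80
    - host: api.minikube.local
      http:
        paths:
          - pathType: Prefix
            path: /
            backend:
              service:
                name: api-service # hypothetical second service
                port:
                  number: 8080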
- Apply and observe what is created
$ kubectl apply -f .
ingress.networking.k8s.io/ingress-service created
configmap/custom-index-config-map unchanged
deployment.apps/nginx-deployment unchanged
configmap/nginx-env-configmap unchanged
service/nginx unchanged
$ kubectl get ingress
NAME              CLASS    HOSTS            ADDRESS   PORTS   AGE
ingress-service   <none>   minikube.local             80      24s
Minikube IP and local host
When you launch your Minikube cluster, an IP will be allocated to it.
- Get your cluster IP
$ minikube ip # -p myprofile if needed
192.168.99.101
- Map the IP to a local hostname
$ sudo nano /etc/hosts
# Add this line, replace with your IP
192.168.99.101 minikube.local
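If you prefer a one-liner, you can append the entry without opening an editor:
$ echo "$(minikube ip) minikube.local" | sudo tee -a /etc/hosts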
- Try to hit your nginx pod: open your browser and go to http://minikube.local
The result is our custom nginx container created in the previous workshop, now reachable externally.
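You can also verify from the command line; curl should return the custom index page from part 2:
$ curl http://minikube.local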
Local SSL with cert-manager
We are going to set up local SSL with a self-signed certificate for demonstration purposes. For production or any real environment, you may rely on Let's Encrypt or another certificate authority.
To install cert-manager, we are going to use Helm. Helm is a package manager for Kubernetes clusters, much like Homebrew or apt are for macOS and Linux.
- First, we are going to create a new namespace to keep all of cert-manager's K8S objects separate from the default one
$ kubectl create namespace cert-manager
- Install helm
$ curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 > get_helm.sh
$ chmod 700 get_helm.sh
$ ./get_helm.sh
$ rm ./get_helm.sh
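- Verify the installation
$ helm version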
- Add cert-manager repo
helm repo add jetstack https://charts.jetstack.io
helm repo update
- Install cert-manager
helm install cert-manager jetstack/cert-manager --namespace cert-manager --version v1.2.0 --set installCRDs=true
It will create some K8S objects for you in the cert-manager namespace
$ kubectl get all -n cert-manager # -n specifies the namespace
NAME READY STATUS RESTARTS AGE
pod/cert-manager-85f9bbcd97-fkb5c 1/1 Running 0 21m
pod/cert-manager-cainjector-74459fcc56-92ft9 1/1 Running 0 21m
pod/cert-manager-webhook-57d97ccc67-jkng6 1/1 Running 0 21m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/cert-manager ClusterIP 10.108.152.226 <none> 9402/TCP 21m
service/cert-manager-webhook ClusterIP 10.100.183.15 <none> 443/TCP 21m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/cert-manager 1/1 1 1 21m
deployment.apps/cert-manager-cainjector 1/1 1 1 21m
deployment.apps/cert-manager-webhook 1/1 1 1 21m
NAME DESIRED CURRENT READY AGE
replicaset.apps/cert-manager-85f9bbcd97 1 1 1 21m
replicaset.apps/cert-manager-cainjector-74459fcc56 1 1 1 21m
replicaset.apps/cert-manager-webhook-57d97ccc67 1 1 1 21m
Now we will have to create two K8S objects to get our certificates. For self-signed ones, it's pretty straightforward.
ISSUER
- Create a new file called issuer.yml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
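Note that an Issuer is namespaced and can only issue certificates in its own namespace. If you want one issuer shared by the whole cluster, cert-manager provides a cluster-scoped variant with the same spec (a sketch; the name is arbitrary):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-cluster-issuer
spec:
  selfSigned: {}
A Certificate referencing it must then set kind: ClusterIssuer in its issuerRef.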
CERTIFICATE
- Create a new file called certificate.yml. We refer to the issuer through issuerRef, and we add dnsNames with our previously created local host.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
spec:
  dnsNames:
    - minikube.local
  secretName: selfsigned-cert-tls
  issuerRef:
    name: selfsigned-issuer
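For the ingress to actually serve this certificate over HTTPS, ingress.yml also needs a tls section in its spec referencing the certificate's secret (the secretName must match the one declared above; the existing rules stay unchanged):
spec:
  tls:
    - hosts:
        - minikube.local
      secretName: selfsigned-cert-tls
  rules:
    # ... same rules as before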
- We can now apply and observe the changes
$ kubectl apply -f .
certificate.cert-manager.io/selfsigned-cert created
ingress.networking.k8s.io/ingress-service configured
issuer.cert-manager.io/selfsigned-issuer created
configmap/custom-index-config-map unchanged
deployment.apps/nginx-deployment unchanged
configmap/nginx-env-configmap unchanged
service/nginx unchanged
$ kubectl get certs
NAME READY SECRET AGE
selfsigned-cert False selfsigned-cert-tls 4m42s
$ kubectl get issuer
NAME READY AGE
selfsigned-issuer True 6m6s
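If the certificate stays in READY False for a while, describing it shows the status conditions and events explaining why:
$ kubectl describe certificate selfsigned-cert
Once it turns Ready, cert-manager stores the generated key pair in the selfsigned-cert-tls secret referenced by the ingress.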
- We can now go to https://minikube.local
As we use a self-signed certificate, most browsers will warn you when visiting the page. You will need to open the advanced details and allow the website.
PVC & Volumes
We are going to test another important part of K8S here: Persistent Volume Claims (PVC).
PVCs act like a recipe pods use to obtain volumes. Volumes are then injected into pods, which gives us persistent storage across ephemeral pods. For example, if we run a database or anything similar, we want the data to survive the pod lifecycle.
- Create a new file called pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
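The claim does not set a storageClassName, so it falls back to the cluster's default StorageClass; on Minikube that is the hostPath-backed standard class, which you can check with:
$ kubectl get storageclass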
This lets pods claim a 1Gi volume with the ReadWriteOnce access mode, meaning the volume can be mounted read-write by a single node only; on our single-node Minikube cluster, both replicas can therefore share it.
- Modify the nginx-deployment.yml file: we add an entry into volumes and into volumeMounts to persist the /test folder
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.7.9
          ports:
            - containerPort: 80
          volumeMounts:
            - name: custom-index
              mountPath: /usr/share/nginx/html/index.html
              subPath: index.html
              readOnly: true
            - name: custom-storage
              mountPath: /test
              subPath: test
          env:
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: user
            - name: PASSWORD
              valueFrom:
                secretKeyRef:
                  name: my-secret
                  key: password
            - name: ENV_VAR_1
              value: "Hello from the environment"
            - name: ENV_VAR_2
              value: "Hello 2 from the environment"
          envFrom:
            - configMapRef:
                name: nginx-env-configmap
      volumes:
        - name: custom-index
          configMap:
            name: custom-index-config-map
        - name: custom-storage
          persistentVolumeClaim:
            claimName: nginx-pvc
- Apply changes
$ kubectl apply -f .
certificate.cert-manager.io/selfsigned-cert unchanged
ingress.networking.k8s.io/ingress-service unchanged
issuer.cert-manager.io/selfsigned-issuer unchanged
configmap/custom-index-config-map unchanged
deployment.apps/nginx-deployment configured
configmap/nginx-env-configmap unchanged
service/nginx unchanged
persistentvolumeclaim/nginx-pvc created
- Observe the changes and try it
$ kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nginx-pvc   Bound    pvc-fed8b032-53c3-4fe4-a269-88841d067171   1Gi        RWO            standard       2m12s
$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               STORAGECLASS   REASON   AGE
pvc-fed8b032-53c3-4fe4-a269-88841d067171   1Gi        RWO            Delete           Bound    default/nginx-pvc   standard                3m41s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-76c4c94b58-9bhpw 1/1 Running 0 62s
nginx-deployment-76c4c94b58-lt8fm 1/1 Running 0 64s
# Connect to one pod and add a file into /test
$ kubectl exec -it nginx-deployment-76c4c94b58-9bhpw -- bash
$ cd test/
$ touch hello_persisted.md
# exit the pod's shell
# Open a bash in the other running pod
$ kubectl exec -it nginx-deployment-76c4c94b58-lt8fm -- bash
root@nginx-deployment-76c4c94b58-lt8fm:/#
$ ls /test/
hello_persisted.md
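You can also run the same check without an interactive shell; it should list hello_persisted.md:
$ kubectl exec nginx-deployment-76c4c94b58-lt8fm -- ls /test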
- Delete the two pods
$ kubectl delete pod nginx-deployment-76c4c94b58-9bhpw
pod "nginx-deployment-76c4c94b58-9bhpw" deleted
$ kubectl delete pod nginx-deployment-76c4c94b58-lt8fm
pod "nginx-deployment-76c4c94b58-lt8fm" deleted
- New pods will be recreated automatically by the nginx-deployment, and the data will still be there
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-deployment-76c4c94b58-x5vt6 1/1 Running 0 57s
nginx-deployment-76c4c94b58-zpj8x 1/1 Running 0 64s
$ kubectl exec -it nginx-deployment-76c4c94b58-x5vt6 -- bash
root@nginx-deployment-76c4c94b58-x5vt6:/# ls /test/
hello_persisted.md
Be careful when cleaning up: the PV created here has a Delete reclaim policy, so deleting the PersistentVolumeClaim (for example with kubectl delete pvc nginx-pvc, or with a blanket kubectl delete -f .) also deletes the underlying volume, and you will lose the data. Read the K8S documentation on reclaim policies to know how to handle that.
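If you want the data to survive the claim's deletion, you can switch the volume's reclaim policy to Retain beforehand (replace the PV name with yours):
$ kubectl patch pv pvc-fed8b032-53c3-4fe4-a269-88841d067171 -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'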