Posted by nerdcoding on May 8, 2018
This guide describes how to install Linux Kernel-based Virtual Machine (KVM) on a Debian 9 host, create some virtual machines and install a Kubernetes cluster on them. In the end, we have a fully functional Kubernetes cluster running on bare-metal hardware at home.
KVM is a virtualization infrastructure provided by the Linux kernel to run arbitrary virtual machines (VMs) on a Linux host. Here we install KVM on a basic Debian 9 host and create four VMs, also based on Debian 9. Later we will install Kubernetes on these four VMs.
First install all packages required to run KVM:
$ apt-get install -y net-tools qemu-kvm qemu-system libvirt0 virt-manager bridge-utils libosinfo-bin
And reboot the machine:
$ shutdown -r now
Check whether the kvm and kvm_intel (or kvm_amd) kernel modules are loaded into the kernel. If not, load them with modprobe:
$ lsmod | grep kvm
$ modprobe <module-name>
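On an Intel machine the output might look roughly like this (module sizes and use counts will differ):
$ lsmod | grep kvm
kvm_intel             <size>  0
kvm                   <size>  1 kvm_intel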
Finally, enable and start the libvirtd daemon:
$ systemctl enable --now libvirtd
By default, a VM (the guest operating system) uses so-called usermode networking: the VM is able to access other hosts on the network and the internet, but the other direction, from a network host to the VM, is not possible.
To make a Kubernetes cluster usable, it has to be accessible to its clients. For this purpose, a bridged network is used, which connects the virtual interface of a VM to the physical interface of the host and makes the VM appear as a normal host on the network.
In /etc/network/interfaces, the physical interface of the host has to be disabled and a bridged network interface created. On my machine, the name of the host's physical interface is enp2s0; change this if your machine uses another interface name.
# Disable/comment the primary network interface
#iface enp2s0 inet dhcp
#allow-hotplug enp2s0

# Create a bridged interface
auto br0
iface br0 inet dhcp     # Fetch IP from DHCP server
  bridge_ports enp2s0   # Bridge with host's 'enp2s0' interface
  bridge_stp off        # Only needed when multiple bridges work together
  bridge_fd 0           # Turn off forwarding delay
  bridge_maxwait 0      # Do not wait for ethernet ports to come up
Finally we overwrite three parameters of the kernel's bridge module and enable IP forwarding in /etc/sysctl.conf:
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-arptables=0
net.ipv4.ip_forward=1
And reboot the machine again :)
$ shutdown -r now
After the reboot, ifconfig should show the new bridge network interface br0 with an assigned IP from the DHCP server. The enp2s0 interface is also present but has no IP address. Note that the MAC addresses of br0 and enp2s0 are identical:
$ ifconfig
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.1.5  netmask 255.255.255.0  broadcast 192.168.1.255
        inet6 580e::e953:6afd:f4d:2de7  prefixlen 64  scopeid 0x20<link>
        ether 00:11:22:33:44:55  txqueuelen 1000  (Ethernet)
        RX packets 539  bytes 55590 (54.2 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 269  bytes 34067 (33.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

enp2s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 00:11:22:33:44:55  txqueuelen 1000  (Ethernet)
        RX packets 539  bytes 63136 (61.6 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 269  bytes 34067 (33.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Now we can create VMs on our host system. For this we need to provide a --os-variant parameter, which defines the type of the guest operating system. With osinfo-query os we get a list of all supported variants. In our case, we will install Debian 9 on the VMs, so debian9 is the correct os-variant.
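If the list is long, you can filter it; for example, to show only the Debian variants:
$ osinfo-query os | grep -i debian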
Create a new VM with this command:
$ virt-install \
    --virt-type kvm \
    --name <virtual-machine-name> \
    --vcpus 1 \
    --memory 1024 \
    --disk size=15 \
    --location http://cdn-fastly.deb.debian.org/debian/dists/stretch/main/installer-amd64/ \
    --os-variant debian9 \
    --graphics none \
    --network bridge:br0,model=virtio \
    --extra-args "console=ttyS0" \
    -v
A normal interactive Debian installation should be triggered. Make sure you install Debian without a graphical desktop environment but with the SSH server. Also, create a normal user account during installation.
virsh list --all
Shows all existing VMs.
virsh start | shutdown | reboot <vm-name>
Starts, stops or reboots a VM.
virsh destroy <vm-name>
Ungraceful shutdown of a VM.
virsh undefine <vm-name>
Deletes a VM.
When the newly created VM is running, we can use nmap to scan our network and find out which IP address the DHCP server assigned to the VM.
$ nmap -sP 192.168.1.0/24
Nmap scan report for 192.168.1.134
Host is up (0.00019s latency).
MAC Address: 00:11:22:33:44:55 (QEMU virtual NIC)
Now we can connect to our VM via SSH with the user account created during installation: ssh <account-name>@192.168.1.134. If you have access to the DHCP server configuration, it would be wise to assign a static IP to the VM's MAC address.
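As a sketch, if your DHCP server happens to be ISC dhcpd, a static lease could look roughly like this (MAC and IP taken from the example scan above; the host label kube-vm is arbitrary, and router firmwares or dnsmasq use a different syntax):
# /etc/dhcp/dhcpd.conf on the DHCP server
host kube-vm {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 192.168.1.134;
}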
To install Kubernetes we create four virtual machines with the previously used virt-install and name them:
master
node1
node2
node3
After each VM is installed, start them all up and log in to each via SSH.
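A small convenience sketch, assuming the four VM names above, to start them all in one go:
$ for vm in master node1 node2 node3; do virsh start "$vm"; done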
On each VM (the master and all three nodes) we have to install Docker:
$ apt-get update
$ apt-get -y install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg2 \
    software-properties-common
$ curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
$ apt-key fingerprint 0EBFCD88
$ add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/debian \
    $(lsb_release -cs) \
    stable"
$ apt-get update
$ apt-get -y install docker-ce
After Docker is installed on all VMs, add each user account to the docker group (log out and back in afterwards so the new group membership takes effect):
$ groupadd docker
$ usermod -aG docker <account-name>
Since Kubernetes 1.8 it is mandatory to disable swap. In /etc/fstab, delete the swap line and then run the command: swapoff -a.
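If you prefer commenting the swap entry out instead of deleting it, something like this should do (the sed pattern is a loose match, so double-check /etc/fstab afterwards):
$ swapoff -a
$ sed -i '/\sswap\s/ s/^/#/' /etc/fstab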
kubeadm
kubeadm is a toolbox that helps to create a reasonable Kubernetes cluster. First, we need to install all required packages:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
    | tee -a /etc/apt/sources.list.d/kubernetes.list
$ apt-get update
$ apt-get install -y kubelet kubeadm kubectl kubernetes-cni
The following initialization only needs to be done on the master node:
$ kubeadm init --pod-network-cidr 10.244.0.0/16
This may take a while. When the initialization has finished successfully, note down the join command. We will need it later to join our nodes to the cluster:
kubeadm join 192.168.1.134:6443 --token xxxxxx.xxxxxxxxxxxxxxxx --discovery-token-ca-cert-hash sha256:<hash>
To start using our cluster:
$ mkdir -p <account-user-home>/.kube
$ cp -i /etc/kubernetes/admin.conf <account-user-home>/.kube/config
$ chown -R <account-user-name>:<account-user-group> <account-user-home>/.kube/
Kubernetes does not come with a networking provider for the pod network by default, so we need to install one. There are many possible networking providers; here I will use flannel. Run the following as a regular user:
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Finally, log in to each node and execute the noted kubeadm join command. On the master we can check whether all nodes have joined and are running:
$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master    Ready     master    18m       v1.10.2
node1     Ready     <none>    2m        v1.10.2
node2     Ready     <none>    1m        v1.10.2
node3     Ready     <none>    1m        v1.10.2
kubeadm reset
Resets a node by reverting the changes made by kubeadm init or kubeadm join. Can be used on the master or a worker node.
kubeadm init
Initializes the master; afterwards a new <token> is shown. With this token, a worker node can join the cluster.
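If the noted join command gets lost, recent kubeadm versions can print a fresh one (including a new token) on the master:
$ kubeadm token create --print-join-command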
kubectl create -f <file>.yaml
Creates a resource defined in a YAML file.
kubectl delete all --all
Deletes all resources in the cluster.
With Node.js we create a very simple web application, dockerize it, and deploy it to our Kubernetes cluster.
This application waits for incoming HTTP requests on port 8080 and sends the current hostname as the response.
var http = require('http');
var os = require("os");

var server = http.createServer(function(req, res) {
    res.writeHead(200);
    res.end('Hello from: ' + os.hostname());
});
server.listen(8080);
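Save this as app.js; the Dockerfile below expects that file name. If Node.js is installed locally, a quick smoke test could look like this:
$ node app.js &
$ curl 127.0.0.1:8080
Hello from: <your-hostname>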
First create a Dockerfile which describes how the Docker image is built:
FROM node:10.0
ADD app.js /app.js
CMD node app.js
Create a new Docker image:
$ docker build -t sample-node-app .
Tag this image:
$ docker tag sample-node-app <docker-hub-username>/sample-node-app
Login at Docker Hub:
$ docker login
Push the image:
$ docker push <docker-hub-username>/sample-node-app
To test our application, load the image from Docker Hub and run it as a container:
$ docker run -p 8080:8080 -d <docker-hub-username>/sample-node-app
With curl 127.0.0.1:8080 you should see the hostname of the Docker container as the response.
First, we create a Deployment, which describes which application should be deployed and over how many Pods it should be distributed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-node-app-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sample-node-app
  template:
    metadata:
      name: sample-node-app
      labels:
        app: sample-node-app
        env: test
    spec:
      containers:
      - name: sample-node-app
        image: <docker-hub-username>/sample-node-app
        ports:
        - containerPort: 8080
Save this as sample-node-app-deployment.yml; on our master we can create the deployment with:
$ kubectl apply -f sample-node-app-deployment.yml
Check the state of the Pods with:
$ kubectl get pods
The container creation may take a while because the whole Docker image needs to be downloaded.
Detailed description of a pod:
$ kubectl describe pod <pod-name>
And show the logs of a pod with:
$ kubectl logs <pod-name>
After all pods are running, we create a Service to access our application.
apiVersion: v1
kind: Service
metadata:
  name: sample-node-app-service
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: sample-node-app
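As with the deployment, this Service definition still has to be applied; assuming you saved it as sample-node-app-service.yml (the file name is your choice):
$ kubectl apply -f sample-node-app-service.yml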
Check the created service:
$ kubectl get svc
NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
sample-node-app-service   LoadBalancer   10.109.118.20   <pending>     8080:30362/TCP   17s
The application is now accessible under the cluster IP with curl 10.109.118.20:8080.
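Note that the EXTERNAL-IP stays <pending> on a bare-metal cluster, since there is no cloud load balancer to provision it, and the cluster IP is only routable from the cluster nodes. The service is also exposed on every node via the NodePort shown above (30362 in this example output; yours will differ), so from another machine on the network something like this should work:
$ curl 192.168.1.134:30362
Hello from: <pod-name>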
Finally, delete the created resources with:
$ kubectl delete svc <service-name>
$ kubectl delete deployment <deployment-name>