Kubernetes on the cheap. Assembling the cluster
We have prepared four servers: a master and three worker nodes. On them we will assemble a Kubernetes cluster with encrypted traffic between the nodes, a persistent volumes controller, and the Helm package manager.
We will build and configure the cluster mostly by hand.
Joining the servers into a cluster
Initialize the master.
$ ansible -i terraform-hetzner-inventory master1 -u root -a "kubeadm init"
master1 | SUCCESS | rc=0 >>
[init] Using Kubernetes version: v1.10.0
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 195.201.22.111]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master1] and IPs [195.201.22.111]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 32.502314 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node master1 as master by adding a label and a taint
[markmaster] Master master1 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: c1kc7m.k65vhmbzu62x9bbi
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 195.201.22.111:6443 --token k6jc9FdpQT34Ow7m5dWT4V2 --discovery-token-ca-cert-hash sha256:hwojsusOW4ipGrnjjz9hJ9WT1dODUU3IHiPT4aaHyHevWYobbBv27vxndBumdKW5

[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
From the kubeadm init output, copy the kubeadm join ... command and run it on all worker nodes.
$ ansible -i terraform-hetzner-inventory node* -u root -a "kubeadm join 195.201.22.111:6443 --token k6jc9FdpQT34Ow7m5dWT4V2 --discovery-token-ca-cert-hash sha256:hwojsusOW4ipGrnjjz9hJ9WT1dODUU3IHiPT4aaHyHevWYobbBv27vxndBumdKW5"
node1 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "195.201.22.111:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://195.201.22.111:6443"
[discovery] Requesting info from "https://195.201.22.111:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "195.201.22.111:6443"
[discovery] Successfully established connection with API Server "195.201.22.111:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

node2 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "195.201.22.111:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://195.201.22.111:6443"
[discovery] Requesting info from "https://195.201.22.111:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "195.201.22.111:6443"
[discovery] Successfully established connection with API Server "195.201.22.111:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

node3 | SUCCESS | rc=0 >>
[preflight] Running pre-flight checks.
[discovery] Trying to connect to API Server "195.201.22.111:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://195.201.22.111:6443"
[discovery] Requesting info from "https://195.201.22.111:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "195.201.22.111:6443"
[discovery] Successfully established connection with API Server "195.201.22.111:6443"

This node has joined the cluster:
* Certificate signing request was sent to master and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
All worker nodes are now joined to the master.
Configure access to the cluster from the workstation. The ~/.kube/config file will be overwritten.
mkdir -p ~/.kube
scp root@195.201.22.111:/etc/kubernetes/admin.conf ~/.kube/config
If you already have access configured to some other Kubernetes cluster, copy admin.conf somewhere else and set up the contexts according to the official guide Configure Access to Multiple Clusters.
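For instance, here is a minimal sketch of merging the two files; the ~/.kube/hetzner.conf name is illustrative and assumes admin.conf was copied there instead of over ~/.kube/config, and the context name shown is the kubeadm default.

# Merge both configs and flatten them into a single file.
KUBECONFIG=~/.kube/config:~/.kube/hetzner.conf kubectl config view --flatten > ~/.kube/merged.conf
mv ~/.kube/merged.conf ~/.kube/config
# List the available contexts and switch to the new cluster.
kubectl config get-contexts
kubectl config use-context kubernetes-admin@kubernetes

If both configs happen to define contexts with the same name, rename one of them beforehand with kubectl config rename-context.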
Check that all servers are visible in the cluster.
$ kubectl get no
NAME      STATUS     ROLES     AGE       VERSION
master1   NotReady   master    41m       v1.10.0
node1     NotReady   <none>    13m       v1.10.0
node2     NotReady   <none>    13m       v1.10.0
node3     NotReady   <none>    13m       v1.10.0
The nodes stay in the NotReady state until the network is configured.
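If you want to confirm that the missing network plugin is the cause, look at the node conditions; with no CNI plugin installed the Ready condition usually carries a message along the lines of "cni config uninitialized" (the exact wording depends on the Kubernetes version).

# Show the conditions block for one of the nodes.
$ kubectl describe node node1 | grep -A6 Conditions: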
Configuring the network
Since the cluster lives entirely on a public network, we want to encrypt the traffic between the nodes. The only out-of-the-box encryption support I found was in Rancher and Weave Net. Rancher is too heavy a layer on top of Kubernetes for the scope of this article and deserves a separate write-up. Weave Net is just a CNI add-on with a very simple encryption setup, so let's go with it.
Install the Weave Net add-on following the official instructions.
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount "weave-net" created
clusterrole.rbac.authorization.k8s.io "weave-net" created
clusterrolebinding.rbac.authorization.k8s.io "weave-net" created
role.rbac.authorization.k8s.io "weave-net" created
rolebinding.rbac.authorization.k8s.io "weave-net" created
daemonset.extensions "weave-net" created
Create a password that will be used to encrypt the traffic.
$ kubectl -n kube-system create secret generic weave-encryption --from-literal=weave-password="`cat /dev/urandom | base64 | head -c32`"
secret "weave-encryption" created
Enable encryption.
$ kubectl -n kube-system patch ds weave-net --patch "$(cat <<EOF
spec:
  template:
    spec:
      containers:
        - name: weave
          env:
            - name: WEAVE_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: weave-password
                  name: weave-encryption
EOF
)
"
daemonset.extensions "weave-net" patched
Check the result.
$ kubectl -n kube-system exec `kubectl -n kube-system get po -l name=weave-net -o jsonpath="{.items[0].metadata.name}"` -c weave -- /home/weave/weave --local status connections
<- 195.201.22.121:37249  established encrypted fastdp b2:cc:48:28:a4:bc(node1) encrypted=true mtu=1376
<- 195.201.22.111:52318  established encrypted fastdp 8a:34:08:36:2c:ff(master1) encrypted=true mtu=1376
<- 195.201.22.122:44641  established encrypted fastdp ce:fc:d7:a2:7a:54(node2) encrypted=true mtu=1376
-> 195.201.22.123:6783   failed      cannot connect to ourself, retry: never

$ kubectl get no
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    41m       v1.10.1
node1     Ready     <none>    40m       v1.10.1
node2     Ready     <none>    40m       v1.10.1
node3     Ready     <none>    40m       v1.10.1
We can see that all nodes are in the Ready state and the connections between them are encrypted.
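As an extra sanity check (optional; the pod name is illustrative), you can start a throwaway pod and make sure it can resolve the in-cluster DNS service through the new overlay network.

# Run a temporary busybox pod; it is deleted automatically when the shell exits.
$ kubectl run net-test --image=busybox --restart=Never --rm -it -- sh
/ # nslookup kubernetes.default
/ # exit

Some newer busybox images ship a flaky nslookup, so any small image with DNS tools will do for this check.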
Persistent Volumes
As the persistent volumes controller I picked Longhorn from Rancher, simply because it looks cheap and cheerful. I have doubts it will hold up under real load, but it is certainly good enough for trying out a chart that needs a persistent volume. To be honest, I haven't really dug into it yet; moreover, I remove the UI until I figure out how to properly restrict access to it.
So, let's install Longhorn.
$ kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/v0.2/deploy/longhorn.yaml
namespace "longhorn-system" created
serviceaccount "longhorn-service-account" created
clusterrole.rbac.authorization.k8s.io "longhorn-role" created
clusterrolebinding.rbac.authorization.k8s.io "longhorn-bind" created
customresourcedefinition.apiextensions.k8s.io "engines.longhorn.rancher.io" created
customresourcedefinition.apiextensions.k8s.io "replicas.longhorn.rancher.io" created
customresourcedefinition.apiextensions.k8s.io "settings.longhorn.rancher.io" created
customresourcedefinition.apiextensions.k8s.io "volumes.longhorn.rancher.io" created
daemonset.extensions "longhorn-manager" created
service "longhorn-backend" created
deployment.extensions "longhorn-ui" created
service "longhorn-frontend" created
deployment.extensions "longhorn-flexvolume-driver-deployer" created
Create a Storage Class. Configure data replication in the hope that it works as expected.
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: rancher.io/longhorn
parameters:
  numberOfReplicas: "2"
  staleReplicaTimeout: "30"
  fromBackup: ""
EOF
storageclass.storage.k8s.io "longhorn" created
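To check that dynamic provisioning actually works, you can create a throwaway PersistentVolumeClaim against the new storage class (the claim name is illustrative).

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-test
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should go from Pending to Bound once Longhorn provisions the volume.
$ kubectl get pvc longhorn-test
# Remove the test claim afterwards.
$ kubectl delete pvc longhorn-test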
Remove the UI.
$ kubectl -n longhorn-system delete deploy/longhorn-ui svc/longhorn-frontend
deployment.extensions "longhorn-ui" deleted
service "longhorn-frontend" deleted
The Helm package manager
Install Helm on the workstation following the official instructions. To install it into ~/bin/, as in my case, you can use the command
wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.2-linux-amd64.tar.gz | tar xzO linux-amd64/helm > ~/bin/helm && chmod +x ~/bin/helm
Create a tiller service account that will be used to manage Helm packages.
$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
EOF
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
Prepare the cluster to work with packages:
$ helm init --service-account tiller
It takes a little while for the tiller pod to start; after that, you can verify the installation with
$ helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
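As a quick smoke test (optional; the chart name is just an example), you can make sure the client can reach the chart repository and that tiller responds to install requests without actually deploying anything.

# Refresh the local chart index and search for a chart.
$ helm repo update
$ helm search wordpress
# --dry-run renders the chart through tiller but creates nothing in the cluster.
$ helm install --dry-run --debug stable/wordpress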
We will need Helm at the next stage, when we prepare the cluster for deploying web applications.