How to use Kata Containers and CRI (containerd plugin) with Kubernetes

https://github.com/kata-containers/documentation/blob/master/how-to/how-to-use-k8s-with-cri-containerd-and-kata.md

Ways to integrate Kata Containers with Kubernetes:

  • the cri-containerd plugin + containerd
  • CRI-O
Installing cri-containerd
wget https://github.com/containerd/containerd/releases/download/v1.1.0/containerd-1.1.0.linux-amd64.tar.gz

sudo tar -C / -xzf  containerd-1.1.0.linux-amd64.tar.gz

sudo systemctl start containerd

sudo systemctl status containerd

containerd config default | sudo tee /etc/containerd/config.toml  # generate the default config at /etc/containerd/config.toml
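To actually route pods to Kata through this containerd, the linked guide adds a runtime entry to /etc/containerd/config.toml. A minimal sketch for the containerd 1.1 CRI plugin, assuming kata-runtime is installed at /usr/bin/kata-runtime (restart containerd after editing):

```toml
# /etc/containerd/config.toml (fragment)
# Send untrusted workloads to the Kata runtime; the path assumes a
# standard kata-runtime installation.
[plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = "io.containerd.runtime.v1.linux"
  runtime_engine = "/usr/bin/kata-runtime"
```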


Add a 0-containerd.conf drop-in so kubelet uses containerd (this step initially ran into problems); after creating it, run sudo systemctl daemon-reload and restart kubelet:

$ sudo mkdir -p  /etc/systemd/system/kubelet.service.d/
$ cat << EOF | sudo tee  /etc/systemd/system/kubelet.service.d/0-containerd.conf
[Service]                                                 
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
EOF

Configuring the VM network

In /etc/hostname, set the hostname of the master node to master, and of the worker nodes to node1 and node2 respectively.

Edit /etc/netplan/50-cloud-init.yaml on every machine to replace the DHCP address with a static IP, then apply it with sudo netplan apply (or reboot):

network:
    ethernets:
        ens33:
            addresses: [192.168.32.132/24]
            dhcp4: false
            gateway4: 192.168.32.2
            nameservers:
                addresses: [192.168.32.2]
            optional: true
    version: 2

Edit /etc/hosts:

192.168.32.132 master
192.168.32.133 node1
192.168.32.134 node2
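The three mappings above must be present on every machine. A small helper (hypothetical, not from the original post) keeps the edit idempotent; it runs against a scratch file here, so point HOSTS_FILE at /etc/hosts as root for real use:

```shell
#!/bin/sh
# Demo on a temp file; use HOSTS_FILE=/etc/hosts (as root) on a real node.
HOSTS_FILE=$(mktemp)

# add_host IP NAME: append the mapping only if NAME is not already present.
add_host() {
  grep -qw "$2" "$HOSTS_FILE" || printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 192.168.32.132 master
add_host 192.168.32.133 node1
add_host 192.168.32.134 node2
add_host 192.168.32.133 node1   # re-running is a no-op

cat "$HOSTS_FILE"
```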

After a reboot, the machines should be able to ping each other by name, which confirms the configuration:

ubuntu@node2:~$ ping master
PING master (192.168.32.132) 56(84) bytes of data.
64 bytes from master (192.168.32.132): icmp_seq=1 ttl=64 time=0.837 ms
64 bytes from master (192.168.32.132): icmp_seq=2 ttl=64 time=0.358 ms

Configuring Kubernetes on the master node, pulling images with kubeadm

Run the initialization with kubeadm init:

# Change the IP address to the master node's IP and set the pod network CIDR
kubeadm init \
--apiserver-advertise-address=192.168.32.132 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16

As run on the master:

root@master:/home/ubuntu# kubeadm init \
> --apiserver-advertise-address=192.168.32.132 \
> --image-repository registry.aliyuncs.com/google_containers \
> --pod-network-cidr=10.244.0.0/16

Changes made

1. Start the cluster using kubeadm

$ sudo kubeadm init --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16
 
This was changed to:

kubeadm init --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers
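For clarity, the full set of flags used in this walkthrough can be collected into one command string; the values below are the ones from this post (hypothetical variable names), so adjust them per node:

```shell
#!/bin/sh
# Values taken from this walkthrough; adjust for your own nodes.
CRI_SOCKET=/run/containerd/containerd.sock
APISERVER_IP=192.168.32.132
IMAGE_REPO=registry.aliyuncs.com/google_containers
POD_CIDR=10.244.0.0/16

# Backslash-newline inside double quotes folds this into a single line.
INIT_CMD="kubeadm init --cri-socket ${CRI_SOCKET} \
--apiserver-advertise-address=${APISERVER_IP} \
--image-repository ${IMAGE_REPO} \
--pod-network-cidr=${POD_CIDR}"

echo "$INIT_CMD"   # run it with: sudo $INIT_CMD
```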


The first kubeadm init attempt timed out waiting for the control plane:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

 journalctl -xeu kubelet

kubelet.go:2183] node "ubuntu" not found
cat /etc/kubernetes/kubelet.conf   # excerpt
clusters:
- cluster:
    server: https://10.10.16.82:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:ubuntu
  name: system:node:ubuntu@kubernetes
current-context: system:node:ubuntu@kubernetes

The cause was name resolution: /etc/hosts mapped the hostname to the loopback address:

root@ubuntu:/etc/apt# ping ubuntu
PING ubuntu (127.0.1.1) 56(84) bytes of data.
64 bytes from ubuntu (127.0.1.1): icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from ubuntu (127.0.1.1): icmp_seq=2 ttl=64 time=0.025 ms
64 bytes from ubuntu (127.0.1.1): icmp_seq=3 ttl=64 time=0.027 ms

Fixing /etc/hosts so that the hostname resolves to the node's real IP resolves the "node not found" error.
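A minimal /etc/hosts fix, assuming the node's real address is 10.10.16.82 (the API server address shown in kubelet.conf above):

```
127.0.0.1 localhost
10.10.16.82 ubuntu
```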

On renaming worker nodes: the Name column shown by kubectl get node on the master comes from the hostname-override parameter in the kubelet and proxy config files on the client. Change those two parameters to the desired name and delete kubelet.kubeconfig (this file is generated automatically on the client once the master authenticates it; if it is not deleted, the node reports forbidden), then restart the two services. On the master, run
kubectl certificate approve <name> and the new name appears.

root@ubuntu:/etc/apt# kubectl get node
The connection to the server localhost:8080 was refused - did you specify the right host or port?
root@ubuntu:/etc/apt# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
root@ubuntu:/etc/apt# source ~/.bash_profile
root@ubuntu:/etc/apt# kubectl get node
The connection to the server 10.10.16.82:6443 was refused - did you specify the right host or port?
root@ubuntu:/etc/apt# kubeadm reset
root@ubuntu:/etc/apt# kubeadm init --cri-socket /run/containerd/containerd.sock --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers  --apiserver-advertise-address=10.10.16.82
W1013 17:45:43.917183   50850 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.2
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml]: /etc/kubernetes/manifests/kube-apiserver.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml]: /etc/kubernetes/manifests/kube-controller-manager.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml]: /etc/kubernetes/manifests/kube-scheduler.yaml already exists
        [ERROR FileAvailable--etc-kubernetes-manifests-etcd.yaml]: /etc/kubernetes/manifests/etcd.yaml already exists
        [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
root@ubuntu:/etc/apt# kubeadm reset
[reset] Reading configuration from the cluster..
Example kubeadm.yaml:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/

apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: "10.20.79.10"
networking:
  podSubnet: "10.244.0.0/16"
kubernetesVersion: "v1.10.3"
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"
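Note that apiVersion kubeadm.k8s.io/v1alpha1 with kind MasterConfiguration is long obsolete. For the v1.19.2 cluster initialized above, the equivalent would use the v1beta2 schema, roughly as follows (a sketch reusing this post's values; verify the fields against your kubeadm version):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.10.16.82"
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.2
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: "10.244.0.0/16"
```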

The initialization command is then:

kubeadm init --config kubeadm.yaml
Original article: https://www.cnblogs.com/dream397/p/13809824.html