Score: 0

Kubeadm init fails, kubelet fails to start


I'm trying to set up a Kubernetes cluster on a set of Raspberry Pi 4s, and I'm running into an issue with the kubelet failing when running the kubeadm init command:

I0205 12:29:52.930582    5348 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0205 12:29:52.930638    5348 waitcontrolplane.go:91] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

    Unfortunately, an error has occurred:
        timed out waiting for the condition

    This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

    If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

    Additionally, a control plane component may have crashed or exited when started by the container runtime.
    To troubleshoot, list all containers using your preferred container runtimes CLI.

    Here is one example how you may list all Kubernetes containers running in docker:
        - 'docker ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'docker logs CONTAINERID'

couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:118
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:255
runtime.goexit
    /usr/local/go/src/runtime/asm_arm.s:838
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/app/kubeadm.go:50
main.main
    _output/dockerized/go/src/k8s.io/kubernetes/cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:255
runtime.goexit
    /usr/local/go/src/runtime/asm_arm.s:838

journalctl -xeu kubelet returns the following:

 kubelet[6145]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
 kubelet[6145]: Flag --network-plugin has been deprecated, will be removed along with dockershim.
 kubelet[6145]: I0205 12:30:42.432739    6145 server.go:446] "Kubelet version" kubeletVersion="v1.23.3"
 kubelet[6145]: I0205 12:30:42.434017    6145 server.go:874] "Client rotation is on, will bootstrap in background"
 kubelet[6145]: I0205 12:30:42.439452    6145 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
 kubelet[6145]: I0205 12:30:42.442739    6145 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
 kubelet[6145]: W0205 12:30:42.442919    6145 manager.go:159] Cannot detect current cgroup on cgroup v2
 kubelet[6145]: W0205 12:30:42.661741    6145 sysinfo.go:203] Nodes topology is not available, providing CPU topology
 kubelet[6145]: W0205 12:30:42.663764    6145 machine.go:65] Cannot read vendor id correctly, set empty.
 kubelet[6145]: I0205 12:30:42.666660    6145 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
 kubelet[6145]: I0205 12:30:42.667641    6145 container_manager_linux.go:281] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
 kubelet[6145]: I0205 12:30:42.667940    6145 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroups>
 kubelet[6145]: I0205 12:30:42.668146    6145 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
 kubelet[6145]: I0205 12:30:42.668188    6145 container_manager_linux.go:321] "Creating device plugin manager" devicePluginEnabled=true
 kubelet[6145]: I0205 12:30:42.668256    6145 state_mem.go:36] "Initialized new in-memory state store"
 kubelet[6145]: I0205 12:30:42.668448    6145 kubelet.go:313] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
 kubelet[6145]: I0205 12:30:42.668635    6145 client.go:80] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
 kubelet[6145]: I0205 12:30:42.668699    6145 client.go:99] "Start docker client with request timeout" timeout="2m0s"
 kubelet[6145]: I0205 12:30:42.705426    6145 docker_service.go:571] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
 kubelet[6145]: I0205 12:30:42.705510    6145 docker_service.go:243] "Hairpin mode is set" hairpinMode=hairpin-veth
 kubelet[6145]: I0205 12:30:42.705832    6145 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
 kubelet[6145]: I0205 12:30:42.712758    6145 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
 kubelet[6145]: I0205 12:30:42.712950    6145 docker_service.go:258] "Docker cri networking managed by the network plugin" networkPluginName="cni"
 kubelet[6145]: I0205 12:30:42.713256    6145 cni.go:240] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
 kubelet[6145]: I0205 12:30:42.750856    6145 docker_service.go:264] "Docker Info" dockerInfo=&{ID:GLFU:JW22:MA7Q:YPYL:UDSW:E4EC:G2M3:QSBM:G7YC:S2YF:YT6I:J34B Containers:0 Containe>
 kubelet[6145]: I0205 12:30:42.750943    6145 docker_service.go:279] "Setting cgroupDriver" cgroupDriver="systemd"
 kubelet[6145]: I0205 12:30:42.824353    6145 kubelet.go:416] "Attempting to sync node with API server"
 kubelet[6145]: I0205 12:30:42.824406    6145 kubelet.go:278] "Adding static pod path" path="/etc/kubernetes/manifests"
 kubelet[6145]: I0205 12:30:42.824521    6145 kubelet.go:289] "Adding apiserver pod source"
 kubelet[6145]: I0205 12:30:42.824593    6145 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
 kubelet[6145]: W0205 12:30:42.827626    6145 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.1.53:6443/api/v1/nodes?fiel>
 kubelet[6145]: E0205 12:30:42.828345    6145 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1>
 kubelet[6145]: W0205 12:30:42.829064    6145 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.1.53:6443/api/v1/service>
 kubelet[6145]: E0205 12:30:42.829173    6145 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192>
 kubelet[6145]: I0205 12:30:42.866086    6145 kuberuntime_manager.go:248] "Container runtime initialized" containerRuntime="docker" version="20.10.12" apiVersion="1.41.0"
 kubelet[6145]: I0205 12:30:42.867183    6145 server.go:1231] "Started kubelet"
 kubelet[6145]: I0205 12:30:42.867659    6145 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
 kubelet[6145]: E0205 12:30:42.869415    6145 kubelet.go:1351] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs in>
 kubelet[6145]: E0205 12:30:42.869315    6145 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pi-kube-m>
 kubelet[6145]: I0205 12:30:42.869879    6145 server.go:410] "Adding debug handlers to kubelet server"
 kubelet[6145]: I0205 12:30:42.871657    6145 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
 kubelet[6145]: I0205 12:30:42.873710    6145 volume_manager.go:291] "Starting Kubelet Volume Manager"
 kubelet[6145]: I0205 12:30:42.875471    6145 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
 kubelet[6145]: E0205 12:30:42.880669    6145 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://192.168.1.53:6443/apis/coordination.k8s.io/>
 kubelet[6145]: W0205 12:30:42.887766    6145 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.1.53:6443/apis/storage>
 kubelet[6145]: E0205 12:30:42.888790    6145 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https:/>
 kubelet[6145]: E0205 12:30:42.934490    6145 kubelet.go:2347] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
 kubelet[6145]: E0205 12:30:42.974631    6145 kubelet.go:2422] "Error getting node" err="node \"pi-kube-master\" not found"
 kubelet[6145]: I0205 12:30:42.980188    6145 kubelet_network_linux.go:57] "Initialized protocol iptables rules." protocol=IPv4
 kubelet[6145]: I0205 12:30:43.046659    6145 kubelet_node_status.go:70] "Attempting to register node" node="pi-kube-master"
 kubelet[6145]: E0205 12:30:43.048340    6145 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://192.168.1.53:6443/api/v1/nodes\": dial tcp 19>
 kubelet[6145]: panic: unaligned 64-bit atomic operation
 kubelet[6145]: goroutine 380 [running]:
 kubelet[6145]: runtime/internal/atomic.panicUnaligned()
 kubelet[6145]:         /usr/local/go/src/runtime/internal/atomic/unaligned.go:8 +0x24
 kubelet[6145]: runtime/internal/atomic.Load64(0x857016c)
 kubelet[6145]:         /usr/local/go/src/runtime/internal/atomic/atomic_arm.s:286 +0x14
 kubelet[6145]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).updateStats(0x8570000)
 kubelet[6145]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:676 +0x438
 kubelet[6145]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeepingTick(0x8570000, 0x8535500, 0x5f5e100)
 kubelet[6145]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:587 +0x104
 kubelet[6145]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0x8570000)
 kubelet[6145]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:535 +0x3bc
 kubelet[6145]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
 kubelet[6145]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:122 +0x2c
 systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT

I'm seeing the connection refused errors on port 6443, but I don't see any problem with iptables or any service listening on that port. I'm also not seeing any containers get started.
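
For reference, these are roughly the checks I used to reach that conclusion (commands only; none of this is in the kubeadm output above, and the results are summarized in the comments):

    # Is anything listening on the API server port? (nothing was)
    sudo ss -tlnp | grep 6443
    # Current kubelet state and the reason for its last exit
    systemctl status kubelet
    # Any firewall rules that could be blocking 6443
    sudo iptables -L -n
    # Were any control-plane containers created by Docker? (none were)
    sudo docker ps -a | grep kube | grep -v pause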

p10l: Are you running docker?

Score: 1

Based on your post, the problem appears to be:

kubelet[6145]: E0205 12:30:43.048340 6145 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://192.168.1.53:6443/api/v1/nodes\": dial tcp 19>
kubelet[6145]: panic: unaligned 64-bit atomic operation

Please refer to the following issue for the specific problem:

https://github.com/kubernetes/kubernetes/issues/106977 According to the discussion there, this will be fixed once #107225 is merged.
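
To confirm you are hitting that case: the panic comes from a 64-bit atomic load in cAdvisor, which can fault on 32-bit ARM builds of the kubelet (note the atomic_arm.s frame in the stack trace). Checking the machine and package architecture tells you whether you are on a 32-bit or 64-bit install; these commands are just a suggested check, not part of the linked issue:

    uname -m                    # armv7l = 32-bit kernel, aarch64 = 64-bit kernel
    dpkg --print-architecture   # armhf = 32-bit userland, arm64 = 64-bit userland (Debian/Raspberry Pi OS)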

Score: 0

An excellent solution is described at this link: https://jaedsada.me/blogs/kubernetes/k8s-init-fail

cat <<EOF | sudo tee /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm reset
sudo kubeadm init
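
After restarting Docker, it may be worth confirming that the cgroup driver actually switched to systemd before re-running kubeadm init; this verification step is my addition, not part of the linked post:

    docker info --format '{{.CgroupDriver}}'   # should print: systemd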
