Warning FailedScheduling 56s (x7 over 9m45s) default-scheduler 0/6 nodes are available: 6 node(s) didn't match node selector. #1

Open · szbjb opened this issue Oct 8, 2019 · 3 comments

szbjb commented Oct 8, 2019

[root@123-40 clickhouse]# kubectl  get pods| grep click
clickhouse-0                                                   0/1     Pending   0          9m24s
clickhouse-1                                                   0/1     Pending   0          9m24s
clickhouse-2                                                   0/1     Pending   0          9m24s
clickhouse-replica-0                                           0/1     Pending   0          9m24s
clickhouse-replica-1                                           0/1     Pending   0          9m24s
clickhouse-replica-2                                           0/1     Pending   0          9m24s
clickhouse-tabix-74c69f9c5f-8j2g5                              0/1     Pending   0          9m24s
[root@123-40 clickhouse]# kubectl  get pvc| grep click
clickhouse-data-clickhouse-0                   Bound    pvc-94731be4-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-data-clickhouse-1                   Bound    pvc-94743519-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-data-clickhouse-2                   Bound    pvc-94854cae-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-0                   Bound    pvc-94736404-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-1                   Bound    pvc-947496dd-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-logs-clickhouse-2                   Bound    pvc-949490f8-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-0   Bound    pvc-946f1d09-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-1   Bound    pvc-9470116d-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-data-clickhouse-replica-2   Bound    pvc-94710fb8-e98b-11e9-a081-000c2964f032   500Gi      RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-0   Bound    pvc-946f6d4b-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-1   Bound    pvc-947066df-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
clickhouse-replica-logs-clickhouse-replica-2   Bound    pvc-947161ef-e98b-11e9-a081-000c2964f032   50Gi       RWO            gluster-heketi   40m
[root@123-40 clickhouse]# kubectl  describe  pods clickhouse-0
Name:               clickhouse-0
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               <none>
Labels:             app.kubernetes.io/instance=clickhouse
                    app.kubernetes.io/name=clickhouse
                    controller-revision-hash=clickhouse-85cc8dd68
                    statefulset.kubernetes.io/pod-name=clickhouse-0
Annotations:        <none>
Status:             Pending
IP:
Controlled By:      StatefulSet/clickhouse
Init Containers:
  init:
    Image:      busybox:1.31.0
    Port:       <none>
    Host Port:  <none>
    Args:
      /bin/sh
      -c
      mkdir -p /etc/clickhouse-server/metrica.d

    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cc9t9 (ro)
Containers:
  clickhouse:
    Image:        yandex/clickhouse-server:19.14
    Ports:        8123/TCP, 9000/TCP, 9009/TCP
    Host Ports:   0/TCP, 0/TCP, 0/TCP
    Liveness:     tcp-socket :9000 delay=30s timeout=5s period=30s #success=1 #failure=3
    Readiness:    tcp-socket :9000 delay=30s timeout=5s period=30s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/clickhouse-server/config.d from clickhouse-config (rw)
      /etc/clickhouse-server/metrica.d from clickhouse-metrica (rw)
      /etc/clickhouse-server/users.d from clickhouse-users (rw)
      /var/lib/clickhouse from clickhouse-data (rw)
      /var/log/clickhouse-server from clickhouse-logs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-cc9t9 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  clickhouse-logs:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  clickhouse-logs-clickhouse-0
    ReadOnly:   false
  clickhouse-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  clickhouse-data-clickhouse-0
    ReadOnly:   false
  clickhouse-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-config
    Optional:  false
  clickhouse-metrica:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-metrica
    Optional:  false
  clickhouse-users:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      clickhouse-users
    Optional:  false
  default-token-cc9t9:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-cc9t9
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  56s (x7 over 9m45s)  default-scheduler  0/6 nodes are available: 6 node(s) didn't match node selector.
[root@123-40 clickhouse]#


szbjb commented Oct 8, 2019

They have been stuck in Pending the whole time.

liwenhe1993 (Owner) commented

Based on the pod description you provided, the error is as follows:

Warning FailedScheduling 56s (x7 over 9m45s) default-scheduler 0/6 nodes are available: 6 node(s) didn't match node selector.

That is, no Kubernetes node with the matching label could be found, so the pods cannot be scheduled.
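
You can confirm this by listing the nodes that carry the label required by the chart's nodeAffinity; an empty result means no node matches:

kubectl get nodes -l application/clickhouse=true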

There are currently two ways to solve your problem:

  1. Comment out the entire affinity configuration in the values.yaml file (I forgot to comment it out when committing); a sketch of the commented-out block follows this list:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: "application/clickhouse"
          operator: In
          values:
          - "true"
  2. Add the label to every node of your Kubernetes cluster:
kubectl label node <node_name> application/clickhouse=true
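
For option 1, the commented-out block in values.yaml would look roughly like this (a sketch; it assumes the chart template simply skips the affinity section when the value is absent):

# affinity:
#   nodeAffinity:
#     requiredDuringSchedulingIgnoredDuringExecution:
#       nodeSelectorTerms:
#       - matchExpressions:
#         - key: "application/clickhouse"
#           operator: In
#           values:
#           - "true"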
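
For option 2, here is a quick way to label every node at once and verify the result (a sketch, assuming a bash shell with kubectl pointed at your cluster):

# Label every node in the cluster, then list the nodes that match.
for n in $(kubectl get nodes -o name); do
  kubectl label "$n" application/clickhouse=true
done
kubectl get nodes -l application/clickhouse=true

Once at least one node carries the label, the scheduler retries the Pending pods automatically; there is no need to delete them.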


szbjb commented Oct 8, 2019 via email
