
New installations of versions 1.9.2 yield errors #2

Closed
sfxworks opened this issue Feb 27, 2021 · 3 comments

@sfxworks
Contributor

sfxworks commented Feb 27, 2021

Hello,

I deviated from my custom values to try the newest version and received an error from the kube API server, possibly due to a Kubernetes binary version difference or new required flags. Here are the results:

kubectl get pods
NAME                                                      READY   STATUS             RESTARTS   AGE
cluster1-kubernetes-admin-6fb799f984-jds44                0/1     Running            0          3m26s
cluster1-kubernetes-apiserver-7bb6b845d8-mtl48            0/1     CrashLoopBackOff   4          3m26s
cluster1-kubernetes-apiserver-7bb6b845d8-q8fzp            0/1     CrashLoopBackOff   4          3m26s
cluster1-kubernetes-controller-manager-5bf87785cb-vfbcp   1/1     Running            0          3m26s
cluster1-kubernetes-controller-manager-5bf87785cb-vxjzc   1/1     Running            0          3m26s
cluster1-kubernetes-etcd-0                                1/1     Running            0          3m26s
cluster1-kubernetes-etcd-1                                1/1     Running            0          3m26s
cluster1-kubernetes-etcd-2                                1/1     Running            1          3m26s
cluster1-kubernetes-kubeadm-tasks-c598t                   1/1     Running            0          3m26s
cluster1-kubernetes-scheduler-5cf7cb57c7-jrc4d            1/1     Running            0          3m26s
cluster1-kubernetes-scheduler-5cf7cb57c7-wgq8q            1/1     Running            0          3m26s
cluster1-ltsp-585c59599-96wwd                             3/3     Running            0          3m26s

kubectl logs -f cluster1-kubernetes-apiserver-7bb6b845d8-mtl48
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0227 13:45:20.901095       1 server.go:632] external host was not specified, using 10.244.0.67
Error: [service-account-issuer is a required flag, --service-account-signing-key-file and --service-account-issuer are required flags]
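For context, newer Kubernetes releases make the service-account token flags mandatory on kube-apiserver, which is exactly what this error reports. A minimal sketch of the flags in question (the issuer URL and key path below are illustrative, not chart defaults):

```shell
# Illustrative only: kube-apiserver now refuses to start without these flags.
# The issuer URL and signing-key path are example values.
kube-apiserver \
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \
  --service-account-signing-key-file=/pki/sa/tls.key \
  # ...remaining flags as configured by the chart
```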

Here is the values file that I used:

# ------------------------------------------------------------------------------
# Kubernetes control-plane
# ------------------------------------------------------------------------------
kubernetes:
  enabled: true
  # See all available options for kubernetes-in-kubernetes chart on:
  # https://github.com/kvaps/kubernetes-in-kubernetes/blob/master/deploy/helm/kubernetes/values.yaml

  persistence:
    enabled: true
    storageClassName: longhorn

  apiServer:
    service:
      type: LoadBalancer
      annotations: {}
      loadBalancerIP:

    # Specify external endpoints here to make the Kubernetes cluster reachable externally
    certSANs:
#      ipAddresses:
#      - 10.9.8.10
      dnsNames:
      - farmcluster.sfxworks.net
    #extraArgs:
    #  # advertise-address is required for kube-proxy
    #  advertise-address: 10.9.8.10

# ------------------------------------------------------------------------------
# Kubeadm token generator
# (tokens are needed to join nodes to the cluster)
# ------------------------------------------------------------------------------
tokenGenerator:
  enabled: true
  tokenTtl: 24h0m0s
  schedule: "0 */12 * * *"

# ------------------------------------------------------------------------------
# Network boot server configuration
# ------------------------------------------------------------------------------
ltsp:
  enabled: true
  image:
    repository: docker.io/kvaps/kubefarm-ltsp
    tag: v0.7.0
    PullPolicy: IfNotPresent
    PullSecrets: []
  replicaCount: 1

  publishDHCP: true
  service:
    enabled: true
    type: LoadBalancer
    loadBalancerIP:
    labels: {}
    annotations: {}

  labels: {}
  annotations: {}
  podLabels: {}
  podAnnotations: {}
  nodeSelector: {}
  tolerations: []
  affinity: {}
  sidecars: []
  extraVolumes: []

  config:

    # Disable ubuntu apt auto-update task
    disableAutoupdate: true

    # from /usr/share/zoneinfo/<timezone> (eg. Europe/Moscow)
    #timezone: UTC

    # SSH-keys authorized to access the nodes
    sshAuthorizedKeys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8+LqzvVcJShK7W+sZgCQ7kBiUGbZBIXkhpsaFTqICUSRNZ5W8RpK7YHxnHlyN1GgOUzFa1kGpq6/ZX5hPN8QivHV338EzJ5zd7uYTwu1mtxJtKVmj6Ru9Sz/B/IBqqcttQnaMdIDkyFzV5L8M+eVSA0Pxlvr1pftRcRJwpL/ie6jbRgkpD0LpDaskbcbJbQUn8DSesb43XgoG/maUQuCZU94FjQfVhNJe8NoV1BM2WTaaZKWmeQcKwZdECgYSDzQ3Ed+tfAEmt53ZOxEwXS2QZUTnMt/PvrCxKEC+CHctCifR7Eba3hhTAx/qgvm/5Er8hm2wwtSdMlPoIqOIORHz

    # Hashed password for root, use `openssl passwd -1` to generate one
    #rootPasswd: $1$jaKnTiEb$IhpsNUfssXQ8eQg8orald0 # hackme

    # Sysctl settings for the nodes
    sysctls:
      net.ipv4.ip_forward: 1
      net.ipv6.conf.all.forwarding: 1

    # Modules to load during startup
    modules:
      - br_netfilter
      #- ip_vs
      #- ip_vs_rr
      #- ip_vs_wrr
      #- ip_vs_sh

    # Docker configuration file
    dockerConfig:
      exec-opts: [ native.cgroupdriver=systemd ]
      iptables: false
      ip-forward: false
      bridge: none
      log-driver: journald
      storage-driver: overlay2

    # Extra options for ltsp.conf
    options:
      MENU_TIMEOUT: 0
      #KERNEL_PARAMETERS: "forcepae console=tty1 console=ttyS0,9600n8"

    # Extra sections for ltsp.conf
    sections: {}
      #init/: |
      #  cp -f /etc/ltsp/journald.conf /etc/systemd/journald.conf
      #initrd-bottom/: |
      #  echo "Hello World!"

    # These files will be copied into /etc/ltsp directory for all clients
    # Note: all *.service files will be copied and enabled via systemd
    extraFiles: {}
      #journald.conf: |
      #   [Manager]
      #   SystemMaxUse=200M
      #   RuntimeMaxUse=200M

    # Optionally you might want to map additional ConfigMaps or Secrets;
    # they will be projected into the /etc/ltsp directory
    extraProjectedItems: []
      #- secret:
      #    name: ipa-join-token


# ------------------------------------------------------------------------------
# Nodes configuration
# ------------------------------------------------------------------------------
nodePools:
  -
    # DHCP range for the node pool, required for issuing leases.
    # See --dhcp-range option syntax on dnsmasq-man page.
    # Note: the range will automatically be appended with the set:{{ .Release.Name }}-ltsp option.
    #
    # examples:
    #   172.16.0.0,static,infinite
    #   172.16.0.1,172.16.0.100,255.255.255.0,172.16.0.255,1h
    #
    # WARNING setting broadcast-address is required! (see: https://www.mail-archive.com/[email protected]/msg14137.html)
    range: "192.168.0.100,192.168.0.199,255.255.255.0,192.168.0.255,1h"

    # DHCP configuration for each node
    nodes: []
      #- name: node1
      #  mac: 02:00:ac:10:00:0a
      #  ip: 172.16.0.10

    # Extra tags applied to this node pool; tags may carry additional
    # DHCP and LTSP options, as well as node labels and taints
    tags:
      #- debug
      #- foo

    # Extra Options for Dnsmasq. This section can be used to setup Circuit ID matching.
    # Note: Symbol '?' will automatically be replaced by the '{{ .Release.Name }}-ltsp' tag.
    extraOpts: []
      #- dhcp-circuitid: "set:port_0,ge-0/0/0.0:staging"
      #- dhcp-circuitid: "set:port_1,ge-0/0/1.0:staging"
      #- dhcp-range: "set:?,tag:port_0,192.168.11.10,192.168.11.10,255.255.255.0"
      #- dhcp-range: "set:?,tag:port_1,192.168.11.11,192.168.11.11,255.255.255.0"
      #- tag-if: "set:bar,tag:?,tag:port_0"
      #- tag-if: "set:bar,tag:?,tag:port_1"

# ------------------------------------------------------------------------------
# Extra options can be specified for each tag
# ("all" options are applicable to any node)
# ------------------------------------------------------------------------------
tags:
  dhcpOptions:
    # dnsmasq options
    # see all available options list (https://git.io/JJ0dH)
    all:
      router: 192.168.0.1
      dns-server: 1.1.1.1
      broadcast: 192.168.0.255
#      domain-search: sfxworks.net

  ltspOptions:
    all:
      FSTAB_KUBELET: "tmpfs /var/lib/kubelet tmpfs x-systemd.wanted-by=kubelet.service 0 0"
      FSTAB_DOCKER: "tmpfs /var/lib/docker tmpfs x-systemd.wanted-by=docker.service 0 0"
    debug:
      DEBUG_SHELL: "1"

  kubernetesLabels:
    all: {}
    #foo:
    #  label1: value1
    #  label2: value2

  kubernetesTaints:
    all: {}
    #foo:
    #  - effect: NoSchedule
    #    key: foo
    #    value: bar
@kvaps
Member

kvaps commented Mar 2, 2021

Hi, thanks for the report, I'm going to fix it in the next release.

For now please use the following values:

kubernetes:
  apiServer:
    extraArgs:
      service-account-issuer: https://kubernetes.default.svc.cluster.local
      service-account-signing-key-file: /pki/sa/tls.key
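Helm deep-merges such overrides into the chart's default values, which is why this small fragment is enough. A minimal sketch of that merge in Python (the nested keys mirror the values above; this is illustrative, not chart code):

```python
def deep_merge(base, override):
    """Recursively merge override into base, Helm-style (override wins)."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

defaults = {"kubernetes": {"apiServer": {"extraArgs": {}}}}
override = {"kubernetes": {"apiServer": {"extraArgs": {
    "service-account-issuer": "https://kubernetes.default.svc.cluster.local",
    "service-account-signing-key-file": "/pki/sa/tls.key",
}}}}

result = deep_merge(defaults, override)
print(result["kubernetes"]["apiServer"]["extraArgs"]["service-account-issuer"])
# → https://kubernetes.default.svc.cluster.local
```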

kvaps added a commit to aenix-io/kubernetes-in-kubernetes that referenced this issue Mar 2, 2021
Keys --service-account-issuer --service-account-signing-key-file are now
required.

aenix-io/kubefarm#2
@sfxworks
Contributor Author

sfxworks commented Mar 5, 2021

The values above worked. Thank you.

@kvaps
Member

kvaps commented Mar 16, 2021

Should be fixed in v0.10.0 by hardcoding --service-account-issuer and --service-account-signing-key-file flags
ref aenix-io/kubernetes-in-kubernetes@c3573df

@kvaps kvaps closed this as completed Mar 16, 2021