
[Question][Suggestion] How to provision k3os servers usual way? #314

Open

Moep90 opened this issue Dec 3, 2019 · 17 comments

Labels: area/configuration, kind/question (Further information is requested)

Moep90 commented Dec 3, 2019

I saw several issues noting that using Ansible is not possible due to the lack of Python etc. (which is fine, given the aims of the project).
So I came up with a pretty ugly Ansible raw-command workaround, because I had no better idea.
I would like to ask what possibilities you see for customizing k3OS after installation.

  1. I build my VMware image of k3OS with the default config.yaml provided by the README.md
  2. I then spin up a cluster (5 servers: 3 masters / 2 workers) with an external datasource
  3. Therefore I need to update the config.yaml after the VM template has spun up, to make each node either a server or an agent
  4. What happens if I need to change the following:
  • token
  • environment or taints
  • adding/removing new/old ssh_pub_keys?

Are there any suggestions from @dweomer or the community?

@dweomer dweomer added area/configuration kind/question Further information is requested labels Dec 3, 2019
dweomer (Contributor) commented Dec 3, 2019

@Moep90 wrote:

  4. What happens if I need to change the following:
  • token
  • environment or taints
  • adding/removing new/old ssh_pub_keys?

  • token
    Update the config.yaml you are passing in via the data source and reboot (may not matter after the node has joined the cluster).
  • environment or taints
    • environment
      Update the config.yaml you are passing in via the data source and reboot.
    • taints
      kubectl taint is what you will have to use (see the sketch after this list); see also: https://github.com/rancher/k3os/blob/master/README.md#k3ostaints

      Taints to set on the current node when it is first registered. After the node is first registered the value of this field is ignored.

  • ssh_authorized_keys
    Update the config.yaml you are passing in via the data source and reboot.
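
For illustration, managing taints on an already-registered node might look like this (a minimal sketch; the node name and taint key/value are placeholders):

# add a taint to a registered node
kubectl taint nodes k3os-node-1 dedicated=worker:NoSchedule

# remove it again (note the trailing dash)
kubectl taint nodes k3os-node-1 dedicated=worker:NoSchedule-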

Moep90 (Author) commented Dec 3, 2019

@dweomer thanks. I think I wasn't clear enough.
I wanted to know how, technology-wise, you would usually update the config.yaml. I don't want to change it by hand, but Ansible doesn't work due to the missing Python.

dweomer (Contributor) commented Dec 3, 2019

@Moep90 from a thread in the rancher-users#k3os slack:

config.yaml supplied via user data will get written to /run/config/userdata and picked up and merged with

  • /k3os/system/config.yaml
  • /var/lib/rancher/k3os/config.yaml
  • /var/lib/rancher/k3os/config.d/*.yaml

You shouldn't write to /k3os/system/config.yaml, but you are free to modify /var/lib/rancher/k3os/config.yaml and/or drop in configs under /var/lib/rancher/k3os/config.d/, as these locations are persisted to disk and never written to by k3OS (unless you specify such a location via a write_files directive in a config.yaml, or otherwise modify it via boot_cmd or run_cmd).
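
As a concrete illustration, a persistent drop-in might look like this (a minimal sketch; the file name, keys, and values are made up):

# /var/lib/rancher/k3os/config.d/10-local.yaml -- merged at boot, never written to by k3OS
ssh_authorized_keys:
- ssh-rsa AAAAB3Nza... user@example
k3os:
  environment:
    http_proxy: http://proxy.example.com:3128
    https_proxy: http://proxy.example.com:3128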

Moep90 (Author) commented Dec 3, 2019

@dweomer I am just talking about /var/lib/rancher/k3os/config.yaml.

My current situation/setup is the following:
I'm in a VMware vCenter/vSphere environment.
So there is no data_source like:

aws
gcp
openstack
packet
scaleway
vultr
cdrom

...available to provide user_data or the like.

My steps are:

  1. I create a VMware template using Packer
  2. I deploy n instances of the template (let's say 5)
  3. On 3 instances I would like to update /var/lib/rancher/k3os/config.yaml to make them k3s servers
  4. On 2 instances I would like to update /var/lib/rancher/k3os/config.yaml to make them k3s agents

How can I achieve steps 3 + 4 with any kind of automation, but without using boot_cmd or run_cmd?
Ansible seems not to be possible...
I don't want to check my config.yaml into my repository because it contains sensitive data like the node token.
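
For reference, the two config.yaml variants in steps 3 and 4 might look roughly like this (a sketch based on the k3OS README; the token, URL, and key are placeholders):

# server.yaml -- control-plane node
ssh_authorized_keys:
- ssh-rsa AAAAB3Nza... ops@example
k3os:
  token: REPLACE_WITH_CLUSTER_TOKEN

# agent.yaml -- worker node joining an existing server
ssh_authorized_keys:
- ssh-rsa AAAAB3Nza... ops@example
k3os:
  server_url: https://192.168.123.85:6443
  token: REPLACE_WITH_NODE_TOKEN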

dweomer (Contributor) commented Dec 3, 2019

@Moep90 in vSphere (and I believe VMware on a workstation) the cdrom datasource is actually available to you. You'll need to:

  1. Have /k3os/system/config.yaml on the VMs you've installed specify the data source (this can be achieved by editing the VM/template or modifying the Packer build to upload a config.yaml at build time):

k3os:
  data_sources:
  - cdrom

  2. Attach to your VM(s) your ISO image with config.yaml renamed to user-data.txt at the root (see the sketch after this list).
  3. Boot them up and watch the magic happen!
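
Building such a userdata ISO might look like this (a minimal sketch, assuming mkisofs is available; the file and volume names are made up):

# wrap the config in an ISO9660 image with user-data.txt at its root
cp server.yaml user-data.txt
mkisofs -o server-userdata.iso -J -R -V userdata user-data.txt

# then upload server-userdata.iso to a datastore and attach it to the VM's CD drive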

Keep in mind that the Packer builds are mostly contributed by the community and only reviewed/updated sporadically. That said, the vSphere template(s) are relatively recent and look to be in good order. I am not sure you need two different builds (one for server, one for agent), however; the server build should suffice for your use case (with userdata via cdrom). This implies that you will likely want to have a server userdata ISO and an agent userdata ISO uploaded to vSphere.

See:

Moep90 (Author) commented Dec 4, 2019

@dweomer thanks for your answer, I didn't know that.

But this is way too manual / too much overhead.
Is there nothing known/planned for simpler automation?

This is my current solution:
Inventory generated by Terraform:

# ---------------------- Managed by Terraform ----------------------------------
[all:vars]
ansible_user=rancher

[kcluster-master]
kcluster-master-01 ansible_host=192.168.123.85
kcluster-master-02 ansible_host=192.168.123.86
kcluster-master-03 ansible_host=192.168.123.87

[kcluster-agent:vars]
ansible_become=yes

[kcluster-agent]
kcluster-agent-01 ansible_host=192.168.123.89
kcluster-agent-02 ansible_host=192.168.123.90
kcluster-agent-03 ansible_host=192.168.123.92


Ansible-Playbook mess:

---
- hosts: kcluster-master, kcluster-agent
  serial: 1
  gather_facts: no
  vars_files:
    - ../group_vars/all.yml
  tasks:
    - name: Template a file to /tmp/server.yaml
      template:
        src: ../templates/config_server.yaml.j2
        dest: /tmp/server.yaml
      run_once: true
      delegate_to: localhost

    - name: Template a file to /tmp/agent.yaml
      template:
        src: ../templates/config_agent.yaml.j2
        dest: /tmp/agent.yaml
      run_once: true
      delegate_to: localhost

    - debug:
        msg:
          - "{{ inventory_hostname }}"
          - "{{ ansible_host }}"

    - name: CLOUD-INIT | Master
      local_action: "command scp /tmp/server.yaml rancher@{{ ansible_host }}:~/"
      when: "'kcluster-master' in group_names"

    - name: CLOUD-INIT | COPY to correct location
      raw: cp /home/rancher/server.yaml /var/lib/rancher/k3os/config.yaml
      when: "'kcluster-master' in group_names"
      become: yes

    - name: CLOUD-INIT | Agent
      local_action: command scp /tmp/agent.yaml rancher@{{ ansible_host }}:~/
      when: "'kcluster-agent' in group_names"

    - name: CLOUD-INIT | COPY to correct location
      raw: cp /home/rancher/agent.yaml /var/lib/rancher/k3os/config.yaml
      when: "'kcluster-agent' in group_names"
      become: yes

    - name: REBOOT | TO ADD cloud-init
      raw: reboot
      become: yes

ecowden commented Dec 6, 2019

As a heavy Ansible shop, we've tackled this problem in a rather different way. I don't know if it's broadly applicable, but I figured I'd throw it out there in case better minds than mine can put it to use.

We're looking to spin up around sixty single-node, bare-metal clusters. Each one has its own metadata specific to the use case, though they're functionally the same. We need to tweak some properties uniformly, like the enabled module list, and set some properties uniquely, like hostname and node labels. Fortunately, everything we need for bootstrapping is available via the config.yaml.

I tossed together a simple API that accepts and stores a list of use-case specific configurations, one for each box. I then added a /pop endpoint that generates a config.yaml from one of those configurations and pops it off the stack.

When we bootstrap a machine, all we have to do is point it to the /pop endpoint. The machine gets a unique configuration, and voila! Everything after that we can lay down and invoke with the config.yaml's write_files and xxx_cmd fields, or just leverage the Kubernetes control plane.

For a small number of systems or a PoC, it's easy enough to walk through the live CD installer and just type the URL into the appropriate prompt. It takes under three minutes from boot to done. For larger numbers, it's probably worth tweaking the ISO / PXE booting / etc. to set k3os.install.config_url and automate away even that step. At that point, configurations could be mapped to MAC addresses and stored for repeated setup, if ever needed.
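
That last step might look something like this on the installer's kernel command line (a sketch using boot parameters from the k3OS README; the URL and device are placeholders):

# appended to the live installer's boot cmdline for a non-interactive install
k3os.mode=install k3os.install.silent=true k3os.install.device=/dev/sda k3os.install.config_url=https://config.example.com/pop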

Is this a dumb approach? Would anyone else find it useful?

msnelling commented

I'm in the same boat as @Moep90. It feels like a VMware datasource is missing here as a way to pass in the config.yaml, similar to how we can do this in RancherOS with the guestinfo.cloud-init.config.data VM extra config field.
I understand that the cdrom data source exists, but it feels rather burdensome to build a custom ISO image for (potentially) each VM, upload it to vSphere, and mount it into the VM when there is already a similar mechanism used by another Rancher product.
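
For comparison, the RancherOS mechanism referenced above works roughly like this (a sketch; assumes govc is installed, and the VM name is a placeholder):

# inject a base64-encoded config via VM extra config (RancherOS guestinfo convention)
govc vm.change -vm my-rancheros-vm \
  -e guestinfo.cloud-init.config.data="$(base64 -w0 config.yaml)" \
  -e guestinfo.cloud-init.data.encoding="base64"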

dweomer (Contributor) commented Dec 6, 2019

open-vm-tools is installed and vmtoolsd runs early during the boot phase, but I don't know if anything special must be configured for it to dump the guestinfo userdata anywhere.

Moep90 (Author) commented Dec 16, 2019

@dweomer are there any plans to support any automation technology at all out of the box (ansible, puppet, chef, ...)?

dweomer (Contributor) commented Dec 16, 2019

@dweomer are there any plans to support any automation technology at all out of the box (ansible, puppet, chef, ...)?

@Moep90 as of yet, not explicitly. The idea is that the config.yaml delivered as "user data" should be the "last mile" needed to get you up and running with k3OS, and hence k3s, and hence Kubernetes.

Moep90 (Author) commented Feb 18, 2020

@dweomer What about this idea:

  1. APK remains installed with the ISO.
  2. Through the k3os installation process, the user can specify various APK packages (e.g. python) to be installed; by default, nothing is installed.
  3. APK can then be removed during the installation process.

erkki (Contributor) commented Sep 2, 2020

@ecowden I was thinking of a similar workflow. Would it make sense for your use case that the /pop endpoint be passed a unique identifier (MAC address?) by the config downloader?

ecowden commented Sep 2, 2020

@erkki Sure, you could get pretty fancy with this kind of setup: have a full-featured config-vending API that stores and vends configs based on an identifier like a MAC address. It all depends on the level of complexity you need or want for your use case.

Good luck!
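
From the node's side, such a vending request might be as simple as (a sketch; the URL and interface name are placeholders):

# fetch a per-machine config keyed by the primary NIC's MAC address
curl -fsSL "https://config.example.com/pop?mac=$(cat /sys/class/net/eth0/address)"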

erkki (Contributor) commented Sep 3, 2020

@ecowden Yes, the reason I ask is that I'm pondering a pull request upstream to incorporate sending relevant local metadata (needs to be defined) as parameters/headers (needs to be defined) of the config request.

robertkaelin commented May 18, 2021

Just FYI for others looking for VMware support:
As far as I can tell, k3os uses the metadata executable from linuxkit for the cloud-config, and somebody made a PR there to add VMware as an information provider: linuxkit/linuxkit#3526

Not sure if this will be available here once it gets merged, but it would seem logical.

brlbil (Contributor) commented May 19, 2021

@robertkaelin I am the one who made that PR, in the hope that it would eventually be used in k3OS.
But the problem is that the linuxkit maintainers have been ignoring the PR for a long time. I am not sure it will ever be merged.
