network-binding-plugin: add plugin for vhostuser interfaces. #294

Open · wants to merge 4 commits into base: main
Conversation

bgaussen

What this PR does / why we need it:

This design proposal aims at implementing a new network binding plugin to support vhostuser interfaces. This will allow fast userspace datapath when used with a userspace dataplane like OVS-DPDK or VPP.
This design proposal takes into consideration sockets sharing issues between kubevirt and dataplane pods.

Special notes for your reviewer:

Checklist

This checklist is not enforced, but it is a reminder of items that could be relevant to every PR.
Approvers are expected to review this list.

  • Design: A design document was considered and is present.
  • PR: The PR description is expressive enough and will help future contributors
  • Community: Announcement to kubevirt-dev was considered

Release note:

NONE

This design proposal implements a new network binding plugin to support vhostuser interfaces.
This will allow fast userspace datapath when used with a userspace dataplane like OVS-DPDK or VPP.
This design proposal takes into consideration sockets sharing issues between kubevirt and dataplane pods.

Signed-off-by: Benoît Gaussen <[email protected]>
@kubevirt-bot kubevirt-bot added the dco-signoff: yes Indicates the PR's author has DCO signed all their commits. label May 22, 2024
@kubevirt-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign rmohr for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kubevirt-bot

Hi @bgaussen. Thanks for your PR.

PRs from untrusted users cannot be marked as trusted with /ok-to-test in this repo, meaning untrusted PR authors can never trigger tests themselves. Collaborators can still trigger tests on the PR using /test all.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

- hides the KubeVirt implementation details. Currently, you need to know where the KubeVirt sockets are located in the virt-launcher filesystem. Potentially, if we change the directory path for the sockets, this would break the CNI plugin
- can isolate the resources dedicated to that particular plugin

The drawback is that the `virt-launcher` pod would need to run with `privileged: true` in order to do the bind mount.
Member

I don't think this is true. The directory can be exposed by virt-handler, hence there is no need for privileged on virt-launcher. Running it privileged would also be unfeasible, since virt-launcher is untrusted.

Author

I don't yet see clearly how virt-handler, virt-launcher and the exposed directories interact. I need to check that.

- dataplane: 1
This single resource is requested by the userspace dataplane, and adds a `/var/run/vhost_sockets` mount to the dataplane pod.
- vhostuser sockets: n
These are as many resources as we want to handle; they are requested by the `virt-launcher` pod using the vhostuser plugin. This makes the device plugin create a per-pod directory like `/var/run/vhost_sockets/<launcher-id>` and mount it into the `virt-launcher` pod.
Member

Have you tried this? I still worry that the directory created by the DP will still have the original issue with HostToContainer mount propagation.

Author

The only bind mount needed is the one the DP will push through kubelet. As there will be no further bind mounts inside the pods, and as the CNI will not need to do any either, there should be no issue with mountPropagation.

Member

Still, I'm not sure how kubelet mounts the directory inside the pod. It might be mounted with HostToContainer, in which case the socket won't be visible in the host directory. I would really like to check this with real code.

Author (@bgaussen), May 31, 2024

I understand your doubts ;) So I tested the scenario with generic-device-plugin and the following resource definition:

```yaml
        - --domain
        - dataplane.io
        - --device
        - |
          name: dataplane
          groups:
            - count: 1
              paths:
                - path: /var/lib/sockets
                  mountPath: /var/lib/sockets
                  type: Mount
                  permissions: mrw
        - --device
        - |
          name: sockets
          groups:
            - paths:
                - path: /var/lib/sockets/pod*
                  mountPath: /var/lib/socket
                  type: Mount
                  permissions: mrw
```

When a pod's container requests a dataplane.io/sockets resource, it gets assigned one of the existing /var/lib/sockets/podXXX directories, which is bind mounted to /var/lib/socket in the container. Any file created there by the container is available on the host in /var/lib/sockets/podXXX.

If we have a dataplane pod requesting dataplane.io/dataplane, the whole /var/lib/sockets directory is bind mounted to /var/lib/sockets, and the dataplane container can see the files created in /var/lib/sockets/podXXX.
The reverse is also true: if the dataplane container creates a file in /var/lib/sockets/podXXX, it is available to the earlier pod's container.
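For illustration, a minimal pod spec requesting one of these resources could look like the following sketch (the pod name, image and command are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: socket-consumer            # placeholder name
spec:
  containers:
    - name: app
      image: busybox               # placeholder image
      command: ["sleep", "infinity"]
      resources:
        limits:
          # resource exposed by the generic-device-plugin config above:
          # one per-pod directory, bind mounted at /var/lib/socket
          dataplane.io/sockets: 1
```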

I tested with only unprivileged pods.

By the way, I checked the propagation option of the target mount; it's the default, private.

```
/ # findmnt -o TARGET,PROPAGATION /var/lib/one-socket
TARGET              PROPAGATION
/var/lib/one-socket private
```

But as long as we don't create new mounts in that target, there is no propagation issue.

- dataplane: 1
This single resource is requested by the userspace dataplane, and adds a `/var/run/vhost_sockets` mount to the dataplane pod.
- vhostuser sockets: n
These are as many resources as we want to handle; they are requested by the `virt-launcher` pod using the vhostuser plugin. This makes the device plugin create a per-pod directory like `/var/run/vhost_sockets/<launcher-id>` and mount it into the `virt-launcher` pod.
Member

Is the pod ID known by the CNI plugin or outside KubeVirt?

Author

The DP can push an annotation to the pod with an ID it generates. So it's not really the virt-launcher pod ID, as that does not exist at the time the DP is called.
The CNI can read the pod annotation and use it to configure the vhostuser interface with the right path in the data plane.
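As a sketch of what that could look like (the annotation key and value are hypothetical, not an agreed API):

```yaml
metadata:
  annotations:
    # hypothetical key/value generated by the device plugin; the CNI or any
    # dataplane configurator derives /var/run/vhost_sockets/<generated-id> from it
    vhostuser.dataplane.io/socket-dir-id: "a08a0fcbdea"
```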

We still have to take care of directory and socket permissions (and SELinux categories?).

## API Examples
(tangible API examples used for discussion)
Member

Please still add the examples here; it helps with reviews and after this proposal is merged.

@EdDev (Member) commented May 22, 2024

/sig-network
/assign

@EdDev (Member) left a comment

I revisited this a bit late, sorry.

There are several points which I do not fully understand yet.
I would be interested in having a meeting to discuss this in detail, hopefully clarifying a few points and allowing this to be pushed forward.

The configuration of `vhostuser` secondary interfaces in the dataplane is the responsibility of Multus and a CNI such as the `userspace CNI`.

## Definition of Users
Users of the feature are everyone who deploys a VM.
Member

Usually we have these basic users:

  • Cluster Admin
  • VM User
  • Guest User

In this case, I think you also have:

  • Network Binding Plugin Developer

- As a user, I want the `vhostuser` interface to be configured with a specific MAC address.
- As a user, I want to enable multi-queue on the `vhostuser` interface
- As a Network Binding Plugin developer, I want the shared socket path to be accessible to the virt-launcher pod
- As a CNI developer, I want to access the shared vhostuser sockets from the Multus pod
Member

I think this one is also under Network Binding Plugin developer.

- As a CNI developer, I want to access the shared vhostuser sockets from the Multus pod

## Repos
KubeVirt repo, and most specifically cmd/sidecars.
Member

Can you please provide links for clarity?

1. Creates a new interface with `type='vhostuser'`
2. Sets the MAC address if specified in the VMI spec
3. If `networkInterfaceMultiqueue` is set to `true`, adds the number of queues calculated from the number of cores of the VMI
4. Adds `memAccess='shared'` to all NUMA cell elements
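For illustration only, a sketch of the domain XML fragments after these mutations could look roughly like this (socket path, MAC address and queue count are placeholders):

```xml
<cpu>
  <numa>
    <!-- memAccess='shared' added to every NUMA cell -->
    <cell id='0' cpus='0-3' memory='4194304' unit='KiB' memAccess='shared'/>
  </numa>
</cpu>
<interface type='vhostuser'>
  <!-- placeholder socket path; mode='server' means qemu creates and owns the socket -->
  <source type='unix' path='/var/run/kubevirt/sockets/poda08a0fcbdea' mode='server'/>
  <mac address='02:00:00:00:00:01'/>  <!-- only if a MAC is specified in the VMI spec -->
  <model type='virtio'/>
  <driver queues='4'/>                <!-- only if networkInterfaceMultiqueue is true -->
</interface>
```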
Member

It is worth adding a note that the change needs to be idempotent.

To clarify, the hook is called multiple times and the result needs to be consistent between the first and following changes.

Also, I am unsure what the side effects of such a marking are. Please share how such a change will influence the VM.

Author

Hi Ed,

You mean the hook is called once for each device using the same binding plugin? If so, yes, we need to add some words about that.

Contributor

The OnDefineDomain hook can be called multiple times by KubeVirt regardless of the device count.
Thus the sidecar implementation of the OnDefineDomain hook should be idempotent.

Member

You mean the hook is called once for each device using the same binding plugin? If so, yes, we need to add some words about that.

Just assume that it can be called many times; you should not assume how many times.
Today, it is called twice in a normal flow, for the purpose of calculating some defaults that libvirt provides.
In the past it was called many more times (but we have since optimized it).

In a flow with failures, reconciliation may cause it to be called several times.

In the future, other hook points in the code flow may trigger it.

</interface>
```

This design leverages the existing `sockets` emptyDir mounted in `/var/run/kubevirt/sockets`. This allows the CNI to bind mount the socket emptyDir (`/var/lib/kubelet/<pod uid>/volumes/kubernetes.io~empty-dir/sockets`) to a host directory available to the dataplane pod through a hostPath mount.
Member

I'm a bit confused by the changes this proposal has passed through.
Can you please keep the alternatives, and the reasons they were dropped, in an appendix?
E.g. #294 (comment) discussed the DP performing the mounting.

Regarding this intro sentence:

  • How exactly can a CNI perform a mount to the pod/container? At the time the CNI is executed, there is no filesystem namespace defined yet. The CNI is also unlikely to have access to the api-server; it is unsafe and limited to trusted binaries only (e.g. Multus).
  • The diagram describes in step 4 that qemu creates the socket.
    First, it would be nice to see the diagram steps described in detail in text as well.
    In general, libvirt handles interfaces in an unmanaged manner, i.e. all it needs is provided to it (e.g. tap device). This is done to allow libvirt to operate in an unprivileged/rootless mode.
    • Is socket creation possible with rootless privileges?
    • Can the socket be created by something else and then just consumed by libvirt?

Author

Regarding this intro sentence:

  • How exactly can a CNI perform a mount to the pod/container? At the time the CNI is executed, there is no filesystem namespace defined yet. The CNI is also unlikely to have access to the api-server; it is unsafe and limited to trusted binaries only (e.g. Multus).

Indeed the CNI needs to access the api-server. Multus 3 allowed that through its own kubeconfig.multus. With Multus 4 thick mode, it's no longer possible, and CNIs requesting api-server access need to handle their own credential creation.

  • The diagram describes in step 4 that qemu creates the socket.
    First, it would be nice to see the diagram steps described in detail in text as well.
    In general, libvirt handles interfaces in an unmanaged manner, i.e. all it needs is provided to it (e.g. tap device). This is done to allow libvirt to operate in an unprivileged/rootless mode.

    • Is socket creation possible with rootless privileges?

Yes it is, as long as the filesystem permissions AND SELinux allow it.

  • Can the socket be created by something else and then just consumed by libvirt?

It can. vhostuser sockets are created in server mode and consumed in client mode. Usually QEMU is the server and the data plane is the client. By the way, the other way around is being deprecated in OVS-DPDK, for example.
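For reference, a sketch of the dataplane (client) side with OVS-DPDK; the bridge and port names are placeholders, and the socket path reuses the per-launcher directory from the proposal:

```sh
# the dataplane connects as a client to the socket qemu created in server mode
ovs-vsctl add-port br-dpdk vhu0 -- \
  set Interface vhu0 type=dpdkvhostuserclient \
  options:vhost-server-path=/var/run/vhost_sockets/<launcher-id>/vhu0
```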

Member

Thank you for the clarification.
As a generic policy, I would not recommend accessing the api-server from the CNI binary. It is unsafe and can compromise the cluster, especially when a worker node is compromised.
You can add rules to limit the access and mitigate the vulnerability, but it will be much cleaner if it is avoided.

If the mount path is known in advance, I think we can find alternatives.


## Alternative designs

Some alternative designs were discussed in the [kubevirt-dev mailing-list](https://groups.google.com/g/kubevirt-dev/c/3w_WStrJfZw/m/yWSBpDAKAQAJ).
Member

Please summarize them in an appendix so we can see the full picture in one place.

Comment on lines 95 to 100
This requires implementing a new network binding plugin mechanism through which we could expose the content of a `virt-launcher` directory to an external plugin.
The plugin registration in the KubeVirt resource would define the target directory on the node where the directory should be exposed.

This diagram explains this mechanism.

![kubevirt-plugin-extension](kubevirt-plugin-extension.png)
Member

I do not understand what is suggested here.
In virt-launcher we have a plugin container (the sidecar) and the compute container in which libvirt runs.
We have virt-handler, in which privileged operations may be conducted on the virt-launcher and on the api-server.

  • Do you want a new parameter to be added to the network binding plugin spec to share data of the plugin container with the node, the virt-handler or the compute container in the virt-launcher?
  • I do not understand how this is related to the CNI plugin. I guess this is related to my misunderstanding on how a CNI plugin can mount anything (I asked it in a different thread).

Author

I do not understand what is suggested here. In virt-launcher we have a plugin container (the sidecar) and the compute container in which libvirt runs. We have virt-handler, in which privileged operations may be conducted on the virt-launcher and on the api-server.

I guess the roles of both virt-launcher and virt-handler need to be clarified when it comes to network binding plugins.

  • Do you want a new parameter to be added to the network binding plugin spec to share data of the plugin container with the node, the virt-handler or the compute container in the virt-launcher?

In that case it'd be a new network binding plugin spec parameter.

  • I do not understand how this is related to the CNI plugin. I guess this is related to my misunderstanding on how a CNI plugin can mount anything (I asked it in a different thread).

Indeed, the CNI needs to know where the socket is located in order to configure the data plane with this socket. In the current userspace CNI implementation, the CNI does a bind mount of the socket into the data plane filesystem namespace.

Member

the CNI does a bind mount of the socket into the data plane filesystem namespace.

This part I do not understand.
AFAIU there is no socket yet, as that is created by libvirt when the domain is created.
Perhaps we can clarify this by listing all the steps that are actually done.

Author

Sorry, I meant "the CNI does a bind mount of the directory where the socket will be created"...
You can refer to the diagram included in the DP.

@EdDev (Member) commented Jun 2, 2024

/sig network

@toelke commented Jun 17, 2024

I only found this discussion now, so I am very late in offering our experience:

We are productively using a self-built vhost-user management:

  1. An admission webhook patches all pods with the label kubevirt.io=virt-launcher to contain a hostPath volume (/run/vpp, in our case) and corresponding volumeMounts
  2. A sidecar that patches the XML in a way very similar to this proposal. We add a new PCI subtree for the virtual devices, but I know that we are breaking KubeVirt expectations there. Please note that having <reconnect enabled="yes"/> is useful here (see the sketch below).
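For reference, a sketch of such an interface with the reconnect element (the socket path and timeout are placeholders; mode='client' assumes the dataplane driver created the socket):

```xml
<interface type='vhostuser'>
  <source type='unix' path='/run/vpp/sock0' mode='client'>
    <!-- let qemu retry the connection if the dataplane restarts -->
    <reconnect enabled='yes' timeout='10'/>
  </source>
  <model type='virtio'/>
</interface>
```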

In our case, the driver creating the socket also creates a config.json in the same folder, which is read by the sidecar to communicate information to it.

I will join the meeting on Wednesday.

@ormergi (Contributor) left a comment

Thanks for the proposal

Please see my inline comments

</numa>
</cpu>
<interface type='vhostuser'>
<source type='unix' path='/var/run/kubevirt/sockets/poda08a0fcbdea' mode='server'/>
Contributor

I wonder, is the socket created by the CNI inside the virt-launcher pod?

If so, I think the socket path can be reflected in the Multus network-status annotation under the device-info element, see https://github.com/k8snetworkplumbingwg/device-info-spec/blob/main/SPEC.md#315-vhost-user

The vhost-user device-info can be exposed using the network binding plugin API's downwardAPI.
The sidecar could then read the vhost-user socket from the mounted downwardAPI volume and specify it in the domain XML.

Please see the vDPA example for reference on how the downwardAPI can be used.
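For context, a sketch of a network-status entry carrying vhost-user device-info, per the device-info spec linked above (the network name, interface and socket path are illustrative):

```json
{
  "name": "default/userspace-network",
  "interface": "net1",
  "device-info": {
    "type": "vhost-user",
    "version": "1.1.0",
    "vhost-user": {
      "mode": "server",
      "path": "/var/run/vhostuser/vhu0"
    }
  }
}
```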

Author

Thank you for the hints. Indeed the new network binding plugin downwardAPI will be helpful!

1. Creates a new interface with `type='vhostuser'`
2. Sets the MAC address if specified in the VMI spec
3. If `networkInterfaceMultiqueue` is set to `true`, adds the number of queues calculated from the number of cores of the VMI
4. Adds `memAccess='shared'` to all NUMA cell elements
Contributor

The OnDefineDomain hook can be called multiple times by KubeVirt regardless of the device count.
Thus the sidecar implementation of the OnDefineDomain hook should be idempotent.

- As a user, I want to create a VM with one or several `vhostuser` interfaces attached to a userspace dataplane.
- As a user, I want the `vhostuser` interface to be configured with a specific MAC address.
- As a user, I want to enable multi-queue on the `vhostuser` interface
- As a Network Binding Plugin developper, I want the shared socket path to be accessible to virt-launcher pod
Contributor

typo

Suggested change
- As a Network Binding Plugin developper, I want the shared socket path to be accessible to virt-launcher pod
- As a Network Binding Plugin developer, I want the shared socket path to be accessible to virt-launcher pod

@kubevirt-bot kubevirt-bot added dco-signoff: no Indicates the PR's author has not DCO signed all their commits. and removed dco-signoff: yes Indicates the PR's author has DCO signed all their commits. labels Jun 26, 2024
@kubevirt-bot

Thanks for your pull request. Before we can look at it, you'll need to add a 'DCO signoff' to your commits.

📝 Please follow instructions in the contributing guide to update your commits with the DCO

Full details of the Developer Certificate of Origin can be found at developercertificate.org.

The list of commits missing DCO signoff:

  • af99c1c Re-factor design proposal to focus on device plugin proposal

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@bgaussen (Author)

Hi all,

I just committed modifications to this design proposal PR to focus on the device plugin based solution to facilitate socket sharing, as discussed during last week's meeting.

Regards,

Benoit.

These are as many resources as we want to handle and may represent the number of ports of the dataplane vSwitch.
They are requested through the VM or VMI definition in the resources request spec; in turn, the `virt-launcher` pod will request the same resources.
This makes the device plugin create a per-pod directory like `/var/run/vhost_sockets/<pod-dp-id>` and mount it into the `virt-launcher` pod at the well-known location `/var/run/vhostuser`.
The device plugin has to generate a `pod-dp-id` and push it as an annotation on the `virt-launcher` pod. This will be used later by the CNI, or any component that configures the vhostuser socket in the dataplane, to use the right path.
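As an illustration, the request in the VMI definition could look like the following sketch (the resource name is hypothetical and the rest of the spec is elided):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-dpdk                          # placeholder name
spec:
  domain:
    resources:
      requests:
        # hypothetical device plugin resource; virt-launcher requests it in turn
        dataplane.io/vhostuser-sockets: "2"
```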
Contributor

Note that a CNI plugin that interacts with the cluster API (e.g. to get pod annotations) is not good practice and is better avoided (for example, if the CNI hangs, pod creation hangs).

Author

You're right, and we are trying not to rely on the API server for the CNI to get annotations.
For example, this Multus PR could ease the task.

## Repos
KubeVirt repo, and most specifically [cmd/sidecars](https://github.com/kubevirt/kubevirt/tree/main/cmd/sidecars).

## Design
Contributor

Which kinds of networks is the plugin going to support? (pod network, secondary networks)

Author

The plugin is designed to support secondary networks.

- check the VM is running

# Implementation Phases
1. First implementation of the `network-vhostuser-binding` done

can you link to where that is?

Author (@bgaussen), Jul 31, 2024

Yuval, sorry, it's not yet ready to be shared publicly. We can have a talk if you want to have a look at the current implementation.

@bgaussen (Author)

Hi all,

A little update: we're currently implementing the Device Plugin for socket sharing. We're implementing the device-info spec to share device information between the Device Plugin and the CNI, and we'll rely on DownwardAPI support in KubeVirt 1.3 to get the network-status annotation in the network binding plugin (as suggested by @ormergi, thx 😁).

I'll update the Design Proposal ASAP.
