
[WFLY-17986] Support external client access to EAP deployments on OpenShift over http #523

Draft
wants to merge 2 commits into base: main
Conversation

rachmatowicz
Contributor

This PR contains the analysis document for https://issues.redhat.com/browse/WFLY-17986
It is in draft state initially for review purposes.

An ingress controller is a load balancer external to the cluster which can be shared by multiple deployments.

NOTE: External access points for a deployment may be set up automatically (as in the case of a `route`,
when the application is deployed using the WildFly operator) or may need to be set up manually.
Contributor


s/Wildfly/WildFly

This happens in several parts of the document.

Contributor Author


Fixed, thanks, Yeray!


=== Singleton or clustered deployment
Another issue concerns the nature of the EAP deployment itself.
An EAP deployment in OpenShift may be deployed using a non-HA server profile, or using an HA server profile. In the
Contributor Author

@rachmatowicz May 5, 2023


Clarification required: can a non-HA WildFly deployment installed by the WildFly operator be scaled to multiple (non-clustered) replicas? Or will it always be deployed as clustered?

Contributor


Hi @rachmatowicz , yes, the operator will scale your application image up/down. If your application image is not configured for HA, for example, if you have trimmed it and it doesn't configure JGroups, your deployment will be scaled up and you will get a set of nodes that will not form a cluster.

Contributor Author


@yersan thanks for confirming!


The `WildFly operator` can be used to deploy an EAP application in OpenShift and provide the following guarantees:

* persistent IP addresses for Pods

@tommaso-borgato Jun 7, 2023


I think here the correct statement is "persistent name for Pods"

basically what you get on a Pod named wildfly-build-from-server-0 with:

```
sh-4.4$ hostname
wildfly-build-from-server-0
```

If you scale deployments up and down, you can see that Pod IPs change whenever a Pod is destroyed and then re-created (this was observed on an OpenShift 4.13 cluster using EAP Operator 2.4.0, but it would most probably be the same with other cluster/operator versions):

scale to 3:

```
wildfly-build-from-server-0 Pod IP: 10.129.3.31 Host IP: 172.215.0.255
wildfly-build-from-server-1 Pod IP: 10.131.0.9 Host IP: 172.215.1.38
wildfly-build-from-server-2 Pod IP: 10.129.3.32 Host IP: 172.215.0.255
```

scale to 1 and then back to 3:

```
wildfly-build-from-server-0 Pod IP: 10.129.3.31 Host IP: 172.215.0.255 <-- This POD wasn't stopped and IP stays the same
wildfly-build-from-server-1 Pod IP: 10.131.0.10 Host IP: 172.215.1.38 <-- This POD was destroyed and recreated: IP changes
wildfly-build-from-server-2 Pod IP: 10.131.0.11 Host IP: 172.215.1.38 <-- This POD was destroyed and recreated: IP changes
```

scale to 0 and then to 4:

```
wildfly-build-from-server-0 Pod IP: 10.129.3.34 Host IP: 172.215.0.255 <-- This POD was destroyed and recreated: IP changes
wildfly-build-from-server-1 Pod IP: 10.131.0.12 Host IP: 172.215.1.38 <-- This POD was destroyed and recreated: IP changes
wildfly-build-from-server-2 Pod IP: 10.129.3.35 Host IP: 172.215.0.255 <-- This POD was destroyed and recreated: IP changes
wildfly-build-from-server-3 Pod IP: 10.131.0.13 Host IP: 172.215.1.38
```

Contributor Author


Yes, good point, and thanks for providing proof. The wording needs to be changed. I'm wondering how this feature/guarantee is described in the documentation for StatefulSets.

** affinity management (HA only)
** failover (HA only)

* the affinity management feature should work with all load balancers supported by OpenShift; this includes:

@tommaso-borgato Jun 7, 2023


What is the mechanism that is going to provide such a feature? Does it already exist? Is it cookie-based affinity?

Contributor Author

@rachmatowicz Jun 7, 2023


When using the EJB client with the http protocol, the wildfly-http-client library is used to provide the transport of invocations from the client to the server. This transport has a client-side part and a server-side part, and they work together to make sure that HTTP requests have their affinity managed correctly. The current implementation is broken, and the issue is addressed by WEJBHTTP-81.

https://operatorhub.io/operator/WildFly[WildFly Operator on operatorhub.io]

=== Scope
As mentioned earlier, the scope of this issue is limited to `external` EJB clients using the `http` protocol to


@rachmatowicz where are we going to test Helm-based deployments? In some different RFE?


* external to the OpenShift cluster
* the transport protocol is http (or its encrypted variant https)


Contributor Author


Hi Tommaso,
The first link describes the two options available for looking up beans in a scenario where one OpenShift EAP deployment (represented by a cluster) needs to communicate with another OpenShift EAP deployment (represented by another cluster), so both clusters are deployed on OpenShift. This is really an issue for EAP7-2063.
So I would say the type of client envisaged for EAP7-2062 was a standalone client only, and not a client in an EAP deployment on a cluster external to the OpenShift cluster.

Contributor Author


However, that client should be able to look up the proxy for the EJB bean in the OpenShift EAP deployment either by using JNDI/HTTP or by creating the proxy programmatically.

protocols, NodePorts are used to gain access. For protocols based on HTTP, routes or ingress controllers
are available. A route is a per-deployment load balancer which is accessible external to the cluster.
An ingress controller is a load balancer external to the cluster which can be shared by multiple deployments.
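To make the ingress option concrete, here is a minimal sketch of an Ingress resource routing external HTTP traffic to the operator-generated `eap-server-loadbalancer` Service. The host and resource names are illustrative only; a real cluster may also require an `ingressClassName` and TLS settings:

```yaml
# Illustrative sketch only: a shared ingress controller routing external
# HTTP traffic to the per-deployment Service (hostname is hypothetical).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eap-server-ingress
spec:
  rules:
    - host: eap-server.apps.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: eap-server-loadbalancer
                port:
                  number: 8080
```

Unlike a Route, a single ingress controller instance can serve rules for many deployments, which is what makes it shareable across applications.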


@tommaso-borgato Jul 13, 2023


Reading https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/7.4/html/getting_started_with_jboss_eap_for_openshift_container_platform/eap-operator-for-automating-application-deployment-on-openshift_default#jakarta-enterprise-beans-remoting-on-openshift_default we can find details about the "headless service" created by the EAP Operator, which is not exposed outside of the cluster and is hence not relevant for EAP7-2062.

On the other hand, the EAP Operator creates a Route like the following:

```yaml
kind: Route
apiVersion: route.openshift.io/v1
metadata:
  annotations:
    openshift.io/host.generated: 'true'
    wildfly.org/wildfly-server-generation: '1'
  resourceVersion: '27583543'
  name: eap-server-route
  uid: 4613b2f8-7ad6-4db7-8338-1109de1c1630
  creationTimestamp: '2023-07-13T10:24:49Z'
  managedFields:
    - manager: openshift-router
      operation: Update
      apiVersion: route.openshift.io/v1
      time: '2023-07-13T10:24:49Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:status':
          'f:ingress': {}
      subresource: status
    - manager: wildfly-operator
      operation: Update
      apiVersion: route.openshift.io/v1
      time: '2023-07-13T10:24:49Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:wildfly.org/wildfly-server-generation': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.openshift.io/runtime': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"1d9d5b27-aa2b-4c9d-b267-cfdfc0c1ebf5"}': {}
        'f:spec':
          'f:port':
            .: {}
            'f:targetPort': {}
          'f:to':
            'f:kind': {}
            'f:name': {}
            'f:weight': {}
          'f:wildcardPolicy': {}
  namespace: <OPENSHIFT_NAMESPACE>
  ownerReferences:
    - apiVersion: wildfly.org/v1alpha1
      kind: WildFlyServer
      name: eap-server
      uid: 1d9d5b27-aa2b-4c9d-b267-cfdfc0c1ebf5
      controller: true
      blockOwnerDeletion: true
  labels:
    app.kubernetes.io/managed-by: eap-operator
    app.kubernetes.io/name: eap-server
    app.openshift.io/runtime: eap
spec:
  host: eap-server-route-<OPENSHIFT_NAMESPACE>.apps.<OPENSHIFT_CLUSTER>
  to:
    kind: Service
    name: eap-server-loadbalancer
    weight: 100
  port:
    targetPort: http
  wildcardPolicy: None
status:
  ingress:
    - host: eap-server-route-<OPENSHIFT_NAMESPACE>.apps.<OPENSHIFT_CLUSTER>
      routerName: default
      conditions:
        - type: Admitted
          status: 'True'
          lastTransitionTime: '2023-07-13T10:24:49Z'
      wildcardPolicy: None
      routerCanonicalHostname: router-default.apps.<OPENSHIFT_CLUSTER>
```

This Route is the only entry point to the EAP cluster for clients outside the OpenShift cluster.

For this reason, unless we plan to ask for modifications to the EAP Operator, we should proceed relying on this Route for EAP7-2062.

This Route, in turn, forwards requests to the following Service:

```yaml
kind: Service
apiVersion: v1
metadata:
  annotations:
    wildfly.org/wildfly-server-generation: '1'
  resourceVersion: '27583531'
  name: eap-server-loadbalancer
  uid: d37c718f-45a0-4743-a0b5-511492800e63
  creationTimestamp: '2023-07-13T10:24:49Z'
  managedFields:
    - manager: wildfly-operator
      operation: Update
      apiVersion: v1
      time: '2023-07-13T10:24:49Z'
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:wildfly.org/wildfly-server-generation': {}
          'f:labels':
            .: {}
            'f:app.kubernetes.io/managed-by': {}
            'f:app.kubernetes.io/name': {}
            'f:app.openshift.io/runtime': {}
            'f:wildfly.org/operated-by-headless': {}
            'f:wildfly.org/operated-by-loadbalancer': {}
          'f:ownerReferences':
            .: {}
            'k:{"uid":"1d9d5b27-aa2b-4c9d-b267-cfdfc0c1ebf5"}': {}
        'f:spec':
          'f:internalTrafficPolicy': {}
          'f:ports':
            .: {}
            'k:{"port":8080,"protocol":"TCP"}':
              .: {}
              'f:name': {}
              'f:port': {}
              'f:protocol': {}
              'f:targetPort': {}
          'f:selector': {}
          'f:sessionAffinity': {}
          'f:type': {}
  namespace: <OPENSHIFT_NAMESPACE>
  ownerReferences:
    - apiVersion: wildfly.org/v1alpha1
      kind: WildFlyServer
      name: eap-server
      uid: 1d9d5b27-aa2b-4c9d-b267-cfdfc0c1ebf5
      controller: true
      blockOwnerDeletion: true
  labels:
    app.kubernetes.io/managed-by: eap-operator
    app.kubernetes.io/name: eap-server
    app.openshift.io/runtime: eap
    wildfly.org/operated-by-headless: active
    wildfly.org/operated-by-loadbalancer: active
spec:
  clusterIP: 172.122.69.0
  ipFamilies:
    - IPv4
  ports:
    - name: http
      protocol: TCP
      port: 8080
      targetPort: 8080
  internalTrafficPolicy: Cluster
  clusterIPs:
    - 172.122.69.0
  type: ClusterIP
  ipFamilyPolicy: SingleStack
  sessionAffinity: None
  selector:
    app.kubernetes.io/managed-by: eap-operator
    app.kubernetes.io/name: eap-server
    app.openshift.io/runtime: eap
    wildfly.org/operated-by-headless: active
    wildfly.org/operated-by-loadbalancer: active
status:
  loadBalancer: {}
```

This Service is backed by an Endpoints object like the following, supposing we have a 6-node cluster:

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    endpoints.kubernetes.io/last-change-trigger-time: "2023-07-13T10:25:29Z"
  creationTimestamp: "2023-07-13T10:24:49Z"
  labels:
    app.kubernetes.io/managed-by: eap-operator
    app.kubernetes.io/name: eap-server
    app.openshift.io/runtime: eap
    wildfly.org/operated-by-headless: active
    wildfly.org/operated-by-loadbalancer: active
  name: eap-server-loadbalancer
  namespace: <OPENSHIFT_NAMESPACE>
  resourceVersion: "27583968"
  uid: 50196306-a098-44a6-8492-518b46c3d7b9
subsets:
- addresses:
  - ip: 10.128.3.95
    nodeName: tborgato-yhvd-wsjm5-worker-0-xkd9q
    targetRef:
      kind: Pod
      name: eap-server-2
      namespace: <OPENSHIFT_NAMESPACE>
      uid: c90369a8-d65a-4d22-9a50-e2cd2d5c0dfe
  - ip: 10.128.3.96
    nodeName: tborgato-yhvd-wsjm5-worker-0-xkd9q
    targetRef:
      kind: Pod
      name: eap-server-5
      namespace: <OPENSHIFT_NAMESPACE>
      uid: 09bfa6b6-46ad-4253-8ec5-b7898900030a
  - ip: 10.129.3.109
    nodeName: tborgato-yhvd-wsjm5-worker-0-jg72t
    targetRef:
      kind: Pod
      name: eap-server-1
      namespace: <OPENSHIFT_NAMESPACE>
      uid: e30ebbb5-472c-459f-ad42-87da30c4f45d
  - ip: 10.129.3.110
    nodeName: tborgato-yhvd-wsjm5-worker-0-jg72t
    targetRef:
      kind: Pod
      name: eap-server-4
      namespace: <OPENSHIFT_NAMESPACE>
      uid: cefa4e48-75d1-451f-a468-ccbad745c275
  - ip: 10.131.1.149
    nodeName: tborgato-yhvd-wsjm5-worker-0-jqklq
    targetRef:
      kind: Pod
      name: eap-server-0
      namespace: <OPENSHIFT_NAMESPACE>
      uid: 324c1425-dbe1-407e-a421-0d986ba3bf27
  - ip: 10.131.1.150
    nodeName: tborgato-yhvd-wsjm5-worker-0-jqklq
    targetRef:
      kind: Pod
      name: eap-server-3
      namespace: <OPENSHIFT_NAMESPACE>
      uid: c3cf24c4-877c-47db-921a-801511c77c97
  ports:
  - name: http
    port: 8080
    protocol: TCP
```

If you think this is correct, could you please incorporate this info into this AD?

A next step could be giving instructions on how to set up the EJB call, e.g. how to set java.naming.provider.url using that Route (e.g. http://eap-server-route-<OPENSHIFT_NAMESPACE>.apps.<OPENSHIFT_CLUSTER>/)
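As a hedged sketch of what such client instructions might look like, a standalone client could point JNDI at the Route host via a jndi.properties file. The initial context factory below is the one provided by the WildFly naming client, and `/wildfly-services` is the context path used by wildfly-http-client for HTTP invocations; the user/password values are placeholders, and none of this has been verified against the setup in this thread:

```properties
# Illustrative jndi.properties for a standalone external EJB client.
# The provider URL uses the operator-generated Route host shown above;
# replace the angle-bracket placeholders with real values.
java.naming.factory.initial=org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url=http://eap-server-route-<OPENSHIFT_NAMESPACE>.apps.<OPENSHIFT_CLUSTER>/wildfly-services
java.naming.security.principal=<USER>
java.naming.security.credentials=<PASSWORD>
```

With these properties on the classpath, the client would create an `InitialContext` and look up the bean proxy as usual; all invocations then travel over HTTP through the Route and the load-balancer Service.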

@tommaso-borgato Jul 13, 2023


In light of our latest call, and the changes @rhusar is making to the Operator, we might want to elaborate a little on this topic: IIUC, @rhusar is going to modify the load balancer used, replacing the Route with an Ingress


3 participants