No ws/udp service definition for JVB ? #11

Open
grenzr opened this issue Dec 12, 2022 · 10 comments

grenzr commented Dec 12, 2022

Hey all,

Great work on the operator so far, I like where this is going and it is almost in a place where I can use it in my cluster.
However, there seems to be an omission of a JVB Service definition that would let clients reach the UDP and websocket ports on that component.

For example: https://github.com/jitsi-contrib/jitsi-helm/blob/main/templates/jvb/service.yaml
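Something along these lines is what I have in mind (the name, namespace and selector label below are placeholders, not necessarily what the operator would generate):

apiVersion: v1
kind: Service
metadata:
  name: gnz-jitsi-jvb        # placeholder name
  namespace: jitsi
spec:
  selector:
    app: jvb                 # assumed pod label, adjust to whatever the operator sets
  ports:
    - name: media
      protocol: UDP
      port: 30100
      targetPort: 30100
    - name: colibri-ws
      protocol: TCP
      port: 9090
      targetPort: 9090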

I have set my ingress up with nginx, and have added the server-snippet into the jitsi CRD ingress annotations like you used to have previously:

obj.Annotations["nginx.ingress.kubernetes.io/server-snippet"] = fmt.Sprintf(`add_header X-Jitsi-Shard shard;
location = /xmpp-websocket {
proxy_pass http://%s-prosody.%s:5280/xmpp-websocket;
proxy_http_version 1.1;
proxy_set_header Connection "upgrade";
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Host %s;
proxy_set_header X-Forwarded-For $remote_addr;
tcp_nodelay on;
}
location ~ ^/colibri-ws/([a-zA-Z0-9-\.]+)/(.*) {
proxy_pass http://$1:9090/colibri-ws/$1/$2$is_args$args;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
tcp_nodelay on;
}`, jitsi.Name, jitsi.Namespace, jitsi.Spec.Domain)

My CRD in my Rancher cluster so far:

apiVersion: apps.jit.si/v1alpha1
kind: Jitsi
metadata:
  name: gnz-jitsi
spec:
  domain: example.com
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: cert-manager-webhook-dnsimple-production
      external-dns.alpha.kubernetes.io/hostname: example.com
      kubernetes.io/ingress.class: nginx-jitsi
      nginx.org/proxy-read-timeout: "3600"
      nginx.org/proxy-send-timeout: "3600"
      nginx.org/server-snippets: |
        add_header X-Jitsi-Shard shard;
        location = /xmpp-websocket {
            proxy_pass http://gnz-jitsi-prosody.jitsi.svc.cluster.local:5280/xmpp-websocket;
            proxy_http_version 1.1;

            proxy_set_header Connection "upgrade";
            proxy_set_header Upgrade $http_upgrade;

            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            tcp_nodelay on;
        }
        location ~ ^/colibri-ws/([a-zA-Z0-9-\.]+)/(.*) {
          proxy_pass http://$1:9090/colibri-ws/$1/$2$is_args$args;
          proxy_http_version 1.1;
          proxy_set_header Upgrade $http_upgrade;
          proxy_set_header Connection "upgrade";
          tcp_nodelay on;
        }
    enabled: true
    tls: true
  jibri:
    enabled: true
    replicas: 1
  jvb:
    ports:
      udp: 30100
    gracefulShutdown: true
    strategy:
      replicas: 1
      type: static
  region: europe
  timezone: Europe/London
  variables:
    ENABLE_BREAKOUT_ROOMS: "1"
    ENABLE_XMPP_WEBSOCKET: "1"
    NGINX_RESOLVER: rke2-coredns-rke2-coredns.kube-system.svc.cluster.local

I can now reach the xmpp-websocket OK, but the colibri-ws one currently isn't reachable, because port 9090 isn't exposed by a JVB service, and neither is the UDP port.

I'm interested in finding out how you have achieved this so far?

Thanks, Ryan

hrenard commented Dec 12, 2022

Hey Ryan! I'm glad you're interested.

About the server snippet: we added it because conferences were being dropped, but I recently realised that the real issue was the default proxy timeout. (It's the same as the default heartbeat interval of meet over the websocket, which means that with a little delay the connection would drop.) So I changed the proxy timeout and removed the snippet to avoid duplicating work with upstream.

For now, we don't require a JVB service because the UDP port is bound via the pod's hostPort, and for colibri-ws the client puts the cluster-internal pod IP in the URL, which nginx (in the web pod) then uses to connect directly to the jvb pod.
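Concretely, the relevant part of the jvb pod spec looks roughly like this (a sketch, with values assumed rather than copied from the operator):

containers:
  - name: jvb
    ports:
      - containerPort: 10000
        hostPort: 10000      # media goes straight to the node's IP, no Service involved
        protocol: UDP
      - containerPort: 9090  # colibri-ws, reached over the pod IP by nginx in the web pod
        protocol: TCP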

Do you have network policies that could block traffic that isn't explicitly allowed?

grenzr commented Dec 12, 2022

Hey @hrenard thanks for the quick reply!

Yeah, I had heard about the proxy-timeout issues with nginx and websocket proxying and needing the proxy annotations set; however, my nginx ingress controller only seems to respond to nginx.org/ annotations (maybe this is a recent change?).

Thanks for the tip on the colibri-ws nginx config in the web container, I hadn't noticed it being present there. OK, so I shouldn't need a server snippet in my ingress annotation any more, only the proxy-timeout stuff, and I can let the web container handle the proxying.

I'm not sure why yet, but I can see status code 101s (switching protocols) for the ws upgrade to /xmpp-websocket in my ingress nginx and web container logs, yet in the prosody logs I just see connections being opened and immediately closed, so the web UI currently keeps forcing me to reconnect endlessly.

I am just running this from inside my home network at the moment, so I don't have any network policies blocking anything afaik.

As for the UDP port, yes, I see I've got it bound to a hostPort, which is a private IP, so do you expose this to clients on the internet at all, or are you just relying on the websocket connection? I'm guessing the latter?

hrenard commented Dec 12, 2022

Which ingress controller are you using? I only added annotations for this one: https://github.com/kubernetes/ingress-nginx.

When you say home network, do you mean behind a NAT or firewall? Because for now, the JVB discovers its public IP through STUN. So if your router doesn't let traffic on port 10000/UDP through to the JVB, it's likely that meet refreshes the UI because you can't connect to the JVB, and in the same process it closes the XMPP websocket.
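If STUN discovery doesn't fit a setup, the docker-jitsi-meet images can also be told which address to advertise through environment variables, which the CRD exposes via the variables block. A rough, unverified sketch (the exact variable names depend on the image version, so check the image docs before relying on them):

variables:
  # One of these pins the advertised address instead of relying on STUN discovery;
  # which one applies depends on the docker-jitsi-meet version in use (values are example IPs).
  JVB_ADVERTISE_IPS: "203.0.113.10"
  DOCKER_HOST_ADDRESS: "203.0.113.10"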

grenzr commented Dec 14, 2022

Hi again - sorry for the late reply - been mad busy!

Well, it turns out I was using nginx's ingress controller off the Rancher marketplace, not the kubernetes ingress-nginx (https://artifacthub.io/packages/helm/ingress-nginx/ingress-nginx); annoyingly they have the same name but behave a bit differently. So I've switched to the right one now. The jitsi ingress now has a public IP, which is the same one being detected by STUN in jvb, and it gets the appropriate proxy config in nginx.conf without any additional mods to the ingress annotations.

I also use BGP in MetalLB to advertise the jitsi nginx ingress IP to my OPNsense router, so it seems to register the public IP address there OK. That sorts out the routing side, and then I use the firewall to ensure only 443/tcp and 30100/udp are allowed in on the WAN. Because you're using hostPort for UDP, I thought it might be better to use a UDP ingress with nginx to proxy the request to the jvb, but I haven't got that working quite yet.
I'm curious why you've used hostPort and not some form of UDP ingress? Am I missing something?
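For reference, what I was trying is ingress-nginx's UDP passthrough, which is configured through a ConfigMap handed to the controller with --udp-services-configmap. A rough sketch (it assumes a jvb Service exists to point at, which is exactly the missing piece):

apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  # format: "externalPort": "namespace/serviceName:servicePort"
  "30100": "jitsi/gnz-jitsi-jvb:30100"   # assumed Service name

The 30100/UDP port also has to be added to the controller's own Service before traffic actually reaches it.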

I'm going to have another crack at it again tonight so will keep you posted.

hrenard commented Dec 14, 2022

We could also add compatibility for other ingress controllers...

Using an ingress for jvb is complicated because you'd need one ingress per jvb, since prosody tells the client which jvb to go to. So autoscaling would become harder. It would also put a lot of stress on the ingress and add an extra hop, for no gain. And finally, k8s is for now quite bad at handling UDP traffic in general.

A better solution would be to use a Service in NodePort mode, and we'll probably do that; we just need to know the nodePort for advertising. But, mostly for historical reasons (customers with firewall whitelists), we kept the default jvb port (10000), which isn't in the node port range by default, hence hostPort 😄.
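Roughly what I mean by the NodePort variant (names, labels and the chosen nodePort below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: jitsi-jvb
  namespace: jitsi
spec:
  type: NodePort
  selector:
    app: jvb                 # assumed pod label
  ports:
    - name: media
      protocol: UDP
      port: 10000
      targetPort: 10000
      nodePort: 30100        # must sit in the node port range (30000-32767 by default)

The nodePort value (30100 here) is what the bridge would then need to advertise to clients, which is the part the operator has to know about.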

grenzr commented Dec 18, 2022

Hi again - thanks for your feedback. I've decided to switch back to the 'default' simple jitsi manifest and work forward from there again.

So I'm using port 10000/udp on jvb now, and I've set up a temporary port forward on my router to allow public IP access to the JVB hostPort. That seems reachable from the internet OK, though if I restart jvb the node IP changes and I have to update the rule on the router, so it's not ideal, but I'm just doing it this way for basic testing first of all.

The web ingress seems to be working OK up until I need a colibri-ws connection, which 403s after a few seconds.
Looking at the JVB logs, it looks like it's trying to establish ICE sessions but times out, which then expires the conference endpoint, making subsequent colibri-ws connection attempts 403 because it no longer exists.

JVB 2022-12-18 14:28:13.559 INFO: [87] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0] EndpointConnectionStatusMonitor.start#58: Starting connection status monitor
JVB 2022-12-18 14:28:13.559 INFO: [87] Videobridge.createConference#282: create_conf, id=8d73beae83531b0 meetingId=0f521be0-a777-432a-96ab-13f2602d328b
JVB 2022-12-18 14:28:13.560 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.gatherCandidates#647: Gathering candidates for component stream-749fe1f3.RTP.
JVB 2022-12-18 14:28:13.562 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3] Endpoint.<init>#328: Created new endpoint isUsingSourceNames=true, iceControlling=true
JVB 2022-12-18 14:28:13.564 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] Agent.gatherCandidates#647: Gathering candidates for component stream-891a9ce9.RTP.
JVB 2022-12-18 14:28:13.566 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9] Endpoint.<init>#328: Created new endpoint isUsingSourceNames=true, iceControlling=true
JVB 2022-12-18 14:28:14.263 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0] DtlsTransport.setSetupAttribute#120: The remote side is acting as DTLS client, we'll act as server
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn] IceTransport.startConnectivityEstablishment#199: Starting the Agent without remote candidates.
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.startConnectivityEstablishment#736: Start ICE connectivity establishment.
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.initCheckLists#972: Init checklist for stream stream-749fe1f3
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.setState#946: ICE state changed from Waiting to Running.
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn] IceTransport.iceStateChanged#342: ICE state changed old=Waiting new=Running
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.startConnectivityEstablishment#758: Trigger checks for pairs that were received before running state
JVB 2022-12-18 14:28:14.264 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.triggerCheck#1737: Add peer CandidatePair with new reflexive address to checkList: CandidatePair (State=Frozen Priority=7962116751041232895):
	LocalCandidate=candidate:1 1 udp 2130706431 10.42.105.235 10000 typ host
	RemoteCandidate=candidate:10000 1 udp 1853824767 192.168.1.76 63935 typ prflx
JVB 2022-12-18 14:28:14.265 INFO: [73] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.startChecks#147: Start connectivity checks.
JVB 2022-12-18 14:28:14.291 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#649: Pair succeeded: 10.42.105.235:10000/udp/host -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.292 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn name=stream-749fe1f3 componentId=1] ComponentSocket.addAuthorizedAddress#99: Adding allowed address: 192.168.1.76:63935/udp
JVB 2022-12-18 14:28:14.292 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#658: Pair validated: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.292 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#649: Pair succeeded: 10.42.105.235:10000/udp/host -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.292 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] DefaultNominator.strategyNominateFirstHostOrReflexiveValid#268: Nominate (first highest valid): <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP)
JVB 2022-12-18 14:28:14.292 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#658: Pair validated: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.292 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.nominate#1810: verify if nominated pair answer again
JVB 2022-12-18 14:28:14.292 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] DefaultNominator.strategyNominateFirstHostOrReflexiveValid#268: Nominate (first highest valid): <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP)
JVB 2022-12-18 14:28:14.293 WARNING: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn name=stream-749fe1f3 componentId=1] MergingDatagramSocket.initializeActive#599: Active socket already initialized.
JVB 2022-12-18 14:28:14.293 INFO: [93] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#727: IsControlling: true USE-CANDIDATE:false.
JVB 2022-12-18 14:28:14.293 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#727: IsControlling: true USE-CANDIDATE:false.
JVB 2022-12-18 14:28:14.311 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#649: Pair succeeded: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.311 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#658: Pair validated: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#727: IsControlling: true USE-CANDIDATE:true.
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] ConnectivityCheckClient.processSuccessResponse#742: Nomination confirmed for pair: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP).
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn name=stream-749fe1f3] CheckList.handleNominationConfirmed#406: Selected pair for stream stream-749fe1f3.RTP: <my public ip>:10000/udp/srflx -> 192.168.1.76:63935/udp/prflx (stream-749fe1f3.RTP)
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.checkListStatesUpdated#1901: CheckList of stream stream-749fe1f3 is COMPLETED
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.setState#946: ICE state changed from Running to Completed.
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn] IceTransport.iceStateChanged#342: ICE state changed old=Running new=Completed
JVB 2022-12-18 14:28:14.312 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0] Endpoint$setupIceTransport$2.connected#375: ICE connected
JVB 2022-12-18 14:28:14.312 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0] DtlsTransport.startDtlsHandshake#102: Starting DTLS handshake, role=org.jitsi.nlj.dtls.DtlsServer@36f2e5a2
JVB 2022-12-18 14:28:14.313 INFO: [98] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.logCandTypes#2009: Harvester used for selected pair for stream-749fe1f3.RTP: srflx
JVB 2022-12-18 14:28:14.313 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0] TlsServerImpl.notifyClientVersion#199: Negotiated DTLS version DTLS 1.2
JVB 2022-12-18 14:28:14.324 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0] Endpoint$setupDtlsTransport$3.handshakeComplete#419: DTLS handshake complete
JVB 2022-12-18 14:28:14.335 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] DtlsTransport.setSetupAttribute#120: The remote side is acting as DTLS client, we'll act as server
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr] IceTransport.startConnectivityEstablishment#199: Starting the Agent without remote candidates.
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] Agent.startConnectivityEstablishment#736: Start ICE connectivity establishment.
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] Agent.initCheckLists#972: Init checklist for stream stream-891a9ce9
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] Agent.setState#946: ICE state changed from Waiting to Running.
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr] IceTransport.iceStateChanged#342: ICE state changed old=Waiting new=Running
JVB 2022-12-18 14:28:14.336 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] ConnectivityCheckClient.startChecks#147: Start connectivity checks.
JVB 2022-12-18 14:28:14.348 INFO: [50] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0] Conference.recentSpeakersChanged#467: Recent speakers changed: [749fe1f3], dominant speaker changed: true silence:false
JVB 2022-12-18 14:28:17.312 INFO: [94] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn ufrag=e1e4f1gkiq58jn] Agent.setState#946: ICE state changed from Completed to Terminated.
JVB 2022-12-18 14:28:17.313 INFO: [94] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=749fe1f3 stats_id=Vergie-IO0 local_ufrag=e1e4f1gkiq58jn] IceTransport.iceStateChanged#342: ICE state changed old=Completed new=Terminated
JVB 2022-12-18 14:31:06.552 INFO: [23] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] Endpoint.shouldExpire#935: Endpoint's ICE connection has neither failed nor connected after PT2M52.988417S expiring	
JVB 2022-12-18 14:31:06.552 INFO: [23] VideobridgeExpireThread.expire#157: Expiring endpoint 891a9ce9
JVB 2022-12-18 14:31:06.552 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] AbstractEndpoint.expire#289: Expiring.
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] Endpoint.expire#1112: Spent 0 seconds oversending
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] Transceiver.teardown#353: Tearing down
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] RtpReceiverImpl.tearDown#347: Tearing down
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] RtpSenderImpl.tearDown#318: Tearing down
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] DtlsTransport.stop#178: Stopping
JVB 2022-12-18 14:31:06.553 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr] IceTransport.stop#252: Stopping
JVB 2022-12-18 14:31:06.555 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr] Agent.setState#946: ICE state changed from Running to Terminated.
JVB 2022-12-18 14:31:06.555 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4 local_ufrag=5ces21gkiq58jr ufrag=5ces21gkiq58jr name=stream-891a9ce9 componentId=1] MergingDatagramSocket.close#142: Closing.
JVB 2022-12-18 14:31:06.555 INFO: [104] [confId=8d73beae83531b0 [email protected] meeting_id=0f521be0 epId=891a9ce9 stats_id=Larue-sP4] Endpoint.expire#1130: Expired.
JVB 2022-12-18 14:31:07.926 WARNING: [30] ColibriWebSocketServlet.createWebSocket#187: Received request for a nonexistent endpoint: 891a9ce9 (conference 8d73beae83531b0)
JVB 2022-12-18 14:31:09.948 WARNING: [32] ColibriWebSocketServlet.createWebSocket#187: Received request for a nonexistent endpoint: 891a9ce9 (conference 8d73beae83531b0)
JVB 2022-12-18 14:31:14.019 WARNING: [30] ColibriWebSocketServlet.createWebSocket#187: Received request for a nonexistent endpoint: 891a9ce9 (conference 8d73beae83531b0)
JVB 2022-12-18 14:31:22.081 WARNING: [32] ColibriWebSocketServlet.createWebSocket#187: Received request for a nonexistent endpoint: 891a9ce9 (conference 8d73beae83531b0)
JVB 2022-12-18 14:31:38.142 WARNING: [33] ColibriWebSocketServlet.createWebSocket#187: Received request for a nonexistent endpoint: 891a9ce9 (conference 8d73beae83531b0)

Having googled around extensively, I'm thinking at this point I'm going to need coturn deployed, and some turncredentials added to prosody, to make the p2p stuff work.

Still trying to get this working, but I do feel like I'm getting nearer to a working state.

grenzr commented Dec 19, 2022

I managed to get the prosody external_services plugin working with the default jitsi STUN server, and to enable useStunTurn in config.js via a ConfigMap:

---
kind: ConfigMap
apiVersion: v1
metadata:
  namespace: jitsi
  name: jitsi-custom-config
data:
  custom-config.js: |
    config.p2p.enabled = true;
    config.useStunTurn = true;
    config.p2p.useStunTurn = true;
    config.p2p.stunServers = [
      { urls: 'stun:meet-jit-si-turnrelay.jitsi.net:443' }
    ];

and then add this to the jitsi CRD manifest:

  web:
    customConfigCM:
      name: jitsi-custom-config

Now I just need to get this config into the prosody.cfg.lua as well:

modules_enabled = {
     ....
     ....
     "external_services";
}

external_services = {
     { type = "stun", host = "meet-jit-si-turnrelay.jitsi.net", port = 443, transport = "udp" }
};

I'm guessing this is done through:

  prosody:
    customProsodyConfigCM:
      name: jitsi-prosody-config

(see https://github.com/jitsi-contrib/jitsi-kubernetes-operator/blob/master/controllers/prosody.go#L178)
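i.e. something like the ConfigMap below; the key name is my guess (I'd check prosody.go for what the operator actually mounts), and if the operator replaces the templated file wholesale, the rest of the generated config would need to go in there too:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: jitsi
  name: jitsi-prosody-config
data:
  # key name is a guess, not confirmed against the operator
  jitsi-meet.cfg.lua: |
    -- ... rest of the templated config ...
    external_services = {
      { type = "stun", host = "meet-jit-si-turnrelay.jitsi.net", port = 443, transport = "udp" }
    };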

That's the bit I'm working on now, and then I'll make my own coturn deployment and use that instead of the jitsi one.

I'm pretty pleased I've managed to get something fairly reliable working, and I think I've got what I need now to finish the job. Thanks for your help though, and great operator - well done :)

hrenard commented Dec 19, 2022

Hey, glad you're making it work!
We focused on Jitsi for teams, so p2p is of no use and TURN is required for reliability.
Your use case is interesting; I hope we can cover it better in the future, but the upstream docker-compose setup doesn't cover it yet, so it's quite some work.

grenzr commented Dec 19, 2022

Yep, thank you - I just added the customProsodyConfigCM and it's working now. I notice the current implementation completely overrides the jitsi-meet.cfg.lua file with the content of the file in the ConfigMap. I guess this is OK, as I used what was already there as a template, but I suppose changes upstream won't be reflected there until you remove the ConfigMap and let the operator create a new templated file for you first.

Are you interested in a helm chart being created for the operator yet, or are you waiting until the operator has reached a bit more maturity?

hrenard commented Dec 19, 2022

Yes, to minimize the maintenance burden, we chose to mostly use upstream configurations. But we added some escape hatches like this for experienced admins.
I built the CI to allow us further customizations, but it's work 😄

I'm not against a helm chart, but I don't believe it would be very useful, as we can install and update the operator with a one-liner:

kubectl apply -f https://raw.githubusercontent.com/jitsi-contrib/jitsi-kubernetes-operator/master/deploy/jitsi-operator.yaml
