[multistream] Segmentation fault, re-negotiation/DTLS alert race condition #2761

Closed

cameronlucas3 opened this issue Aug 24, 2021 · 5 comments

@cameronlucas3 (Contributor)

There appears to be a race condition that can occur when a client sends a new SDP and then immediately closes its peer connection (a hang-up triggered by a DTLS alert).

This causes a segmentation fault due to a null pointer access of handle->pc at:

for(mi=0; mi<g_hash_table_size(handle->pc->media); mi++) {

I am able to reliably reproduce it by adding a sleep(1) immediately before that loop and then forcing our own web client to re-negotiate and close its PeerConnection ~100ms later; the timing is sketched below.
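The timing can be illustrated with a standalone program (the fake_* types and thread names are stand-ins, not actual Janus code; the sleep(1) and ~100 ms delay mirror the reproduction described above, and building it requires GLib and pthreads):

```c
/* Standalone sketch of the race (hypothetical fake_* types, not Janus code):
 * one thread plays the SDP/re-negotiation handler, the other the DTLS-alert
 * cleanup that frees the PeerConnection, as janus_ice_webrtc_free does.
 * Build with: gcc race.c $(pkg-config --cflags --libs glib-2.0) -pthread */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <pthread.h>
#include <glib.h>

typedef struct fake_pc { GHashTable *media; } fake_pc;
typedef struct fake_handle { fake_pc *pc; } fake_handle;

static fake_handle handle;

/* Request handler traversing the media map (the crashing loop) */
static void *sdp_handler(void *arg) {
	sleep(1);	/* widens the race window, as in the reproduction */
	/* Crashes if the cleanup thread already set handle.pc to NULL */
	for(guint mi = 0; mi < g_hash_table_size(handle.pc->media); mi++)
		printf("inspecting m-line %u\n", mi);
	return NULL;
}

/* DTLS-alert hangup wiping the PeerConnection */
static void *dtls_alert(void *arg) {
	usleep(100000);	/* client hangs up ~100 ms after re-negotiating */
	g_hash_table_destroy(handle.pc->media);
	free(handle.pc);
	handle.pc = NULL;
	return NULL;
}

int main(void) {
	handle.pc = calloc(1, sizeof(fake_pc));
	handle.pc->media = g_hash_table_new(NULL, NULL);
	pthread_t t1, t2;
	pthread_create(&t1, NULL, sdp_handler, NULL);
	pthread_create(&t2, NULL, dtls_alert, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}
```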

Full stack trace from AddressSanitizer

@atoppi (Member) commented Aug 24, 2021

Thanks for the detailed report, @cameronlucas3.
The lines do not match: can you please share a stack trace from a recent commit?

@atoppi (Member) commented Aug 24, 2021

Oops, I was inspecting the wrong file! My bad, sorry for the noise.

@lminiero (Member) commented Sep 1, 2021

I guess the obvious, and simplest, choice would be wrapping that simulcast traversal check in handle->mutex, since that's what janus_ice_webrtc_free (which sets handle->pc to NULL) is using internally. This should ensure no race condition occurs for that specific variable in that circumstance.
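A minimal sketch of that approach, reusing the stand-in types from the reproduction sketch above (in Janus itself the lock would presumably be handle->mutex via the janus_mutex_lock/janus_mutex_unlock wrappers; plain pthread calls are used here):

```c
/* Same stand-in structs as in the sketch above, now with a mutex playing the
 * role of handle->mutex; both the traversal and the cleanup take it, and the
 * traversal re-checks the PeerConnection once the lock is held. */
typedef struct fake_handle {
	fake_pc *pc;
	pthread_mutex_t mutex;
} fake_handle;

static fake_handle handle = { .pc = NULL, .mutex = PTHREAD_MUTEX_INITIALIZER };

static void *sdp_handler(void *arg) {
	sleep(1);
	pthread_mutex_lock(&handle.mutex);
	if(handle.pc != NULL) {	/* PC may have been wiped by the alert already */
		for(guint mi = 0; mi < g_hash_table_size(handle.pc->media); mi++)
			printf("inspecting m-line %u\n", mi);
	}
	pthread_mutex_unlock(&handle.mutex);
	return NULL;
}

static void *dtls_alert(void *arg) {
	usleep(100000);
	/* the cleanup path (janus_ice_webrtc_free) takes the same lock before freeing */
	pthread_mutex_lock(&handle.mutex);
	g_hash_table_destroy(handle.pc->media);
	free(handle.pc);
	handle.pc = NULL;
	pthread_mutex_unlock(&handle.mutex);
	return NULL;
}
```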

It might be better to extend the usage of that mutex to more parts of that request management, though, as just fixing it there may still result in some weird state in case a conflict occurs (we start handling a new SDP, an alert wipes everything away, and we then keep on handling the request anyway after the mutex is released). I'll have a look at the code to see if/where this can be done.
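To illustrate the wider-scope concern, a rough sketch (hypothetical helper, not an actual Janus function) of holding the lock across the whole renegotiation step and bailing out early if the WebRTC resources are already gone:

```c
/* Hypothetical request-level handler: take the lock for the whole update and
 * drop the request if the DTLS alert already wiped the PeerConnection, so we
 * don't keep processing a stale SDP after releasing the lock. */
static int handle_renegotiation(fake_handle *h, const char *sdp) {
	(void)sdp;
	pthread_mutex_lock(&h->mutex);
	if(h->pc == NULL) {
		pthread_mutex_unlock(&h->mutex);
		return -1;	/* hangup already happened, nothing to renegotiate */
	}
	/* ... parse the SDP, traverse h->pc->media, update the answer ... */
	pthread_mutex_unlock(&h->mutex);
	return 0;
}
```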

@lminiero (Member)

@cameronlucas3 apologies if this took a while, but I had a stab at a few multistream issues so I addressed this too. Could you let me know if you can still replicate this, or if we can consider this fixed?

@lminiero (Member)

I'll assume it's fixed. Please let us know if that's not the case.
