b2b_logic: safety check on old_entity->peer #2999

Open · wants to merge 1 commit into master
Conversation

@dunst0 commented Jan 20, 2023

Summary

This PR aims to avoid a crash in an edge case when using the b2b_logic module.

Details

It was noticed that during a b2b_bridge_request call, the function b2bl_bridge_msg dereferences old_entity->peer (for example in old_entity->peer->peer == old_entity) without first checking whether it is NULL.
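
For illustration, a minimal sketch of the kind of dereference involved (simplified and paraphrased, not the actual b2bl_bridge_msg() body):

/* Illustrative sketch only, not the real b2bl_bridge_msg() code.
 * The comparison dereferences old_entity->peer, which faults when
 * that pointer is NULL, as in the core dump below. */
if (old_entity->peer->peer == old_entity) {
	/* ... bridge handling continues ... */
}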

The following snippets are from a core dump; sensitive information has been removed.

(gdb) p tuple->bridge_entities
$9 = {0x7f51f5657480, 0x7f51f28577f8, 0x0}
(gdb) p tuple->bridge_entities[1]
$12 = (b2bl_entity_id_t *) 0x7f51f28577f8
(gdb) p *tuple->bridge_entities[1]
$13 = {scenario_id = {s = 0x7f51f28578ea "callee", len = 6}, key = {
    s = 0x7f51f28578d0 "B2B.234.4994781.1674134122", len = 26}, to_uri = {
    s = 0x7f51f28578f0 "sip:[email protected]", len = 50}, proxy = {s = 0x0, len = 0}, from_uri = {
    s = 0x7f51f2857922 "sip:[email protected]", len = 22}, from_dname = {s = 0x0, len = 0}, hdrs = {s = 0x0, len = 0}, adv_contact = {s = 0x0, len = 0}, dlginfo = 0x0, rejected = 0,
  disconnected = 1, state = 0, no = 1, sdp_type = 0, type = B2B_CLIENT, stats = {key = {s = 0x0, len = 0}, start_time = 13926551, setup_time = 0, call_time = 0}, peer = 0x0, prev = 0x0, next = 0x0}
(gdb) p *tuple->bridge_entities[0]
$14 = {scenario_id = {s = 0x7f51f565756d "caller", len = 6}, key = {
    s = 0x7f51f5657558 "B2B.323.40.1674134122", len = 21}, to_uri = {
    s = 0x7f51f5657573 "sip:[email protected]", len = 50}, proxy = {s = 0x0, len = 0}, from_uri = {
    s = 0x7f51f56575a5 "sip:[email protected]", len = 22}, from_dname = {s = 0x0, len = 0}, hdrs = {s = 0x0, len = 0}, adv_contact = {s = 0x0, len = 0}, dlginfo = 0x7f51f2df6428, rejected = 0,
  disconnected = 0, state = 0, no = 0, sdp_type = 0, type = B2B_SERVER, stats = {key = {s = 0x0, len = 0}, start_time = 13926551, setup_time = 0, call_time = 0}, peer = 0x7f51f4fdfd48, prev = 0x0, next = 0x0}

It seems that this was actually happening on a tuple that was already marked for deletion (to_del = 1).

(gdb) p *tuple
$18 = {
  id = 0,
  hash_index = 234,
  key = 0x7f51f63036c8,
  scenario_id = 0x7f51f5062cf0,
  init_sdp = {
    s = 0x0,
    len = 0
  },
  state = -2,
  req_routeid = 2,
  reply_routeid = -1,
  servers = {0x7f51f5657480, 0x7f51f4fdfd48, 0x0},
  clients = {0x0, 0x0, 0x0},
  bridge_entities = {0x7f51f5657480, 0x7f51f28577f8, 0x0},
  bridge_initiator = 0x0,
  bridge_flags = 0,
  to_del = 1,
  extra_headers = 0x7f51f23354b0,
  next = 0x0,
  prev = 0x0,
  lifetime = 13926587,
  local_contact = {
    s = 0x7f51f1b56140 "sip:127.0.0.1:5080",
    len = 22
  },
  sdp = {
    s = 0x0,
    len = 0
  },
  b1_sdp = {
    s = 0x0,
    len = 0
  },
  db_flag = 2,
  repl_flag = 0,
  vals = 0x0,
  tracer = {
    f = 0x0,
    param = 0x0,
    f_freep = 0x0
  },
  cbf = 0x0,
  cb_mask = 0,
  cb_param = 0x0
}

Solution

This only avoids the crash by not dereferencing old_entity->peer when it is NULL; the pointer is set to NULL right after that anyway.
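
For clarity, a hedged sketch of the kind of check this adds (simplified, not the exact diff from the commit; the surrounding code is only indicated by comments):

/* Sketch of the safety check described above. */
if (old_entity->peer != NULL && old_entity->peer->peer == old_entity) {
	/* ... existing peer handling ... */
}
/* old_entity->peer gets set to NULL shortly afterwards in any case */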

Compatibility

It shouldn't break anything, but by the time execution gets to this point the behaviour may already be undefined.

Closing issues

@rvlad-patrascu (Member)

Hi @dunst0,

Although a sanity check on that pointer does make sense, the tuple seems to be already in an inconsistent state. The peer field should not be NULL if the entity is still in the bridge_entities array for example. Also, the pointer in bridge_entities[1] is missing from the clients array etc. So the issue here is probably deeper and we're only seeing the effects through this crash.
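
To make those invariants concrete, here is a rough, hypothetical consistency check (not an existing b2b_logic function; it assumes the module's b2bl_tuple_t and b2bl_entity_id_t definitions are in scope, and the hard-coded array size of 3 matches the dumps above):

/* Hypothetical helpers, only to illustrate the invariants described above. */
static int entity_in_array(b2bl_entity_id_t *e, b2bl_entity_id_t **arr, int n)
{
	int i;
	for (i = 0; i < n; i++)
		if (arr[i] == e)
			return 1;
	return 0;
}

static int tuple_looks_consistent(b2bl_tuple_t *tuple)
{
	int i;
	for (i = 0; i < 3 && tuple->bridge_entities[i]; i++) {
		b2bl_entity_id_t *e = tuple->bridge_entities[i];
		if (e->peer == NULL)                        /* a bridged entity should have a peer */
			return 0;
		if (!entity_in_array(e, tuple->servers, 3)  /* and should also appear in the */
		 && !entity_in_array(e, tuple->clients, 3)) /* servers[] or clients[] array  */
			return 0;
	}
	return 1;
}

The tuple in the dumps above fails both conditions for bridge_entities[1].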

Can you post the bt full from this core dump? You can email this to [email protected] if you want to avoid showing sensitive info.

And a few extra questions:

  • Does the crash reproduce easily or have you noticed a specific traffic pattern that causes this?
  • Have you tested the fix suggested here? If so, did you see any other issues after avoiding this crash?

@dunst0 (Author) commented Feb 1, 2023

Hi @rvlad-patrascu,

So the issue here is probably deeper and we're only seeing the effects through this crash.

That was my fear as well, but I wanted to avoid the crash here.

A crash at the same line could be reproduced locally, but it is not clear whether it has the same root cause as this one.
The suggested fix was tested against the locally reproduced case and did prevent the crash, though I am not sure whether any other issues became visible after avoiding it. I will collect the information and share it with you via your email address.

@razvancrainea (Member)

I believe this crash has been fixed by ae098f7. Can you please confirm and close this?
