Fix flushing channel data when pty was allocated #85

Closed · wants to merge 2 commits

Conversation

@yousong (Contributor) commented Nov 1, 2019

The first change is to flush data before closing channel->writefd. When a
pty is used, readfd and writefd are the same descriptor, so reading and
flushing the remaining data becomes impossible if readfd is closed first.

The second change actually flushes the data with a new wrapper function,
flush_msg_channel_data(). At least with the OpenWrt 4.19.79 kernel, a
single read() fetches at most 4095 bytes of data, so it takes multiple
reads to drain the buffer.

The last change is in send_msg_channel_data(): the channel fd SHUT_RD
logic was removed. As noted above, a read() returning less than requested
does not imply EOF.

Ref: https://bugs.openwrt.org/index.php?do=details&task_id=1814
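
For illustration only, here is a rough sketch of the kind of drain loop the second change implies. The name flush_msg_channel_data() comes from the patch, but everything below (the stand-in channel struct, the helper name, the buffer handling) is an assumption and not dropbear's actual code; in the real server the data read here would be sent as SSH_MSG_CHANNEL_DATA packets.

/* Rough sketch, not dropbear code: drain whatever is still buffered on a
 * channel's read descriptor before the fd gets closed. A single read() may
 * return at most one buffer's worth (4095 bytes was observed on the OpenWrt
 * 4.19.79 kernel), so keep reading until nothing is left or the flow-control
 * window is exhausted. */
#include <errno.h>
#include <unistd.h>

struct channel_sketch {            /* stand-in for dropbear's struct Channel */
	int readfd;
	unsigned int transwindow;  /* bytes the peer will still accept */
};

static void flush_channel_data_sketch(struct channel_sketch *channel)
{
	char buf[4096];

	while (channel->readfd >= 0 && channel->transwindow > 0) {
		size_t want = sizeof(buf);
		if (want > channel->transwindow) {
			want = channel->transwindow;
		}
		ssize_t len = read(channel->readfd, buf, want);
		if (len < 0 && errno == EINTR) {
			continue;          /* interrupted, retry */
		}
		if (len <= 0) {
			break;             /* EOF, EAGAIN or error: nothing left */
		}
		/* the real code would wrap buf[0..len) into an
		 * SSH_MSG_CHANNEL_DATA packet here */
		channel->transwindow -= (unsigned int)len;
	}
}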

@mkj (Owner) left a comment

Thanks for looking at this. I'll see if I can fix this a different way; it might be possible to remove the check_close() logic entirely and just rely on the file descriptors being closed.

common-channel.c Outdated
			|| (channel->type->check_close && close_allowed)) {
		close_chan_fd(channel, channel->writefd, SHUT_WR);
	}

	/* Special handling for flushing read data after an exit. We
	   read regardless of whether the select FD was set,
	   and if there isn't data available, the channel will get closed. */
	if (channel->flushing) {
		TRACE(("might send data, flushing"))
		if (channel->readfd >= 0 && channel->transwindow > 0) {
@mkj (Owner)

Because it's checking for available channel->transwindow, a larger file might still get truncated?

@yousong (Contributor, Author)

It depends on how much data the pty can buffer and how small the transwindow can be. I tested with an 800000-byte file and it worked. But the correct approach would be to drain the buffer by waiting both for readability of readfd and for availability of transwindow.
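
As a sketch of that "wait for both conditions" idea: dropbear's real main loop multiplexes fds with select(); poll() is used below only to keep the example self-contained, and the helper name is made up.

/* Illustrative only: keep draining the pty as long as there is both data to
 * read and room in the SSH flow-control window. If the window runs out, a
 * real server would return to its main loop and wait for the peer's
 * SSH_MSG_CHANNEL_WINDOW_ADJUST before continuing. */
#include <poll.h>
#include <unistd.h>

static void drain_readfd_sketch(int readfd, unsigned int *transwindow)
{
	char buf[4096];

	while (*transwindow > 0) {
		struct pollfd pfd = { .fd = readfd, .events = POLLIN };
		if (poll(&pfd, 1, 0) <= 0) {
			break;     /* nothing readable right now */
		}

		size_t want = sizeof(buf);
		if (want > *transwindow) {
			want = *transwindow;
		}
		ssize_t len = read(readfd, buf, want);
		if (len <= 0) {
			break;     /* EOF or error: the buffer is drained */
		}
		*transwindow -= (unsigned int)len;
	}
}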

@yousong (Contributor, Author) commented Nov 5, 2019

> Thanks for looking at this. I'll see if I can fix this a different way; it might be possible to remove the check_close() logic entirely and just rely on the file descriptors being closed.

Feel free to close this when you start working on the new fix :)

@yousong (Contributor, Author) commented Nov 19, 2019

Hi @mkj, I added a second change to check for availability of transwindow before doing the flush.

@yousong (Contributor, Author) commented Dec 12, 2019

Friendly ping.

@mkj (Owner) commented Oct 29, 2021

I've finally fixed this in 8e6f73e and a7ef149.

There is now a small test suite of channel behaviours that will catch this (and also the regression I caused when I first tried to fix it).

Thanks for your report and patch.
