Buffer overflow due to timeout with Netty + Jersey #3500

Open
jerseyrobot opened this issue Feb 6, 2017 · 13 comments · May be fixed by #4329

Comments

@jerseyrobot
Contributor

When returning a large response using the Netty connector, a "Buffer overflow" error occurs. Switching from jersey-container-netty-http to jersey-container-grizzly2-http resolves the issue, so it appears to be specific to the Netty connector.

Testing with a 521k response and Jersey 2.25.1, the buffer overflow occurs in org.glassfish.jersey.netty.connector.internal.JerseyChunkedInput#write at line 224. The relevant code is:

private void write(Provider<ByteBuffer> bufferSupplier) throws IOException {

    checkClosed();

    try {
        boolean queued = queue.offer(bufferSupplier.get(), WRITE_TIMEOUT, TimeUnit.MILLISECONDS);
        if (!queued) {
            throw new IOException("Buffer overflow.");
        }

    } catch (InterruptedException e) {
        throw new IOException(e);
    }
}

The queue has a capacity of 8 buffers, and the offer timeout is 1000 ms, which seems low.

Relevant portion of the stack trace:

Caused by: java.io.IOException: Buffer overflow.
	at org.glassfish.jersey.netty.connector.internal.JerseyChunkedInput.write(JerseyChunkedInput.java:224)
	at org.glassfish.jersey.netty.connector.internal.JerseyChunkedInput.write(JerseyChunkedInput.java:204)
	at org.glassfish.jersey.message.internal.CommittingOutputStream.write(CommittingOutputStream.java:229)
	at org.glassfish.jersey.message.internal.WriterInterceptorExecutor$UnCloseableOutputStream.write(WriterInterceptorExecutor.java:299)
	at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)

Environment

OpenJDK 1.8.0_121

Affected Versions

[2.25.1]

@jerseyrobot
Contributor Author

@glassfishrobot Commented
Reported by jekh

@jerseyrobot
Contributor Author

@glassfishrobot Commented
@pavelbucek said:
Thanks for your report.

Do you have any opinion on what reasonable defaults would be here?

Also, the proper solution to this one is to make these settings configurable.

@jerseyrobot
Contributor Author

@glassfishrobot Commented
jekh said:
Would it make sense to have an unlimited wait when offering? Users would use a javax.ws.rs.core.StreamingOutput to avoid storing large amounts of data in memory, right?
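For context, a minimal sketch of the streaming pattern being referred to; the resource path and payload size are made-up values for illustration:

import java.io.OutputStream;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.StreamingOutput;

@Path("/large")
public class LargeResponseResource {

    @GET
    public StreamingOutput largeBody() {
        // Write the payload incrementally instead of building it in memory;
        // each write eventually lands in JerseyChunkedInput's bounded queue.
        return (OutputStream out) -> {
            byte[] chunk = new byte[8192];
            for (int i = 0; i < 1024; i++) { // ~8 MB total
                out.write(chunk);
            }
            out.flush();
        };
    }
}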

@jerseyrobot
Contributor Author

@glassfishrobot Commented
This issue was imported from java.net JIRA JERSEY-3228

@jerseyrobot
Contributor Author

@brettkail-wk Commented
To fix, I think HttpChunkedInput needs to be flushed so that it attempts to start reading input immediately. Otherwise, all writes are buffered until the response is fully written, which overflows the buffer if enough data is sent.

--- server/src/main/java/org/glassfish/jersey/netty/httpserver/NettyResponseWriter.java
+++ server/src/main/java/org/glassfish/jersey/netty/httpserver/NettyResponseWriter.java
@@ -144,7 +144,7 @@ class NettyResponseWriter implements ContainerResponseWriter {
             JerseyChunkedInput jerseyChunkedInput = new JerseyChunkedInput(ctx.channel());
 
             if (HttpUtil.isTransferEncodingChunked(response)) {
-                ctx.write(new HttpChunkedInput(jerseyChunkedInput)).addListener(FLUSH_FUTURE);
+                ctx.writeAndFlush(new HttpChunkedInput(jerseyChunkedInput));
             } else {
                 ctx.write(new HttpChunkedInput(jerseyChunkedInput)).addListener(FLUSH_FUTURE);
             }

This only works if the client is reading at least one chunk per 10 seconds (perhaps reasonable). To fully fix it, JerseyChunkedInput presumably needs to be reworked to block indefinitely when there is no space, while allowing ChunkedInput.close to wake the writer and throw an IOException. (The existing close implements both OutputStream.close and ChunkedInput.close, but it appears to be implemented as the latter, to abort the output stream. In that case, the removeLast/add appears to race with concurrent writes.)
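For illustration, a rough sketch of what such a rework could look like; this is not the actual Jersey code, and the class and field names (queue, closed) are assumptions standing in for JerseyChunkedInput's internals:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.util.concurrent.LinkedBlockingDeque;
import java.util.concurrent.TimeUnit;

class BlockingChunkQueueSketch {

    private final LinkedBlockingDeque<ByteBuffer> queue = new LinkedBlockingDeque<>(8);
    private volatile boolean closed;

    // Blocks until space is available, but notices a concurrent close() promptly.
    void write(ByteBuffer buffer) throws IOException {
        if (closed) {
            throw new IOException("Stream already closed.");
        }
        try {
            // Poll in short intervals instead of a single bounded offer, so close() can
            // wake the writer and turn into an IOException rather than "Buffer overflow".
            while (!queue.offer(buffer, 100, TimeUnit.MILLISECONDS)) {
                if (closed) {
                    throw new IOException("Stream already closed.");
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IOException(e);
        }
    }

    // ChunkedInput.close sets the flag; a writer stuck in the loop above fails fast
    // on its next poll instead of racing on removeLast/add.
    void close() {
        closed = true;
    }
}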

@jerseyrobot
Contributor Author

@jpommerening Commented
@brettkail-wk, @pavelbucek: Is there some progress here? I'd volunteer to help, if someone can get me started (and if that's at all acceptable).

Brett, did you come up with a workaround that does not require patching jersey-container-netty-http?

@jerseyrobot
Contributor Author

@brettkail-wk Commented
@jpommerening No, I don't think there is a workaround. In the meantime, my company has agreed to the CLA, so I'll submit a PR with a variation of the fix above.

@jerseyrobot
Contributor Author

@jpommerening Commented
@brettkail-wk Thanks for the quick update! That's some promising news! :)

@rvowles

rvowles commented Apr 17, 2019

This is now a blocking issue for me; is there a workaround? I am having to drop and filter data because I can't get the buffer to stop overflowing.

@brettkail-wk

Not as far as I know. jersey/jersey#3791 would need to be applied. I'm no longer pursuing a fix since we stopped using Netty with Jersey.

@rvowles

rvowles commented May 8, 2019

This affected us so badly we have now done the same and swapped to Grizzly for Jersey. I will keep watch on this ticket as swapping back (which is desirable as we also use gRPC) is fairly easy at the moment.
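For anyone stuck in the same situation, the swap roughly amounts to replacing the jersey-container-netty-http dependency with jersey-container-grizzly2-http and starting the same application under Grizzly. A minimal sketch, with the URI and resource package as placeholder values:

import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;
import org.glassfish.jersey.grizzly2.httpserver.GrizzlyHttpServerFactory;
import org.glassfish.jersey.server.ResourceConfig;

public class GrizzlyMain {

    public static void main(String[] args) throws IOException {
        // Same ResourceConfig as before, served by Grizzly instead of the Netty container.
        ResourceConfig config = new ResourceConfig().packages("com.example.resources");
        HttpServer server = GrizzlyHttpServerFactory.createHttpServer(
                URI.create("http://localhost:8080/"), config);

        System.out.println("Server running at http://localhost:8080/ -- press Enter to stop.");
        System.in.read();
        server.shutdownNow();
    }
}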

@jansupol
Contributor

jansupol commented May 9, 2019

Unfortunately, any PR made against the original repository needs to be made again against this repository under the new conditions.

010gvr added a commit to 010gvr/jersey that referenced this issue Nov 6, 2019
This commit contains a rewritten implementation of Netty Jersey server and client with the following major changes:
1) Consume `ByteBuf`s from `HttpContent` as-is for processing.
2) Fixes major leak and timeout issues reported in eclipse-ee4j#3500, eclipse-ee4j#3568 and eclipse-ee4j#4286
3) Fixes a bug on reading 0-byte or 1-byte JSON content

Signed-off-by: Venkat Ganesh [email protected]
010gvr added a commit to 010gvr/jersey that referenced this issue Nov 6, 2019
This commit contains a rewritten implementation of Netty Jersey server and client with the following major changes:
1) Consume `ByteBuf`s from `HttpContent` as-is for processing, which reduces a lot of overhead.
2) Fixes a bug on reading 0-byte or 1-byte JSON content
3) Add three new configurable `ChunkedInput` buffer properties through `jersey.ci.capacity`, `jersey.ci.read.timeout` and `jersey.ci.write.timeout`
4) Add configurable payload size through `jersey.max.http.request.entitySizeMb`, which defaults to 50 MB

This change should fix some of the long-pending bug reports in eclipse-ee4j#3500, eclipse-ee4j#3568 and eclipse-ee4j#4286. This may also resolve eclipse-ee4j#4285

Signed-off-by: Venkat Ganesh [email protected]
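If those properties land as described, wiring them up would presumably look something like the sketch below; the property names are taken from the commit message above, while the values and the resource package are made-up examples:

import org.glassfish.jersey.server.ResourceConfig;

public class NettyContainerConfigSketch {

    static ResourceConfig configure() {
        // Property names come from the commit message above; the values are arbitrary examples.
        return new ResourceConfig()
                .packages("com.example.resources")                  // hypothetical resource package
                .property("jersey.ci.capacity", 64)                 // chunk queue capacity
                .property("jersey.ci.write.timeout", 30_000)        // ms to wait for queue space
                .property("jersey.ci.read.timeout", 30_000)         // ms to wait for the next chunk
                .property("jersey.max.http.request.entitySizeMb", 100);
    }
}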
010gvr pushed a commit to 010gvr/jersey that referenced this issue Nov 30, 2019
Current unit tests only validate Netty's `JerseyClientHandler` against a Grizzly HTTP server. This commit adds test cases to validate the same using Netty's `JerseyServerHandler`.

Also adding extra test cases to effectively validate the bug fixes for eclipse-ee4j#3500 and eclipse-ee4j#3568 (that is, these new test cases will fail if run against a code base older than eclipse-ee4j#4312)

Fixes eclipse-ee4j#3500 and eclipse-ee4j#3568

Signed-off-by: Venkat Ganesh <[email protected]>
010gvr pushed a commit to 010gvr/jersey that referenced this issue Nov 30, 2019
Current unit tests only validate Netty's `JerseyClientHandler` against a Grizzly HTTP server. This commit adds test cases to validate the same using Netty's `JerseyServerHandler` via `NettyTestContainerFactory`.

Also adding extra test cases to effectively validate the bug fixes for eclipse-ee4j#3500 and eclipse-ee4j#3568 (that is, these new test cases will fail if run against a code base older than eclipse-ee4j#4312)

Fixes eclipse-ee4j#3500 and eclipse-ee4j#3568

Signed-off-by: Venkat Ganesh <[email protected]>
010gvr linked a pull request Nov 30, 2019 that will close this issue